
Note

This page was generated from docs/tutorials/02_neural_network_classifier_and_regressor.ipynb.

Neural Network Classifier & Regressor

In this tutorial we show how the NeuralNetworkClassifier and NeuralNetworkRegressor are used. Both take as input a (quantum) NeuralNetwork and leverage it in a specific context. For convenience, the pre-configured Variational Quantum Classifier (VQC) and Variational Quantum Regressor (VQR) are also provided. The tutorial is structured as follows:

  1. Classification

    • Classification with an OpflowQNN

    • Classification with a CircuitQNN

    • Variational Quantum Classifier (VQC)

  2. Regression

    • Regression with an OpflowQNN

    • Variational Quantum Regressor (VQR)

[1]:
import numpy as np
import matplotlib.pyplot as plt

from qiskit import Aer, QuantumCircuit
from qiskit.opflow import Z, I, StateFn
from qiskit.utils import QuantumInstance
from qiskit.circuit import Parameter
from qiskit.circuit.library import RealAmplitudes, ZZFeatureMap
from qiskit.algorithms.optimizers import COBYLA, L_BFGS_B

from qiskit_machine_learning.neural_networks import TwoLayerQNN, CircuitQNN
from qiskit_machine_learning.algorithms.classifiers import NeuralNetworkClassifier, VQC
from qiskit_machine_learning.algorithms.regressors import NeuralNetworkRegressor, VQR

from typing import Union

from qiskit_machine_learning.exceptions import QiskitMachineLearningError

from IPython.display import clear_output
[2]:
quantum_instance = QuantumInstance(Aer.get_backend('aer_simulator'), shots=1024)

Classification

We prepare a simple classification dataset to illustrate the following algorithms.

[3]:
num_inputs = 2
num_samples = 20
X = 2*np.random.rand(num_samples, num_inputs) - 1
y01 = 1*(np.sum(X, axis=1) >= 0)  # in { 0,  1}
y = 2*y01-1                       # in {-1, +1}
y_one_hot = np.zeros((num_samples, 2))
for i in range(num_samples):
    y_one_hot[i, y01[i]] = 1

for x, y_target in zip(X, y):
    if y_target == 1:
        plt.plot(x[0], x[1], 'bo')
    else:
        plt.plot(x[0], x[1], 'go')
plt.plot([-1, 1], [1, -1], '--', color='black')
plt.show()
../_images/tutorials_02_neural_network_classifier_and_regressor_4_0.png

Classification with an OpflowQNN

First we show how an OpflowQNN can be used for classification within a NeuralNetworkClassifier. In this context, the OpflowQNN is expected to return one-dimensional output in \([-1, +1]\). This only works for binary classification, and we assign the two classes to \(\{-1, +1\}\). For convenience, we use the TwoLayerQNN, a special type of OpflowQNN defined via a feature map and an ansatz.

[4]:
# construct QNN
opflow_qnn = TwoLayerQNN(num_inputs, quantum_instance=quantum_instance)
[5]:
# QNN maps inputs to [-1, +1]
opflow_qnn.forward(X[0, :], np.random.rand(opflow_qnn.num_weights))
[5]:
array([[0.47265625]])
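
As a side note, the default construction in cell [4] can also be written out explicitly by passing a feature map and an ansatz. The sketch below uses a ZZFeatureMap with a RealAmplitudes ansatz simply because both are already imported above; treat that pairing as an assumption of the sketch rather than a documented statement of the library defaults.

# Hedged sketch: an explicit TwoLayerQNN construction with a hand-picked feature map
# and ansatz. The ZZFeatureMap/RealAmplitudes pairing is an assumption here and is
# not claimed to be identical to the library defaults.
explicit_feature_map = ZZFeatureMap(num_inputs)
explicit_ansatz = RealAmplitudes(num_inputs)
explicit_qnn = TwoLayerQNN(num_inputs,
                           feature_map=explicit_feature_map,
                           ansatz=explicit_ansatz,
                           quantum_instance=quantum_instance)

# like the default-constructed QNN, this network returns a single expectation value
print(explicit_qnn.output_shape)  # (1,)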

We will add a callback function called callback_graph. This will be called for each iteration of the optimizer and will be passed two parameters: the current weights and the value of the objective function at those weights. For our function, we append the value of the objective function to an array so we can plot iteration versus objective function value and update the graph with each iteration. However, you can do whatever you want with a callback function, as long as it accepts the two parameters mentioned.

[6]:
# callback function that draws a live plot when the .fit() method is called
def callback_graph(weights, obj_func_eval):
    clear_output(wait=True)
    objective_func_vals.append(obj_func_eval)
    plt.title("Objective function value against iteration")
    plt.xlabel("Iteration")
    plt.ylabel("Objective function value")
    plt.plot(range(len(objective_func_vals)), objective_func_vals)
    plt.show()
[7]:
# construct neural network classifier
opflow_classifier = NeuralNetworkClassifier(opflow_qnn, optimizer=COBYLA(), callback=callback_graph)
[8]:
# create empty array for callback to store evaluations of the objective function
objective_func_vals = []
plt.rcParams["figure.figsize"] = (12, 6)

# fit classifier to data
opflow_classifier.fit(X, y)

# return to default figsize
plt.rcParams["figure.figsize"] = (6, 4)

# score classifier
opflow_classifier.score(X, y)
../_images/tutorials_02_neural_network_classifier_and_regressor_11_0.png
[8]:
0.5
[9]:
# evaluate data points
y_predict = opflow_classifier.predict(X)

# plot results
# red == wrongly classified
for x, y_target, y_p in zip(X, y, y_predict):
    if y_target == 1:
        plt.plot(x[0], x[1], 'bo')
    else:
        plt.plot(x[0], x[1], 'go')
    if y_target != y_p:
        plt.scatter(x[0], x[1], s=200, facecolors='none', edgecolors='r', linewidths=2)
plt.plot([-1, 1], [1, -1], '--', color='black')
plt.show()
../_images/tutorials_02_neural_network_classifier_and_regressor_12_0.png

Classification with a CircuitQNN

Next we show how a CircuitQNN can be used for classification within a NeuralNetworkClassifier. In this context, the CircuitQNN is expected to return a \(d\)-dimensional probability vector as output, where \(d\) denotes the number of classes. Sampling from a QuantumCircuit automatically results in a probability distribution, so we only need to define a mapping from the measured bitstrings to the different classes. For binary classification we use the parity mapping.

[10]:
# construct feature map
feature_map = ZZFeatureMap(num_inputs)

# construct ansatz
ansatz = RealAmplitudes(num_inputs, reps=1)

# construct quantum circuit
qc = QuantumCircuit(num_inputs)
qc.append(feature_map, range(num_inputs))
qc.append(ansatz, range(num_inputs))
qc.decompose().draw(output='mpl')
[10]:
../_images/tutorials_02_neural_network_classifier_and_regressor_14_0.png
[11]:
# parity maps bitstrings to 0 or 1
def parity(x):
    return '{:b}'.format(x).count('1') % 2
output_shape = 2  # corresponds to the number of classes, possible outcomes of the (parity) mapping.
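To see concretely what the interpret function does, the small check below (purely illustrative) prints the class assigned to each of the four measurement outcomes of the two-qubit circuit:

# illustrative check: the parity mapping on the four 2-qubit measurement outcomes
for outcome in range(4):
    print('{:02b} -> class {}'.format(outcome, parity(outcome)))
# prints: 00 -> class 0, 01 -> class 1, 10 -> class 1, 11 -> class 0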
[12]:
# construct QNN
circuit_qnn = CircuitQNN(circuit=qc,
                         input_params=feature_map.parameters,
                         weight_params=ansatz.parameters,
                         interpret=parity,
                         output_shape=output_shape,
                         quantum_instance=quantum_instance)
[13]:
# construct classifier
circuit_classifier = NeuralNetworkClassifier(neural_network=circuit_qnn,
                                             optimizer=COBYLA(),
                                             callback=callback_graph)
[14]:
# create empty array for callback to store evaluations of the objective function
objective_func_vals = []
plt.rcParams["figure.figsize"] = (12, 6)

# fit classifier to data
circuit_classifier.fit(X, y01)

# return to default figsize
plt.rcParams["figure.figsize"] = (6, 4)

# score classifier
circuit_classifier.score(X, y01)
../_images/tutorials_02_neural_network_classifier_and_regressor_18_0.png
[14]:
0.65
[15]:
# evaluate data points
y_predict = circuit_classifier.predict(X)

# plot results
# red == wrongly classified
for x, y_target, y_p in zip(X, y01, y_predict):
    if y_target == 1:
        plt.plot(x[0], x[1], 'bo')
    else:
        plt.plot(x[0], x[1], 'go')
    if y_target != y_p:
        plt.scatter(x[0], x[1], s=200, facecolors='none', edgecolors='r', linewidths=2)
plt.plot([-1, 1], [1, -1], '--', color='black')
plt.show()
../_images/tutorials_02_neural_network_classifier_and_regressor_19_0.png

Variational Quantum Classifier (VQC)

VQCCircuitQNN 와 마찬가지로 NeuralNetworkClassifier 의 특별한 변종이다. 이는 비트열에서 분류로의 매핑을 위한 패리티 매핑(또는 여러 클래스들의 확장)을 제공하는데, 이는 확률 벡터를 생성하며 원-핫 인코딩으로써 해석된다. 기본적으로 이는 원-핫 인코딩된 형식으로 지정된 레이블을 예상하는 CrossEntropyLoss 기능을 적용하여 해당 형식으로 예측 결과를 반환한다.

[16]:
# construct feature map, ansatz, and optimizer
feature_map = ZZFeatureMap(num_inputs)
ansatz = RealAmplitudes(num_inputs, reps=1)

# construct variational quantum classifier
vqc = VQC(feature_map=feature_map,
          ansatz=ansatz,
          loss='cross_entropy',
          optimizer=COBYLA(),
          quantum_instance=quantum_instance,
          callback=callback_graph)
[17]:
# create empty array for callback to store evaluations of the objective function
objective_func_vals = []
plt.rcParams["figure.figsize"] = (12, 6)

# fit classifier to data
vqc.fit(X, y_one_hot)

# return to default figsize
plt.rcParams["figure.figsize"] = (6, 4)

# score classifier
vqc.score(X, y_one_hot)
../_images/tutorials_02_neural_network_classifier_and_regressor_22_0.png
[17]:
0.6
[18]:
# evaluate data points
y_predict = vqc.predict(X)

# plot results
# red == wrongly classified
for x, y_target, y_p in zip(X, y_one_hot, y_predict):
    if y_target[0] == 1:
        plt.plot(x[0], x[1], 'bo')
    else:
        plt.plot(x[0], x[1], 'go')
    if not np.all(y_target == y_p):
        plt.scatter(x[0], x[1], s=200, facecolors='none', edgecolors='r', linewidths=2)
plt.plot([-1, 1], [1, -1], '--', color='black')
plt.show()
../_images/tutorials_02_neural_network_classifier_and_regressor_23_0.png

Regression

We prepare a simple regression dataset to illustrate the following algorithms.

[19]:
num_samples = 20
eps = 0.2
lb, ub = -np.pi, np.pi
X_ = np.linspace(lb, ub, num=50).reshape(50, 1)
f = lambda x: np.sin(x)

X = (ub - lb)*np.random.rand(num_samples, 1) + lb
y = f(X[:,0]) + eps*(2*np.random.rand(num_samples)-1)

plt.plot(X_, f(X_), 'r--')
plt.plot(X, y, 'bo')
plt.show()
../_images/tutorials_02_neural_network_classifier_and_regressor_25_0.png

Regression with an OpflowQNN

Here we restrict ourselves to regression with an OpflowQNN that returns values in \([-1, +1]\). More complex and multi-dimensional models could also be constructed, for instance based on a CircuitQNN, but that exceeds the scope of this tutorial.

[20]:
# construct simple feature map
param_x = Parameter('x')
feature_map = QuantumCircuit(1, name='fm')
feature_map.ry(param_x, 0)

# construct simple ansatz
param_y = Parameter('y')
ansatz = QuantumCircuit(1, name='vf')
ansatz.ry(param_y, 0)

# construct QNN
regression_opflow_qnn = TwoLayerQNN(1, feature_map, ansatz, quantum_instance=quantum_instance)
[21]:
# construct the regressor from the neural network
regressor = NeuralNetworkRegressor(neural_network=regression_opflow_qnn,
                                   loss='l2',
                                   optimizer=L_BFGS_B(),
                                   callback=callback_graph)
[22]:
# create empty array for callback to store evaluations of the objective function
objective_func_vals = []
plt.rcParams["figure.figsize"] = (12, 6)

# fit to data
regressor.fit(X, y)

# return to default figsize
plt.rcParams["figure.figsize"] = (6, 4)

# score the result
regressor.score(X, y)
../_images/tutorials_02_neural_network_classifier_and_regressor_29_0.png
[22]:
0.9696567935175576
[23]:
# plot target function
plt.plot(X_, f(X_), 'r--')

# plot data
plt.plot(X, y, 'bo')

# plot fitted line
y_ = regressor.predict(X_)
plt.plot(X_, y_, 'g-')
plt.show()
../_images/tutorials_02_neural_network_classifier_and_regressor_30_0.png

Regression with the Variational Quantum Regressor (VQR)

Similar to the VQC for classification, the VQR is a special variant of the NeuralNetworkRegressor with an OpflowQNN. By default it considers the L2Loss function to minimize the mean squared error between predictions and targets.
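
For reference, the squared-error objective minimized over the ansatz weights \(\theta\) has the standard form below (the exact normalization used internally is left aside here):

\[ L(\theta) = \sum_{i} \big( f(x_i; \theta) - y_i \big)^2, \]

where \(f(x_i; \theta)\) is the output of the underlying QNN for input \(x_i\).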

[24]:
vqr = VQR(feature_map=feature_map,
          ansatz=ansatz,
          optimizer=L_BFGS_B(),
          quantum_instance=quantum_instance,
          callback=callback_graph)
[25]:
# create empty array for callback to store evaluations of the objective function
objective_func_vals = []
plt.rcParams["figure.figsize"] = (12, 6)

# fit regressor
vqr.fit(X, y)

# return to default figsize
plt.rcParams["figure.figsize"] = (6, 4)

# score result
vqr.score(X, y)
../_images/tutorials_02_neural_network_classifier_and_regressor_33_0.png
[25]:
0.9684356876095139
[26]:
# plot target function
plt.plot(X_, f(X_), 'r--')

# plot data
plt.plot(X, y, 'bo')

# plot fitted line
y_ = vqr.predict(X_)
plt.plot(X_, y_, 'g-')
plt.show()
../_images/tutorials_02_neural_network_classifier_and_regressor_34_0.png
[27]:
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright

Version Information

Qiskit Software            Version
qiskit-terra               0.19.0.dev0+803bd0d
qiskit-aer                 0.8.2
qiskit-machine-learning    0.3.0

System information
Python         3.9.6 (default, Aug 18 2021, 15:44:49) [MSC v.1916 64 bit (AMD64)]
OS             Windows
CPUs           4
Memory (Gb)    11.83804702758789

Sun Aug 29 01:09:16 2021 Hora de verano romance

This code is a part of Qiskit

© Copyright IBM 2017, 2021.

This code is licensed under the Apache License, Version 2.0. You may
obtain a copy of this license in the LICENSE.txt file in the root directory
of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.

Any modifications or derivative works of this code must retain this
copyright notice, and modified files need to carry a notice indicating
that they have been altered from the originals.