Note: run this tutorial interactively in a Jupyter notebook.

# Quantum Neural Networks

This notebook demonstrates the different generic quantum neural network (QNN) implementations provided in Qiskit Machine Learning. The networks are meant as application-agnostic computational units that can be used for many different use cases. Depending on the application, a particular type of network might be more or less suitable and might need to be set up in a particular way. The following available neural networks are discussed in more detail:

`NeuralNetwork`: The interface for neural networks.

`OpflowQNN`: A network based on the evaluation of quantum mechanical observables.

`TwoLayerQNN`: A special `OpflowQNN` implementation provided for convenience.

`CircuitQNN`: A network based on the samples resulting from measuring a quantum circuit.

```
[1]:
```

```
import numpy as np
from qiskit import Aer, QuantumCircuit
from qiskit.circuit import Parameter
from qiskit.circuit.library import RealAmplitudes, ZZFeatureMap
from qiskit.opflow import StateFn, PauliSumOp, AerPauliExpectation, ListOp, Gradient
from qiskit.utils import QuantumInstance
```

```
[2]:
```

```
# set method to calculate expectation values
expval = AerPauliExpectation()
# define gradient method
gradient = Gradient()
# define quantum instances (statevector and sample based)
qi_sv = QuantumInstance(Aer.get_backend('statevector_simulator'))
# we set shots to 10 as this will determine the number of samples later on.
qi_qasm = QuantumInstance(Aer.get_backend('qasm_simulator'), shots=10)
```

## 1. `NeuralNetwork`

The `NeuralNetwork` represents the interface for all neural networks available in Qiskit Machine Learning. It exposes a forward and a backward pass that take the data samples and trainable weights as input. A `NeuralNetwork` does not contain any training capabilities; these are pushed to the actual algorithms and applications. Thus, a `NeuralNetwork` also does not store the values of its trainable weights. In the following, the different implementations of this interface are introduced.

Suppose a `NeuralNetwork` called `nn`. Then, `nn.forward(input, weights)` takes flat inputs for the data and the weights, of size `nn.num_inputs` and `nn.num_weights`, respectively. `NeuralNetwork` supports batching of inputs and returns batches of output of the corresponding shape.
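To illustrate the shape contract of this interface, here is a toy, purely classical stand-in (the class `ToyNetwork` and its linear model are hypothetical and not part of Qiskit): `forward` accepts a single flat sample or a batch and returns a batch of outputs, while `backward` returns a pair of gradients with shapes `(batch, output_dim, num_inputs)` and `(batch, output_dim, num_weights)`, mirroring the shapes seen in the cells below.

```python
import numpy as np

# Hypothetical stand-in for the NeuralNetwork interface: a linear model with
# a single scalar output, used only to demonstrate the input/output shapes.
class ToyNetwork:
    def __init__(self, num_inputs, num_weights):
        self.num_inputs = num_inputs
        self.num_weights = num_weights

    def forward(self, inputs, weights):
        x = np.atleast_2d(inputs)                    # (batch, num_inputs)
        w = np.asarray(weights)                      # (num_weights,)
        # toy model: sum of the inputs, scaled by the sum of the weights
        return (x @ np.ones((self.num_inputs, 1))) * w.sum()   # (batch, 1)

    def backward(self, inputs, weights):
        x = np.atleast_2d(inputs)
        batch = x.shape[0]
        # gradient shapes: (batch, output_dim, num_inputs) and
        #                  (batch, output_dim, num_weights)
        input_grad = np.ones((batch, 1, self.num_inputs)) * np.sum(weights)
        weight_grad = np.repeat(x.sum(axis=1).reshape(batch, 1, 1),
                                self.num_weights, axis=2)
        return input_grad, weight_grad

nn = ToyNetwork(num_inputs=2, num_weights=3)
single = nn.forward([0.1, 0.2], [1.0, 1.0, 1.0])                  # shape (1, 1)
batched = nn.forward([[0.1, 0.2], [0.3, 0.4]], [1.0, 1.0, 1.0])   # shape (2, 1)
```

The same batching convention carries over to the quantum implementations below: a batch of two identical inputs yields two identical rows of output.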

## 2. `OpflowQNN`

The `OpflowQNN` takes a (parametrized) operator from Qiskit and leverages Qiskit's gradient framework to provide the backward pass. Such an operator can, for instance, be the expected value of a quantum mechanical observable with respect to a parametrized quantum state. The parameters can be used to load classical data as well as to represent trainable weights. The `OpflowQNN` also accepts lists of operators and more complex structures to construct more complex QNNs.

```
[3]:
```

```
from qiskit_machine_learning.neural_networks import OpflowQNN
```

```
[4]:
```

```
# construct parametrized circuit
params1 = [Parameter('input1'), Parameter('weight1')]
qc1 = QuantumCircuit(1)
qc1.h(0)
qc1.ry(params1[0], 0)
qc1.rx(params1[1], 0)
qc_sfn1 = StateFn(qc1)
# construct cost operator
H1 = StateFn(PauliSumOp.from_list([('Z', 1.0), ('X', 1.0)]))
# combine operator and circuit to objective function
op1 = ~H1 @ qc_sfn1
print(op1)
```

```
ComposedOp([
  OperatorMeasurement(1.0 * Z
  + 1.0 * X),
  CircuitStateFn(
       ┌───┐┌────────────┐┌─────────────┐
  q_0: ┤ H ├┤ RY(input1) ├┤ RX(weight1) ├
       └───┘└────────────┘└─────────────┘
  )
])
```

```
[5]:
```

```
# construct OpflowQNN with the operator, the input parameters, the weight parameters,
# the expected value, gradient, and quantum instance.
qnn1 = OpflowQNN(op1, [params1[0]], [params1[1]], expval, gradient, qi_sv)
```

```
[6]:
```

```
# define (random) input and weights
input1 = np.random.rand(qnn1.num_inputs)
weights1 = np.random.rand(qnn1.num_weights)
```

```
[7]:
```

```
# QNN forward pass
qnn1.forward(input1, weights1)
```

```
[7]:
```

```
array([[0.61577125]])
```

```
[8]:
```

```
# QNN batched forward pass
qnn1.forward([input1, input1], weights1)
```

```
[8]:
```

```
array([[0.61577125],
       [0.61577125]])
```

```
[9]:
```

```
# QNN backward pass
qnn1.backward(input1, weights1)
```

```
[9]:
```

```
(array([[[-1.2283828]]]), array([[[0.11482225]]]))
```

```
[10]:
```

```
# QNN batched backward pass
qnn1.backward([input1, input1], weights1)
```

```
[10]:
```

```
(array([[[-1.2283828]],
        [[-1.2283828]]]),
 array([[[0.11482225]],
        [[0.11482225]]]))
```

Combining multiple observables in a `ListOp` also allows one to create more complex QNNs.

```
[11]:
```

```
op2 = ListOp([op1, op1])
qnn2 = OpflowQNN(op2, [params1[0]], [params1[1]], expval, gradient, qi_sv)
```

```
[12]:
```

```
# QNN forward pass
qnn2.forward(input1, weights1)
```

```
[12]:
```

```
array([[0.61577125, 0.61577125]])
```

```
[13]:
```

```
# QNN backward pass
qnn2.backward(input1, weights1)
```

```
[13]:
```

```
(array([[[-1.2283828],
         [-1.2283828]]]),
 array([[[0.11482225],
         [0.11482225]]]))
```

## 3. `TwoLayerQNN`

The `TwoLayerQNN` is a special `OpflowQNN` on \(n\) qubits that consists of, first, a feature map to insert data and, second, an ansatz that is trained. The default observable is \(Z^{\otimes n}\), i.e., parity.

```
[14]:
```

```
from qiskit_machine_learning.neural_networks import TwoLayerQNN
```

```
[15]:
```

```
# specify the number of qubits
num_qubits = 3
```

```
[16]:
```

```
# specify the feature map
fm = ZZFeatureMap(num_qubits, reps=2)
fm.draw(output='mpl')
```

```
[16]:
```

(circuit diagram of the `ZZFeatureMap` — image not shown)

```
[17]:
```

```
# specify the ansatz
ansatz = RealAmplitudes(num_qubits, reps=1)
ansatz.draw(output='mpl')
```

```
[17]:
```

(circuit diagram of the `RealAmplitudes` ansatz — image not shown)

```
[18]:
```

```
# specify the observable
observable = PauliSumOp.from_list([('Z'*num_qubits, 1)])
print(observable)
```

```
1.0 * ZZZ
```

```
[19]:
```

```
# define two layer QNN
qnn3 = TwoLayerQNN(num_qubits,
                   feature_map=fm,
                   ansatz=ansatz,
                   observable=observable,
                   quantum_instance=qi_sv)
```

```
[20]:
```

```
# define (random) input and weights
input3 = np.random.rand(qnn3.num_inputs)
weights3 = np.random.rand(qnn3.num_weights)
```

```
[21]:
```

```
# QNN forward pass
qnn3.forward(input3, weights3)
```

```
[21]:
```

```
array([[-0.01364088]])
```

```
[22]:
```

```
# QNN backward pass
qnn3.backward(input3, weights3)
```

```
[22]:
```

```
(array([[[ 0.43231468, -2.82387609, -3.38152983]]]),
 array([[[-0.00446178,  0.21554716, -0.6191685 , -0.14793611,
           0.09808477,  0.82553288]]]))
```

## 4. `CircuitQNN`

The `CircuitQNN` is based on a (parametrized) `QuantumCircuit`. The circuit can take input as well as weight parameters, and the network produces samples from the measurement. The samples can either be interpreted as probabilities of measuring the integer index corresponding to a bitstring or directly as a batch of binary outputs. In the case of probabilities, gradients can be estimated efficiently and the `CircuitQNN` provides a backward pass as well. In the case of samples, differentiation is not possible and the backward pass returns `(None, None)`.

Further, the `CircuitQNN` allows one to specify an `interpret` function to post-process the samples. This function is expected to take a measured integer (from a bitstring) and map it to a new index, i.e., a non-negative integer. In this case, the output shape needs to be provided and the probabilities are aggregated accordingly.

A `CircuitQNN` can be configured to return sparse as well as dense probability vectors. If no `interpret` function is used, the dimension of the probability vector scales exponentially with the number of qubits and a sparse representation is usually recommended. In the case of an `interpret` function, it depends on the expected outcome. If, for instance, an index is mapped to the parity of the corresponding bitstring, i.e., to 0 or 1, a dense output makes sense and the result will be a probability vector of length 2.
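The aggregation performed by an `interpret` function can be sketched classically (the helper `aggregate` is illustrative, not the Qiskit implementation): each basis-state index is mapped through the function, and probabilities that land on the same image index are summed.

```python
import numpy as np

# interpret function: map a basis-state index to the parity of its bitstring
parity = lambda x: bin(x).count('1') % 2

# Aggregate basis-state probabilities into a dense vector indexed by the
# interpret function's output (here: length-2 parity probabilities).
def aggregate(probs, interpret, output_shape):
    out = np.zeros(output_shape)
    for idx, p in enumerate(probs):
        out[interpret(idx)] += p
    return out

probs = np.array([0.5, 0.1, 0.1, 0.0, 0.2, 0.0, 0.0, 0.1])  # 3-qubit distribution
print(aggregate(probs, parity, 2))  # [even-parity mass, odd-parity mass]
```

With `sparse=False` and `output_shape=2`, the forward pass in section 4.2 returns exactly this kind of aggregated vector.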

```
[23]:
```

```
from qiskit_machine_learning.neural_networks import CircuitQNN
```

```
[24]:
```

```
qc = RealAmplitudes(num_qubits, entanglement='linear', reps=1)
qc.draw(output='mpl')
```

```
[24]:
```

(circuit diagram of the linearly entangled `RealAmplitudes` circuit — image not shown)

### 4.1 Output: sparse integer probabilities

```
[25]:
```

```
# specify circuit QNN
qnn4 = CircuitQNN(qc, [], qc.parameters, sparse=True, quantum_instance=qi_qasm)
```

```
[26]:
```

```
# define (random) input and weights
input4 = np.random.rand(qnn4.num_inputs)
weights4 = np.random.rand(qnn4.num_weights)
```

```
[27]:
```

```
# QNN forward pass
qnn4.forward(input4, weights4).todense() # returned as a sparse matrix
```

```
[27]:
```

```
array([[0.9, 0. , 0. , 0. , 0.1, 0. , 0. , 0. ]])
```

```
[28]:
```

```
# QNN backward pass, returns a tuple of sparse matrices
qnn4.backward(input4, weights4)
```

```
[28]:
```

```
(<COO: shape=(1, 8, 0), dtype=float64, nnz=0, fill_value=0.0>,
 <COO: shape=(1, 8, 6), dtype=float64, nnz=25, fill_value=0.0>)
```

### 4.2 Output: dense parity probabilities

```
[29]:
```

```
# specify circuit QNN
parity = lambda x: '{:b}'.format(x).count('1') % 2
output_shape = 2 # this is required in case of a callable with dense output
qnn6 = CircuitQNN(qc, [], qc.parameters, sparse=False, interpret=parity,
                  output_shape=output_shape, quantum_instance=qi_qasm)
```

```
[30]:
```

```
# define (random) input and weights
input6 = np.random.rand(qnn6.num_inputs)
weights6 = np.random.rand(qnn6.num_weights)
```

```
[31]:
```

```
# QNN forward pass
qnn6.forward(input6, weights6)
```

```
[31]:
```

```
array([[0.4, 0.6]])
```

```
[32]:
```

```
# QNN backward pass
qnn6.backward(input6, weights6)
```

```
[32]:
```

```
(array([], shape=(1, 2, 0), dtype=float64),
 array([[[-0.2 ,  0.05, -0.2 , -0.4 ,  0.05,  0.  ],
         [ 0.2 , -0.05,  0.2 ,  0.4 , -0.05,  0.  ]]]))
```

### 4.3 Output: Samples

```
[33]:
```

```
# specify circuit QNN
qnn7 = CircuitQNN(qc, [], qc.parameters, sampling=True,
                  quantum_instance=qi_qasm)
```

```
[34]:
```

```
# define (random) input and weights
input7 = np.random.rand(qnn7.num_inputs)
weights7 = np.random.rand(qnn7.num_weights)
```

```
[35]:
```

```
# QNN forward pass, results in samples of measured bit strings mapped to integers
qnn7.forward(input7, weights7)
```

```
[35]:
```

```
array([[[0.],
        [4.],
        [4.],
        [0.],
        [1.],
        [0.],
        [0.],
        [0.],
        [0.],
        [4.]]])
```

```
[36]:
```

```
# QNN backward pass
qnn7.backward(input7, weights7)
```

```
[36]:
```

```
(None, None)
```

### 4.4 Output: Parity Samples

```
[37]:
```

```
# specify circuit QNN
qnn8 = CircuitQNN(qc, [], qc.parameters, sampling=True, interpret=parity,
                  quantum_instance=qi_qasm)
```

```
[38]:
```

```
# define (random) input and weights
input8 = np.random.rand(qnn8.num_inputs)
weights8 = np.random.rand(qnn8.num_weights)
```

```
[39]:
```

```
# QNN forward pass, results in the parities of the measured bit strings
qnn8.forward(input8, weights8)
```

```
[39]:
```

```
array([[[0.],
        [0.],
        [1.],
        [0.],
        [0.],
        [1.],
        [0.],
        [0.],
        [0.],
        [0.]]])
```

```
[40]:
```

```
# QNN backward pass
qnn8.backward(input8, weights8)
```

```
[40]:
```

```
(None, None)
```

```
[41]:
```

```
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```

### Version Information

| Qiskit Software | Version |
|---|---|
| Qiskit | None |
| Terra | 0.17.0 |
| Aer | 0.8.0 |
| Ignis | None |
| Aqua | None |
| IBM Q Provider | None |

| System information | |
|---|---|
| Python | 3.8.8 (default, Feb 19 2021, 19:42:00) [GCC 9.3.0] |
| OS | Linux |
| CPUs | 2 |
| Memory (Gb) | 6.791343688964844 |

Fri Apr 02 20:37:03 2021 UTC

### This code is a part of Qiskit

© Copyright IBM 2017, 2021.

This code is licensed under the Apache License, Version 2.0. You may obtain a copy of this license in the LICENSE.txt file in the root directory of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.

Any modifications or derivative works of this code must retain this copyright notice, and modified files need to carry a notice indicating that they have been altered from the originals.
