Release Notes¶
0.4.0¶
New Features¶
In previous releases, at the backpropagation stage of CircuitQNN and OpflowQNN, gradients were computed for each sample in a dataset individually and the obtained values were then aggregated into one output array. Thus, at least one job was submitted per sample. Now, gradients are computed for all samples in a dataset in one go by passing a list of values for a single parameter to CircuitSampler. The number of jobs required for such computations is therefore significantly reduced. This improvement may speed up training in a cloud environment, where the queue time for submitting a job may be a major contributor to the overall training time.
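The effect of this batching can be illustrated with a toy sketch (pure Python, hypothetical names; this is not the library implementation): submitting one job per sample versus one job for the whole dataset.

```python
# Toy illustration of per-sample vs batched job submission.
# submit_job and the "gradients" it returns are hypothetical stand-ins.

job_count = 0

def submit_job(parameter_sets):
    """Pretend to send one job to a backend; returns one value per parameter set."""
    global job_count
    job_count += 1
    return [sum(p) for p in parameter_sets]  # dummy "gradients"

dataset = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]

# Old behaviour: one job per sample.
job_count = 0
per_sample = [submit_job([x])[0] for x in dataset]
jobs_per_sample = job_count  # one job for each of the 3 samples

# New behaviour: all samples batched into a single job.
job_count = 0
batched = submit_job(dataset)
jobs_batched = job_count  # a single job for the whole dataset
```

The results are identical; only the number of submitted jobs changes, which is what matters when each job waits in a cloud queue.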
Introduced two new classes, EffectiveDimension and LocalEffectiveDimension, for calculating the capacity of quantum neural network models through the computation of the Fisher information matrix. The local effective dimension bounds the generalization error of QNNs and only accepts single parameter sets as inputs. The global effective dimension (or just effective dimension) can be used as a measure of the expressibility of the model, and accepts multiple parameter sets.
Objective functions constructed by the neural network classifiers and regressors now include an averaging factor that is evaluated as 1 / number_of_samples. The averaged objective values are passed to a user-specified callback, if any. Users may notice a dramatic decrease in the objective values reported to their callbacks; this is due to the averaging factor.
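A minimal sketch of this change (pure Python, illustrative names only): the objective reported to the callback is now the mean, not the sum, of the per-sample losses.

```python
# Sketch of the averaging factor 1 / number_of_samples described above.
# objective and callback are illustrative, not the library API.

def objective(weights, losses):
    # averaged objective: sum of per-sample losses divided by sample count
    return sum(losses) / len(losses)

history = []

def callback(weights, objective_value):
    # a user-specified callback now receives the averaged value
    history.append(objective_value)

losses = [4.0, 2.0, 6.0]
value = objective(None, losses)   # 12.0 / 3 = 4.0, not 12.0 as before
callback(None, value)
```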
Added support for saving and loading machine learning models. This support is introduced in TrainableModel, so all its sub-classes can be saved and loaded. Kernel-based models can be saved and loaded as well. The models that support saving and loading are:

NeuralNetworkClassifier
NeuralNetworkRegressor
VQC
VQR
QSVC
QSVR
PegasosQSVC

When a model is saved, all model parameters are saved to a file, including the quantum instance that is referenced by internal objects. That means that if a model is loaded from a file and used, for instance, for inference, the same quantum instance and corresponding backend will be used, even if a cloud backend was originally used.
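The behaviour described above, where the quantum instance travels with the saved model, follows from serializing the whole object graph. A simplified pickle-based sketch (the Model class and its attributes are hypothetical stand-ins, not the library implementation):

```python
# Simplified sketch of save/load semantics: serializing the whole object
# graph means the referenced quantum instance is restored along with the model.
import os
import pickle
import tempfile

class Model:
    """Hypothetical stand-in for a TrainableModel-like object."""
    def __init__(self, weights, quantum_instance):
        self.weights = weights
        # the quantum instance is part of the object graph, so it is
        # serialized together with everything else
        self.quantum_instance = quantum_instance

    def save(self, file_name):
        with open(file_name, "wb") as f:
            pickle.dump(self, f)

    @classmethod
    def load(cls, file_name):
        with open(file_name, "rb") as f:
            return pickle.load(f)

model = Model(weights=[0.1, 0.2], quantum_instance="statevector_simulator")
path = os.path.join(tempfile.mkdtemp(), "model.bin")
model.save(path)
restored = Model.load(path)
```

After loading, restored.quantum_instance is the same backend reference that was saved, which is exactly why a model saved against a cloud backend will try to use that backend again.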
Added a new feature in CircuitQNN that ensures the unbound_pass_manager is called when caching the QNN circuit and that the bound_pass_manager is called when QNN parameters are assigned.
Added a new feature in QuantumKernel that ensures the bound_pass_manager, when provided via the QuantumInstance, is used when transpiling the kernel circuits.
Upgrade Notes¶
Added support for running with Python 3.10. At the time of the release, Torch didn’t have a Python 3.10 version.
The previously deprecated BaseBackend class has been removed. It was originally deprecated in the Qiskit Terra 0.18.0 release.
Support for running with Python 3.6 has been removed. To run Machine Learning you need a minimum Python version of 3.7.
Deprecation Notes¶
The functions breast_cancer, digits, gaussian, iris and wine in the datasets module are deprecated and should not be used.
The class CrossEntropySigmoidLoss is deprecated and marked for removal.
Removed support for l1 and l2 as loss function definitions. Please use absolute_error and squared_error respectively.
Bug Fixes¶
Fixes in the Ad Hoc dataset. Fixed a ValueError when n=3 is passed to ad_hoc_data. When the value of n is not 2 or 3, a ValueError is raised with a message that the only supported values of n are 2 and 3.
Previously, VQC would throw an error if trained on batches of data where not all of the target labels that can be found in the full dataset were present. This is because VQC interpreted the number of unique targets in the current batch as the number of classes. Currently, VQC is hard-coded to expect one-hot-encoded targets. Therefore, VQC will now determine the number of classes from the shape of the target array.
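The difference between the old and new class-count logic can be sketched in pure Python (the function names are illustrative, not the library API):

```python
# Sketch of the fix: infer the number of classes from the one-hot target
# width rather than from the labels that happen to appear in a batch.

def num_classes_from_batch_labels(batch_targets):
    # old, buggy approach: count unique labels present in the batch
    return len(set(batch_targets))

def num_classes_from_one_hot(one_hot_targets):
    # new approach: the width of the one-hot encoding is the class count
    return len(one_hot_targets[0])

# a batch from a 3-class problem in which class 2 happens to be absent
batch_labels = [0, 1, 0, 1]
batch_one_hot = [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 1, 0]]
```

The old approach would report 2 classes for this batch, while the one-hot shape correctly yields 3.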
Fixes an issue where VQC could not be trained on multiclass datasets. It returned nan values on some iterations. This is fixed in 2 ways. First, the default parity function is now guaranteed to be able to assign at least one output bitstring to each class, so long as 2**N >= C where N is the number of output qubits and C is the number of classes. This guarantees that it is at least possible for every class to be predicted with a non-zero probability. Second, even with this change it is still possible that on a given training instance a class is predicted with 0 probability. Previously this could lead to nan in the CrossEntropyLoss calculation. We now replace 0 probabilities with a small positive value to ensure the loss cannot return nan.
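Both fixes can be illustrated with a small sketch (all names are illustrative, not the library API): a parity-style interpret function that covers every class whenever 2**N >= C, and clipping of zero probabilities before the logarithm in the cross-entropy loss.

```python
# Sketch of the two fixes described above.
import math

def interpret(bitstring_index, num_classes):
    # maps each of the 2**N measurement outcomes to one of C classes;
    # with 2**N >= C every class is the image of at least one outcome
    return bitstring_index % num_classes

def cross_entropy(probabilities, target_class, eps=1e-10):
    # replace a zero probability with a small positive value so the
    # loss can never return nan/inf
    p = max(probabilities[target_class], eps)
    return -math.log(p)

num_qubits, num_classes = 2, 3
covered = {interpret(i, num_classes) for i in range(2 ** num_qubits)}

# class 0 is predicted with probability exactly 0, yet the loss stays finite
loss = cross_entropy([0.0, 0.7, 0.3], target_class=0)
```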
Fixes an issue in QuantumKernel where evaluating a quantum kernel for data with dimension d>2 raised an error. This is fixed by changing the hard-coded reshaping of one-dimensional arrays in QuantumKernel.evaluate().
Fixes an issue where VQC would fail with warm_start=True. The extraction of the initial_point in TrainableModel from the final point of the minimization had not been updated to reflect the refactor of optimizers in qiskit-terra: the old optimize method, which returned a tuple, was deprecated, and a new method minimize was created that returns an OptimizerResult object. We now correctly recover the final point of the minimization from previous fits to use for a warm start in subsequent fits.
Added GPU support to TorchConnector. Now, if a hybrid PyTorch model is being trained on a GPU, TorchConnector correctly detaches tensors, moves them to the CPU, evaluates the forward and backward passes, and places the resulting tensors on the same device they came from.
Fixed a bug when a sparse array is passed to VQC as labels. Sparse arrays are commonly produced when labels are encoded via OneHotEncoder from Scikit-Learn. Now both NeuralNetworkClassifier and VQC support sparse arrays and convert them to dense arrays in the implementation.
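The conversion step can be sketched as follows; the DummySparse class stands in for a scipy.sparse matrix (which exposes a toarray() method) so the example is self-contained, and as_dense is an illustrative helper, not the library API:

```python
# Sketch of accepting sparse one-hot labels by converting them to dense
# arrays before use.

class DummySparse:
    """Self-contained stand-in for a scipy.sparse matrix."""
    def __init__(self, dense):
        self._dense = dense

    def toarray(self):
        return self._dense

def as_dense(labels):
    # convert sparse labels (as produced e.g. by OneHotEncoder with its
    # default sparse output) to a dense array; pass dense labels through
    if hasattr(labels, "toarray"):
        return labels.toarray()
    return labels

dense = as_dense(DummySparse([[1, 0], [0, 1]]))
passthrough = as_dense([[1, 0], [0, 1]])
```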
0.3.0¶
New Features¶
Addition of a QuantumKernelTrainer object which may be used by kernel-based machine learning algorithms to perform optimization of some QuantumKernel parameters before training the model. Addition of a new base class, KernelLoss, in the loss_functions package. Addition of a new KernelLoss subclass, SVCLoss.
The class TrainableModel, and its sub-classes NeuralNetworkClassifier, NeuralNetworkRegressor, VQR, and VQC, have a new optional argument callback. Users can optionally provide a callback function that can access the intermediate training data to track the optimization process; otherwise it defaults to None. The callback function takes two parameters: the weights for the objective function and the computed objective value. On each iteration, the optimizer invokes the callback and passes the current weights and the computed value of the objective function.
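The callback contract is simple: two positional arguments, weights then objective value. A minimal sketch with a toy optimizer loop (the loop itself is illustrative, not the library optimizer):

```python
# Sketch of the two-argument callback contract described above.

history = []

def callback(weights, objective_value):
    # record a snapshot of the weights and the objective at each iteration
    history.append((list(weights), objective_value))

# a toy "optimizer loop" invoking the callback once per iteration
weights = [0.5, 0.5]
for step in range(3):
    weights = [w - 0.1 for w in weights]
    objective_value = sum(w * w for w in weights)
    callback(weights, objective_value)
```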
Classification models (i.e. models that extend the NeuralNetworkClassifier class, like VQC) can now handle categorical target data in methods like fit() and score(). Categorical data is inferred from the presence of string-type data and is automatically encoded using either one-hot or integer encodings. The encoder type is determined by the one_hot argument supplied when instantiating the model.
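A pure-Python sketch of the encoding step (encode_labels is an illustrative helper, not the library API; in practice the models delegate to Scikit-Learn encoders):

```python
# Sketch of inferring categories from string labels and encoding them
# as either one-hot vectors or integers, depending on a one_hot flag.

def encode_labels(labels, one_hot):
    classes = sorted(set(labels))          # inferred categories
    index = {c: i for i, c in enumerate(classes)}
    if one_hot:
        return [[1 if index[label] == i else 0 for i in range(len(classes))]
                for label in labels]
    return [index[label] for label in labels]

labels = ["cat", "dog", "cat"]
integer = encode_labels(labels, one_hot=False)
onehot = encode_labels(labels, one_hot=True)
```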
There is an additional transpilation step introduced in CircuitQNN that is invoked when a quantum instance is set. A circuit passed to CircuitQNN is transpiled and saved for subsequent usages, so every time the circuit is executed it is already transpiled and the overall time of the forward pass is reduced. Due to implementation limitations of RawFeatureVector, it can’t be transpiled in advance, so it is transpiled every time it is required to be executed and only when all parameters are bound. This means overall performance stays the same when RawFeatureVector is used.
Introduced a new classification algorithm, which is an alternative version of the Quantum Support Vector Classifier (QSVC) that is trained via the Pegasos algorithm from https://home.ttic.edu/~nati/Publications/PegasosMPB.pdf instead of the dual optimization problem like in sklearn. This algorithm yields a training complexity that is independent of the size of the training set (see the to be published Master’s Thesis “Comparing Quantum Neural Networks and Quantum Support Vector Machines” by Arne Thomsen), such that the PegasosQSVC is expected to train faster than QSVC for sufficiently large training sets.
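For orientation, the Pegasos update rule from the referenced paper can be sketched classically (this toy version uses plain dot products where the quantum variant would use a quantum kernel; all names here are illustrative, not the PegasosQSVC API): at each step one random sample is drawn, the step size decays as 1/(lambda*t), and a sub-gradient step on the regularized hinge loss is taken.

```python
# Purely classical sketch of the Pegasos sub-gradient update.
import random

def pegasos_train(data, labels, lam=0.1, steps=200, seed=0):
    rng = random.Random(seed)
    w = [0.0] * len(data[0])
    for t in range(1, steps + 1):
        i = rng.randrange(len(data))       # one random sample per step
        eta = 1.0 / (lam * t)              # decaying step size
        margin = labels[i] * sum(wj * xj for wj, xj in zip(w, data[i]))
        if margin < 1:                     # hinge loss is active: move toward y*x
            w = [(1 - eta * lam) * wj + eta * labels[i] * xj
                 for wj, xj in zip(w, data[i])]
        else:                              # only shrink (regularization term)
            w = [(1 - eta * lam) * wj for wj in w]
    return w

# linearly separable toy data with labels in {-1, +1}
data = [[2.0, 1.0], [1.5, 2.0], [-2.0, -1.0], [-1.0, -2.0]]
labels = [1, 1, -1, -1]
w = pegasos_train(data, labels)
predictions = [1 if sum(wj * xj for wj, xj in zip(w, x)) > 0 else -1
               for x in data]
```

Note that each step touches only one sample, which is why the training complexity is independent of the training set size.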
QuantumKernel transpiles all circuits before execution. However, this information was not being passed on, which caused the transpiler to be called many times during the execution of the QSVC/QSVR algorithms. Now, had_transpiled=True is passed correctly and the algorithms run faster.
QuantumKernel now provides an interface for users to specify a new class field, user_parameters. User parameters are an array of Parameter objects corresponding to parameterized quantum gates in the feature map circuit that the user wishes to tune. This is useful in algorithms where feature map parameters must be bound and re-bound many times (i.e. variational algorithms). Users may also use a new function, assign_user_parameters, to assign real values to some or all of the user parameters in the feature map.
Introduced the TorchRuntimeClient for training a quantum model or a hybrid quantum-classical model faster using Qiskit Runtime. It can also be used for making predictions with, or calculating the score of, the trained model faster using Qiskit Runtime.
Known Issues¶
If positional arguments are passed into QSVR or QSVC and these classes are printed, an exception is raised.
Deprecation Notes¶
Positional arguments in QSVR and QSVC are deprecated.
Bug Fixes¶
Fixed a bug in QuantumKernel where, for the statevector simulator, all circuits were constructed and transpiled at once, leading to high memory usage. Now the circuits are batched similarly to how it was previously done for non-statevector simulators (the same flag is used for both now; previously batch_size was silently ignored by the statevector simulator).
Fixed a bug where TorchConnector failed on the backward pass computation due to empty parameters for inputs or weights. Validation was added to qiskit_machine_learning.neural_networks.NeuralNetwork._validate_backward_output().
TwoLayerQNN now passes the value of the exp_val parameter in the constructor to the constructor of OpflowQNN, which TwoLayerQNN inherits from.
In some configurations, the forward pass of a neural network may return the same value across multiple calls even if different weights are passed. This behavior was confirmed with the AQGD optimizer. It was due to a bug in the implementation of the objective functions: they cache the value obtained in the forward pass to be re-used in the backward pass. Initially, this cache was keyed on an identifier (a call to the id() function) of the weights array. AQGD re-uses the same array for the weights: it updates the values while keeping the array instance the same. This caused the same forward-pass value to be re-used across all iterations. Now the forward-pass cache is keyed on the actual values of the weights instead of identifiers.
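The bug and its fix reduce to a caching-key question, which the following pure-Python sketch demonstrates (ForwardCache is illustrative, not the library implementation): a cache keyed on id(weights) never misses when the optimizer mutates the array in place, whereas a cache keyed on the values does.

```python
# Sketch of the fixed caching strategy: key on the weight values.

class ForwardCache:
    def __init__(self):
        self._cache = {}
        self.evaluations = 0

    def forward(self, weights):
        key = tuple(weights)        # fixed: key on values, not id(weights)
        if key not in self._cache:
            self.evaluations += 1   # only evaluate on a genuine cache miss
            self._cache[key] = sum(w * w for w in weights)
        return self._cache[key]

cache = ForwardCache()
weights = [1.0, 2.0]
first = cache.forward(weights)

# an AQGD-style in-place update: same array object, new values
weights[0], weights[1] = 3.0, 4.0
second = cache.forward(weights)
```

With the old id-based key, the second call would have returned the stale first value, since id(weights) is unchanged by the in-place update.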
Fixed a bug where qiskit_machine_learning.circuit.library.RawFeatureVector.copy() didn’t copy all internal settings, which could lead to issues with the copied circuit. As a consequence, qiskit_machine_learning.circuit.library.RawFeatureVector.bind_parameters() is also fixed.
Fixes a bug where VQC could not be instantiated unless either feature_map or ansatz were provided (#217). VQC is now instantiated with the default feature_map and/or ansatz.
The QNN weight parameter in TorchConnector is now registered in the torch DAG as weight, instead of _weights. This is consistent with the PyTorch naming convention and the weight property used to get access to the computed weights.
0.2.0¶
New Features¶
A base class TrainableModel is introduced for machine learning models. This class follows Scikit-Learn principles and makes the quantum machine learning models compatible with classical ones. Both NeuralNetworkClassifier and NeuralNetworkRegressor extend this class. A base class ObjectiveFunction is introduced for the objective functions optimized by machine learning models. Three objective functions used by the ML models are introduced: BinaryObjectiveFunction, MultiClassObjectiveFunction, and OneHotObjectiveFunction. These functions are used internally by the models.
The optimizer argument for the classes NeuralNetworkClassifier and NeuralNetworkRegressor, both of which extend the TrainableModel class, is made optional with the default value being SLSQP(). The same is true for the classes VQC and VQR, as they inherit from NeuralNetworkClassifier and NeuralNetworkRegressor respectively.
The constructor of NeuralNetwork, and of all classes that inherit from it, has a new parameter input_gradients which defaults to False. Previously this parameter could only be set using the setter method. Note that TorchConnector previously set input_gradients of the NeuralNetwork it was instantiated with to True. This is no longer the case. So if you use TorchConnector and want to compute the gradients w.r.t. the input, make sure you set input_gradients=True on the NeuralNetwork before passing it to TorchConnector.
Added a parameter initial_point to the neural network classifiers and regressors. This is an array that is passed to an optimizer as the initial point to start from.
Computation of gradients with respect to input data in the backward method of NeuralNetwork is now optional. By default, gradients are not computed. They may be inspected and turned on, if required, by getting or setting the new property input_gradients in the NeuralNetwork class.
Now NeuralNetworkClassifier extends ClassifierMixin and NeuralNetworkRegressor extends RegressorMixin from Scikit-Learn, and they rely on their methods for score calculation. This also adds the ability to pass sample weights as an optional parameter to the score methods.
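The weighted-score behaviour inherited from the Scikit-Learn mixins can be sketched in pure Python (accuracy_score here is an illustrative stand-in for sklearn.metrics.accuracy_score, which the mixins use under the hood for classification):

```python
# Sketch of accuracy scoring with optional sample weights.

def accuracy_score(y_true, y_pred, sample_weight=None):
    if sample_weight is None:
        sample_weight = [1.0] * len(y_true)
    # each correct prediction contributes its weight; the score is the
    # weighted fraction of correct predictions
    correct = sum(w for t, p, w in zip(y_true, y_pred, sample_weight) if t == p)
    return correct / sum(sample_weight)

y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]
unweighted = accuracy_score(y_true, y_pred)
# giving the misclassified sample zero weight removes its penalty
weighted = accuracy_score(y_true, y_pred, sample_weight=[1, 1, 0.0, 1])
```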
Deprecation Notes¶
The valid values passed to the loss argument of the TrainableModel constructor were partially deprecated (i.e. loss='l1' is replaced with loss='absolute_error' and loss='l2' is replaced with loss='squared_error'). This affects instantiation of classes like NeuralNetworkClassifier. This change was made to reduce the confusion that stems from using the lowercase ‘l’ character, which can be mistaken for the numeral ‘1’ or a capital ‘I’. You should update your model instantiations by replacing ‘l1’ with ‘absolute_error’ and ‘l2’ with ‘squared_error’.
The weights property in TorchConnector is deprecated in favor of the weight property, which is PyTorch compatible. By default, PyTorch layers expose weight properties to get access to the computed weights.
Bug Fixes¶
This fixes the exception that occurs when no optimizer argument is passed to NeuralNetworkClassifier and NeuralNetworkRegressor.
Fixes the computation of gradients in TorchConnector when a batch of input samples is provided.
TorchConnector now returns the correct input gradient dimensions during the backward pass in hybrid nn training.
Added dedicated handling of ComposedOp as an operator in OpflowQNN. In this case the output shape is determined from the first operator in the ComposedOp instance.
Fix the dimensions of the gradient in the quantum generator for the qGAN training.