Compatibility fix to support Python 3.11.
qiskit_machine_learning.datasets.discretize_and_truncate() is fixed to work with numpy version 1.24. This function is used by the QGAN implementation.
Allow a callable as an optimizer in VQR, as well as in the other trainable models. Now, the optimizer can either be one of Qiskit's optimizers, such as SPSA, or a callable with the following signature:
```python
from qiskit.algorithms.optimizers import OptimizerResult

def my_optimizer(fun, x0, jac=None, bounds=None) -> OptimizerResult:
    # Args:
    #     fun (callable): the function to minimize
    #     x0 (np.ndarray): the initial point for the optimization
    #     jac (callable, optional): the gradient of the objective function
    #     bounds (list, optional): a list of tuples specifying the parameter bounds

    result = OptimizerResult()
    result.x = ...    # optimal parameters
    result.fun = ...  # optimal function value
    return result
```
The above signature also allows directly passing any SciPy minimizer, for instance:
```python
from functools import partial
from scipy.optimize import minimize

optimizer = partial(minimize, method="L-BFGS-B")
```
Added a new FidelityStatevectorKernel class that is optimized to use only statevector-implemented feature maps. Therefore, computational complexity is reduced from \(O(N^2)\) to \(O(N)\).
Computed statevector arrays are also cached to further increase efficiency. This cache is cleared when the evaluate method is called, unless auto_clear_cache is False. The cache is unbounded by default, but its size can be set by the user, i.e., limited to the number of samples in the worst case.
By default, the Terra reference Statevector is used; however, the type can be specified via the statevector_type argument. Shot noise emulation can also be added. If shots is None, the exact fidelity is used. Otherwise, the mean is taken of samples drawn from a binomial distribution with probability equal to the exact fidelity.
With the addition of shot noise, the kernel matrix may no longer be positive semi-definite (PSD). With enforce_psd set to True, this condition is enforced.
An example of using this class is as follows:
```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

from qiskit.circuit.library import ZZFeatureMap
from qiskit.quantum_info import Statevector

from qiskit_machine_learning.kernels import FidelityStatevectorKernel

# generate a simple dataset
features, labels = make_blobs(
    n_samples=20, centers=2, center_box=(-1, 1), cluster_std=0.1
)

feature_map = ZZFeatureMap(feature_dimension=2, reps=2)
statevector_type = Statevector

kernel = FidelityStatevectorKernel(
    feature_map=feature_map,
    statevector_type=statevector_type,
    cache_size=len(labels),
    auto_clear_cache=True,
    shots=1000,
    enforce_psd=True,
)
svc = SVC(kernel=kernel.evaluate)
svc.fit(features, labels)
```
The PyTorch connector TorchConnector now fully supports sparse output in both forward and backward passes. To enable sparse support, the underlying quantum neural network must first be sparse. In this case, if the sparse property of the connector itself is not set, the connector inherits sparsity from the network. If the connector is set to be sparse but the network is not, an exception is raised. You may also set the connector to be dense even if the network is sparse.
This snippet illustrates how to create a sparse instance of the connector.
```python
import torch
from qiskit import QuantumCircuit
from qiskit.circuit.library import ZFeatureMap, RealAmplitudes

from qiskit_machine_learning.connectors import TorchConnector
from qiskit_machine_learning.neural_networks import SamplerQNN

num_qubits = 2
fmap = ZFeatureMap(num_qubits, reps=1)
ansatz = RealAmplitudes(num_qubits, reps=1)
qc = QuantumCircuit(num_qubits)
qc.compose(fmap, inplace=True)
qc.compose(ansatz, inplace=True)

qnn = SamplerQNN(
    circuit=qc,
    input_params=fmap.parameters,
    weight_params=ansatz.parameters,
    sparse=True,
)

connector = TorchConnector(qnn)

output = connector(torch.tensor([[1., 2.]]))
print(output)

loss = torch.sparse.sum(output)
loss.backward()

grad = connector.weight.grad
print(grad)
```
In a hybrid setup, where a PyTorch-based neural network has classical and quantum layers, sparse operations should not be mixed with dense ones, otherwise PyTorch may raise exceptions. Sparse support requires Python 3.8+.
The previously deprecated CrossEntropySigmoidLoss loss function has been removed.
The previously deprecated datasets (breast_cancer, digits, gaussian, iris and wine) have been removed.
SamplerQNN can now correctly handle quantum circuits that have neither input parameters nor weights. If such a circuit is passed to the QNN, it is executed once in the forward pass and the backward pass returns None for both gradients, as the sketch below illustrates.
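A minimal sketch of this behavior (the one-qubit circuit below and the default sampler are illustrative assumptions, not part of the release note):

```python
from qiskit import QuantumCircuit

from qiskit_machine_learning.neural_networks import SamplerQNN

# a circuit with neither input parameters nor weights
qc = QuantumCircuit(1)
qc.h(0)

qnn = SamplerQNN(circuit=qc)

# forward executes the circuit once; backward returns (None, None)
print(qnn.forward(None, None))
print(qnn.backward(None, None))
```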
Added support for categorical and ordinal labels to VQC. Labels can now be passed in different formats: plain ordinal labels, a one-dimensional array of integer labels such as 0, 1, 2, ..., or an array of categorical string labels. One-hot encoded labels are still supported. Internally, labels are transformed to one-hot encoding and the classifier is always trained on one-hot labels. A minimal sketch follows.
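A minimal sketch, assuming the default feature map and ansatz constructed from num_qubits (the toy data is illustrative):

```python
import numpy as np

from qiskit_machine_learning.algorithms import VQC

# toy data with categorical string labels
features = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
labels = np.array(["cat", "dog", "cat", "dog"])

vqc = VQC(num_qubits=2)
vqc.fit(features, labels)      # labels are one-hot encoded internally
print(vqc.predict(features))   # predictions come back in the original format
```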
Introduced the Estimator Quantum Neural Network (EstimatorQNN) based on (runtime) primitives. This implementation leverages the estimator primitive (see BaseEstimator) and the estimator gradients (see BaseEstimatorGradient) to enable runtime access and more efficient computation of forward and backward passes.

EstimatorQNN exposes a similar interface to the Opflow QNN, with a few differences. One is the quantum_instance parameter. This parameter does not have a direct replacement; instead, the estimator parameter must be used. The gradient parameter keeps the same name as in the Opflow QNN implementation, but it no longer accepts Opflow gradient classes as inputs; instead, this parameter expects an (optionally custom) primitive gradient.
For example, a VQR, which now uses EstimatorQNN internally, can be trained as follows:
```python
import numpy as np

from qiskit.algorithms.optimizers import L_BFGS_B
from qiskit.circuit import QuantumCircuit, Parameter
from qiskit.primitives import Estimator

from qiskit_machine_learning.algorithms import VQR

num_samples = 20
eps = 0.2
lb, ub = -np.pi, np.pi
X = (ub - lb) * np.random.rand(num_samples, 1) + lb
Y = np.sin(X[:, 0]) + eps * (2 * np.random.rand(num_samples) - 1)

# one parameter for the feature map and one for the ansatz
params = [Parameter("θ_0"), Parameter("θ_1")]
feature_map = QuantumCircuit(1, name="fm")
feature_map.ry(params[0], 0)

ansatz = QuantumCircuit(1, name="vf")
ansatz.ry(params[1], 0)

vqr = VQR(
    feature_map=feature_map,
    ansatz=ansatz,
    optimizer=L_BFGS_B(maxiter=5),
    initial_point=np.zeros(1),  # assumed value; the original initial point was elided
    estimator=Estimator(),
)
vqr.fit(X, Y)
```
Introduced Quantum Kernels based on (runtime) primitives. This implementation leverages the fidelity primitive (see BaseStateFidelity) and provides more flexibility to end users. The fidelity primitive calculates state fidelities/overlaps for pairs of quantum circuits and requires an instance of Sampler. Thus, users may plug in their own implementations of fidelity calculations.
The new kernels expose the same interface and the same parameters, except for the quantum_instance parameter. This parameter does not have a direct replacement; instead, the fidelity parameter must be used.
A new hierarchy is introduced:
- A base and abstract class BaseKernel is introduced. All concrete implementations must inherit from this class.
- A fidelity-based quantum kernel FidelityQuantumKernel is added. This is a direct replacement of QuantumKernel. The difference is that the new class takes either a sampler or a fidelity instance to estimate overlaps and construct the kernel matrix.
- A new abstract class TrainableKernel is introduced to generalize the ability to train quantum kernels.
- A fidelity-based trainable quantum kernel TrainableFidelityQuantumKernel is introduced. This is a replacement of the existing QuantumKernel if a trainable kernel is required. The trainer QuantumKernelTrainer now accepts both quantum kernel implementations, the new one and the existing one. A sketch of training the new kernel follows this list.
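A minimal sketch of training the new kernel (the feature map construction, optimizer settings and initial point below are illustrative assumptions):

```python
from qiskit import QuantumCircuit
from qiskit.algorithms.optimizers import SPSA
from qiskit.circuit import Parameter
from qiskit.circuit.library import ZZFeatureMap
from sklearn.datasets import make_blobs

from qiskit_machine_learning.kernels import TrainableFidelityQuantumKernel
from qiskit_machine_learning.kernels.algorithms import QuantumKernelTrainer

features, labels = make_blobs(n_samples=10, centers=2, center_box=(-1, 1), cluster_std=0.1)

# prepend a trainable rotation layer to a standard feature map
theta = Parameter("θ")
fm = QuantumCircuit(2)
fm.ry(theta, 0)
fm.ry(theta, 1)
fm.compose(ZZFeatureMap(2), inplace=True)

kernel = TrainableFidelityQuantumKernel(feature_map=fm, training_parameters=[theta])

trainer = QuantumKernelTrainer(
    quantum_kernel=kernel,
    loss="svc_loss",
    optimizer=SPSA(maxiter=5),
    initial_point=[0.1],
)
result = trainer.fit(features, labels)
trained_kernel = result.quantum_kernel
```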
For example, a QSVC classifier can be trained as follows:
```python
from qiskit.algorithms.state_fidelities import ComputeUncompute
from qiskit.circuit.library import ZZFeatureMap
from qiskit.primitives import Sampler
from sklearn.datasets import make_blobs

from qiskit_machine_learning.algorithms import QSVC
from qiskit_machine_learning.kernels import FidelityQuantumKernel

# generate a simple dataset
features, labels = make_blobs(n_samples=20, centers=2, center_box=(-1, 1), cluster_std=0.1)

# fidelity is optional and the quantum kernel will create it automatically if none is passed
fidelity = ComputeUncompute(sampler=Sampler())

feature_map = ZZFeatureMap(2)
kernel = FidelityQuantumKernel(feature_map=feature_map, fidelity=fidelity)

qsvc = QSVC(quantum_kernel=kernel)
qsvc.fit(features, labels)
```
Introduced the Sampler Quantum Neural Network (SamplerQNN) based on (runtime) primitives. This implementation leverages the sampler primitive (see BaseSampler) and the sampler gradients (see BaseSamplerGradient) to enable runtime access and more efficient computation of forward and backward passes.

SamplerQNN exposes a similar interface to CircuitQNN, with a few differences. One is the quantum_instance parameter. This parameter does not have a direct replacement; instead, the sampler parameter must be used. The gradient parameter keeps the same name as in the CircuitQNN implementation, but it no longer accepts Opflow gradient classes as inputs; instead, this parameter expects an (optionally custom) primitive gradient. The sampling option has been removed for the time being, as this information is not currently exposed by the Sampler, and might correspond to future lower-level primitives. For example, a VQC using the sampler primitive can be trained as follows:
```python
from qiskit.algorithms.optimizers import COBYLA
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit.primitives import Sampler
from sklearn.datasets import make_blobs

from qiskit_machine_learning.algorithms import VQC

# generate a simple dataset
num_inputs = 2
num_samples = 20
features, labels = make_blobs(
    n_samples=num_samples, centers=2, center_box=(-1, 1), cluster_std=0.1
)

# construct feature map
feature_map = ZZFeatureMap(num_inputs)

# construct ansatz
ansatz = RealAmplitudes(num_inputs, reps=1)

# construct variational quantum classifier
vqc = VQC(
    sampler=Sampler(),
    feature_map=feature_map,
    ansatz=ansatz,
    loss="cross_entropy",
    optimizer=COBYLA(maxiter=30),
)

# fit classifier to data
vqc.fit(features, labels)
```
Expose the callback attribute as a public property on TrainableModel. This, for instance, allows setting the callback between optimizations and storing the history in separate objects, as the sketch below illustrates.
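A hypothetical sketch; model, features and labels stand for any existing TrainableModel instance and dataset:

```python
history = []

def store_objective(weights, obj_value):
    history.append(obj_value)

model.callback = store_objective  # set the callback between fits
model.fit(features, labels)

model.callback = None             # disable it for subsequent fits
```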
Gradient operator/circuit initialization in OpflowQNN and CircuitQNN, respectively, is now delayed until the first call of the backward method. Thus, the networks are created faster and gradient framework objects are not created until they are required.
Introduced a new parameter evaluate_duplicates in QuantumKernel. This parameter defines a strategy for how kernel matrix elements are evaluated if duplicate samples are found, as sketched after the list. Possible values are:
- all means that all kernel matrix elements are evaluated, even the diagonal ones, when training. This may introduce additional noise in the matrix.
- off_diagonal means that when training, the matrix diagonal is set to 1 and the remaining elements are fully evaluated, e.g., for two identical samples in the dataset. When inferring, all elements are evaluated. This is the default value.
- none means that when training, the diagonal is set to 1 and, if two identical samples are found in the dataset, the corresponding matrix element is set to 1. When inferring, matrix elements for identical samples are set to 1.
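A minimal sketch of selecting a strategy (the feature map is an illustrative choice):

```python
from qiskit.circuit.library import ZZFeatureMap

from qiskit_machine_learning.kernels import QuantumKernel

# never evaluate kernel elements for identical samples; set them to 1 instead
kernel = QuantumKernel(feature_map=ZZFeatureMap(2), evaluate_duplicates="none")
```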
In previous releases, the QGAN class did not allow the gradient penalty to be enabled to train the discriminator with a penalty function. Thus, a gradient penalty parameter was added to the initialization of the QGAN algorithm. This parameter indicates whether or not a penalty function is applied to the loss function of the discriminator during training.
Enabled the default construction of the TwoLayerQNN if the number of qubits is 1. Previously, not providing a feature map for the single-qubit case raised an error, as default construction assumed 2 or more qubits.
VQC will now raise an error when training from a warm start if it encounters targets with a different number of classes than the previous dataset. VQC will now also raise an error when a user attempts multi-label classification, which is not supported.
- Added two new properties to the TrainableModel class: fit_result and weights.
fit() is no longer abstract. Now, it implements basic checks, calls a new abstract method _fit_internal() to be implemented by sub-classes, and keeps track of the fit_result property that is returned by this new abstract method. Thus, any sub-class of TrainableModel must implement this new method. The classes NeuralNetworkClassifier and NeuralNetworkRegressor have been updated correspondingly.
Inheriting from sklearn.svm.SVC in PegasosQSVC resulted in errors when calling some inherited methods, such as decision_function, due to the overridden fit implementation. For that reason, the inheritance has been replaced by a much lighter inheritance from ClassifierMixin, which provides the score method, and a new decision_function method has been implemented. The class is still sklearn-compatible due to duck typing. This means that everything that worked in the previous release still works, except the inheritance itself. The only methods that are no longer supported (such as predict_proba) were, in practice, only raising errors in the previous release.
The qiskit_machine_learning.algorithms.distribution_learners package is deprecated and will be removed no sooner than 3 months after the release. There’s no direct replacement for the classes from this package. Instead, please refer to the new QGAN tutorial. This tutorial introduces step-by-step how to build a PyTorch-based QGAN using quantum neural networks.
The runtime utilities, including qiskit_machine_learning.runtime.obj_to_str(), are being deprecated. You should use QiskitRuntimeService to leverage primitives and runtimes.
qiskit_machine_learning.neural_networks.CircuitQNN is pending deprecation and is superseded by qiskit_machine_learning.neural_networks.SamplerQNN.

qiskit_machine_learning.neural_networks.OpflowQNN is pending deprecation and is superseded by qiskit_machine_learning.neural_networks.EstimatorQNN.

qiskit_machine_learning.neural_networks.TwoLayerQNN is pending deprecation and has no direct replacement. Please make use of qiskit_machine_learning.neural_networks.EstimatorQNN instead.

These classes will be deprecated in a future release and subsequently removed after that.
qiskit_machine_learning.kernels.QuantumKernel is pending deprecation and is superseded by qiskit_machine_learning.kernels.FidelityQuantumKernel. This class will be deprecated in a future release and subsequently removed after that.
For the class QuantumKernel, to improve usability and better describe the usage, user_parameters has been renamed to training_parameters; current behavior is retained. As part of this change, the constructor parameter user_parameters is now deprecated and replaced by training_parameters. The related properties and methods are renamed to match (a sketch of the new interface follows the list). That is to say:
- user_parameters → training_parameters
- user_param_binds → training_parameter_binds
- assign_user_parameters() → assign_training_parameters()
- bind_user_parameters() → bind_training_parameters()
- get_unbound_user_parameters() → get_unbound_training_parameters()
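A minimal sketch of the renamed interface (the one-qubit feature map is an illustrative assumption):

```python
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

from qiskit_machine_learning.kernels import QuantumKernel

theta = Parameter("θ")
fm = QuantumCircuit(1)
fm.ry(theta, 0)

# new spelling; user_parameters still works but is deprecated
kernel = QuantumKernel(feature_map=fm, training_parameters=[theta])
kernel.assign_training_parameters({theta: 0.5})
```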
Previously, in the QGAN algorithm, if a simulator other than the statevector_simulator was used, the result dictionary did not have the correct size to compute both the gradient and the loss functions. Now, the output values are stored in a vector of size 2^n and each key is mapped to its value from the result dictionary in the new value array. Also, the keys are stored in a vector of size 2^n where each element keys[i] corresponds to the binary representation of i.
Previously, in the QGAN algorithm, the gradients were computed using the statevector backend even if another backend was specified. To solve this issue, the gradient object is converted into a CircuitStateFn, instead of its adjoint as in the previous version. The gradients are converted into the backend-dependent structure using CircuitSampler. After the evaluation of the object, the gradient_function is stored in a dense array to fix a dimension incompatibility when computing the loss function.
Fixed quantum kernel evaluation when duplicate samples are found in the dataset. Originally, kernel matrix elements were not evaluated for identical samples and such elements were wrongly set to zero. A new parameter, evaluate_duplicates, was introduced to ensure that elements of the kernel matrix are evaluated correctly. See the feature section for more details.
Previously, in the pytorch_discriminator class of the QGAN algorithm, if the gradient penalty parameter was enabled, the latent variable z was not properly initialized: the Variable module was used instead of torch.autograd.Variable.
Calling PegasosQSVC.decision_function() raised an error. This was fixed by implementing the method directly instead of inheriting it from SVC; the inheritance from SVC in the PegasosQSVC class has been removed. To keep the score method, PegasosQSVC now inherits from the scikit-learn mixin class ClassifierMixin.
In previous releases, at the backpropagation stage of OpflowQNN, gradients were computed for each sample in a dataset individually, and the obtained values were then aggregated into one output array; thus, at least one job was submitted per sample. Now, gradients are computed for all samples in a dataset in one go by passing a list of values for a single parameter to CircuitSampler. Therefore, the number of jobs required for such computations is significantly reduced. This improvement may speed up the training process in a cloud environment, where the queue time for submitting a job may be a major contribution to the overall training time.
Introduced two new classes, EffectiveDimension and LocalEffectiveDimension, for calculating the capacity of quantum neural network models through the computation of the Fisher information matrix. The local effective dimension bounds the generalization error of QNNs and only accepts single parameter sets as inputs. The global effective dimension (or just effective dimension) can be used as a measure of the expressibility of the model, and accepts multiple parameter sets.
Objective functions constructed by the neural network classifiers and regressors now include an averaging factor that is evaluated as 1 / number_of_samples. Computed averaged objective values are passed to a user-specified callback, if any. Users may notice a dramatic decrease in the objective values in their callbacks; this is due to this averaging factor.
Added support for saving and loading machine learning models. This support is introduced in TrainableModel, so all of its sub-classes can be saved and loaded; kernel-based models can be saved and loaded as well. When a model is saved, all model parameters are saved to a file, including a quantum instance that is referenced by internal objects. That means that if a model is loaded from a file and is used, for instance, for inference, the same quantum instance and the corresponding backend will be used, even if it was a cloud backend. A hypothetical round trip is sketched below.
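A hypothetical round trip (the file name, model and data are illustrative; load must be called on the same class that saved the model):

```python
from qiskit_machine_learning.algorithms import VQC

# `vqc` is assumed to be an already trained VQC instance
vqc.save("vqc_model.model")

loaded = VQC.load("vqc_model.model")
predictions = loaded.predict(features)  # `features` assumed available
```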
Added a new feature in CircuitQNN that ensures the unbound_pass_manager is called when caching the QNN circuit and that the bound_pass_manager is called when QNN parameters are assigned.
Added a new feature in QuantumKernel that ensures the bound_pass_manager, when provided via the QuantumInstance, is used when transpiling the kernel circuits.
Added support for running with Python 3.10. At the time of the release, Torch didn't have a Python 3.10 version.
The previously deprecated BaseBackend class has been removed. It was originally deprecated in the Qiskit Terra 0.18.0 release.
Support for running with Python 3.6 has been removed. To run Machine Learning you need a minimum Python version of 3.7.
The functions breast_cancer, digits, gaussian, iris and wine in the datasets module are deprecated and should not be used.
CrossEntropySigmoidLoss is deprecated and marked for removal.
Removed support of l1 and l2 values as loss function definitions. Please use absolute_error and squared_error instead.
Fixes in the Ad Hoc dataset. Fixed an error raised when n=3 is passed to ad_hoc_data. When the value of n is not 2 or 3, a ValueError is raised with a message that the only supported values of n are 2 and 3.
Previously, VQC would throw an error if trained on batches of data where not all of the target labels found in the full dataset were present. This was because VQC interpreted the number of unique targets in the current batch as the number of classes. VQC is now hard-coded to expect one-hot-encoded targets and therefore determines the number of classes from the shape of the target array.
Fixes an issue where VQC could not be trained on multiclass datasets; it returned nan values on some iterations. This is fixed in two ways. First, the default parity function is now guaranteed to be able to assign at least one output bitstring to each class, so long as 2**N >= C, where N is the number of output qubits and C is the number of classes. This guarantees that it is at least possible for every class to be predicted with a non-zero probability. Second, even with this change it is still possible that on a given training instance a class is predicted with 0 probability. Previously this could lead to nan in the CrossEntropyLoss calculation. We now replace 0 probabilities with a small positive value to ensure the loss cannot return nan.
Fixes an issue in QuantumKernel where evaluating a quantum kernel for data with dimension d>2 raised an error. This is fixed by changing the hard-coded reshaping of one-dimensional arrays in the evaluate method.
Fixes an issue where VQC would fail with warm_start=True. The extraction of the initial point in TrainableModel from the final point of the minimization had not been updated to reflect the refactoring of optimizers in qiskit-terra: the old optimize method, which returned a tuple, was deprecated and a new method, minimize, was created that returns an OptimizerResult object. We now correctly recover the final point of the minimization from previous fits to use for a warm start in subsequent fits.
Added GPU support to TorchConnector. Now, if a hybrid PyTorch model is being trained on a GPU, TorchConnector correctly detaches tensors, moves them to the CPU, evaluates forward and backward passes, and moves the resulting tensors back to the same device they came from.
Fixed a bug when a sparse array is passed to VQC as labels. Sparse arrays can easily appear when labels are encoded via scikit-learn. Now both NeuralNetworkClassifier and VQC support sparse arrays and convert them to dense arrays in the implementation.
Addition of a QuantumKernelTrainer object, which may be used by kernel-based machine learning algorithms to perform optimization of some QuantumKernel parameters before training the model. Addition of a new base class, KernelLoss, in the loss_functions package, and of a new KernelLoss sub-class, SVCLoss.

TrainableModel, and its sub-classes such as VQC, have a new optional argument callback. Users can optionally provide a callback function that can access the intermediate training data to track the optimization process; otherwise it defaults to None. The callback function takes two parameters: the weights for the objective function and the computed objective value. For each iteration, the optimizer invokes the callback, passing the current weights and the computed value of the objective function. A hypothetical sketch is shown below.
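A hypothetical sketch of providing the callback at construction time (num_qubits and the list-based logging are illustrative):

```python
from qiskit_machine_learning.algorithms import VQC

objective_values = []

# invoked at each iteration with the current weights and objective value
def log_objective(weights, obj_value):
    objective_values.append(obj_value)

vqc = VQC(num_qubits=2, callback=log_objective)
```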
Classification models (i.e., models that extend the NeuralNetworkClassifier class, like VQC) can now handle categorical target data in methods like score(). Categorical data is inferred from the presence of string-type data and is automatically encoded using either one-hot or integer encodings. The encoder type is determined by the one_hot argument supplied when instantiating the model.
There's an additional transpilation step introduced in CircuitQNN that is invoked when a quantum instance is set. A circuit passed to CircuitQNN is transpiled and saved for subsequent uses. So, every time the circuit is executed, it is already transpiled, and the overall time of the forward pass is reduced. Due to implementation limitations of RawFeatureVector, it can't be transpiled in advance, so it is transpiled every time it needs to be executed, and only when all parameters are bound. This means the overall performance when RawFeatureVector is used stays the same.
Introduced a new classification algorithm, an alternative version of the Quantum Support Vector Classifier (QSVC) that is trained via the Pegasos algorithm (https://home.ttic.edu/~nati/Publications/PegasosMPB.pdf) instead of the dual optimization problem used in sklearn. This algorithm yields a training complexity that is independent of the size of the training set (see the to-be-published Master's Thesis "Comparing Quantum Neural Networks and Quantum Support Vector Machines" by Arne Thomsen), such that PegasosQSVC is expected to train faster than QSVC for sufficiently large training sets. A usage sketch follows.
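A hypothetical usage sketch; the kernel below uses the newer FidelityQuantumKernel so the example is self-contained, and C and num_steps are illustrative values:

```python
from qiskit.circuit.library import ZZFeatureMap
from sklearn.datasets import make_blobs

from qiskit_machine_learning.algorithms import PegasosQSVC
from qiskit_machine_learning.kernels import FidelityQuantumKernel

features, labels = make_blobs(n_samples=20, centers=2, center_box=(-1, 1), cluster_std=0.1)

kernel = FidelityQuantumKernel(feature_map=ZZFeatureMap(2))
pegasos = PegasosQSVC(quantum_kernel=kernel, C=1000, num_steps=100)
pegasos.fit(features, labels)
print(pegasos.score(features, labels))
```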
QuantumKernel transpiles all circuits before execution. However, this information was not being passed to the quantum instance, which therefore called the transpiler many times during the execution of the algorithm. Now, had_transpiled=True is passed correctly and the algorithm runs faster.
QuantumKernel now provides an interface for users to specify a new class field, user_parameters. User parameters are an array of Parameter objects corresponding to parameterized quantum gates in the feature map circuit that the user wishes to tune. This is useful in algorithms where feature map parameters must be bound and re-bound many times (i.e., variational algorithms). Users may also use a new function, assign_user_parameters, to assign real values to some or all of the user parameters in the feature map.
Introduced TorchRuntimeClient for training a quantum model or a hybrid quantum-classical model faster using Qiskit Runtime. It can also be used for predicting results with the trained model or calculating the score of the trained model faster using Qiskit Runtime.
If positional arguments were passed into QSVR or QSVC and these classes were printed, an exception was raised. This has been fixed. Positional arguments in QSVR and QSVC are now deprecated.
Fixed a bug in QuantumKernel where, for the statevector simulator, all circuits were constructed and transpiled at once, leading to high memory usage. Now the circuits are batched similarly to how it was previously done for non-statevector simulators (the same flag is used for both now; previously batch_size was silently ignored by the statevector simulator).
Fix a bug where TorchConnector failed on backward pass computation due to empty parameters for inputs or weights; validation has been added to guard against this case.
TwoLayerQNN now passes the value of the exp_val parameter in the constructor to the constructor of its parent class, OpflowQNN.
In some configurations, the forward pass of a neural network could return the same value across multiple calls, even if different weights were passed. This behavior was confirmed with the AQGD optimizer and was due to a bug in the implementation of the objective functions. They cache a value obtained at the forward pass to be re-used in the backward pass. Initially, this cache was based on the identifier (a call of the id() function) of the weights array. AQGD re-uses the same array for weights: it updates the values while keeping the instance of the array the same. This caused the same forward-pass value to be re-used across all iterations. Now the forward pass cache is based on the actual values of the weights instead of identifiers.
Fix a bug where qiskit_machine_learning.circuit.library.RawFeatureVector.copy() didn't copy all internal settings, which could lead to issues with the copied circuit. As a consequence, qiskit_machine_learning.circuit.library.RawFeatureVector.bind_parameters() is also fixed.
Fixes a bug where VQC could not be instantiated unless either feature_map or ansatz was provided (#217). VQC is now instantiated with a default feature_map and/or ansatz in that case.
The QNN weight parameter in TorchConnector is now registered in the torch DAG as weight, instead of _weights. This is consistent with the PyTorch naming convention and the weight property used to get access to the computed weights.
A base class TrainableModel is introduced for machine learning models. This class follows scikit-learn principles and makes quantum machine learning compatible with classical models. Both NeuralNetworkClassifier and NeuralNetworkRegressor extend this class. A base class ObjectiveFunction is introduced for objective functions optimized by machine learning models. Three objective functions used by the ML models are introduced: BinaryObjectiveFunction, MultiClassObjectiveFunction, and OneHotObjectiveFunction. These functions are used internally by the models.
The optimizer argument for the classes NeuralNetworkClassifier and NeuralNetworkRegressor, both of which extend the TrainableModel class, is made optional, with the default value being SLSQP(). The same is true for the classes VQC and VQR, as they inherit from these classes.
The constructor of NeuralNetwork, and of all classes that inherit from it, has a new parameter input_gradients, which defaults to False. Previously, this parameter could only be set using the setter method. Note that TorchConnector used to set input_gradients of the NeuralNetwork it was instantiated with to True; this is no longer the case. So if you use TorchConnector and want to compute the gradients w.r.t. the input, make sure you set input_gradients=True on the NeuralNetwork before passing it to TorchConnector, as the sketch below illustrates.
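This note concerns the Opflow-era networks; purely for a self-contained illustration, the sketch below uses the newer EstimatorQNN, which exposes the same input_gradients flag:

```python
from qiskit import QuantumCircuit
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes

from qiskit_machine_learning.connectors import TorchConnector
from qiskit_machine_learning.neural_networks import EstimatorQNN

fmap = ZZFeatureMap(2)
ansatz = RealAmplitudes(2, reps=1)
qc = QuantumCircuit(2)
qc.compose(fmap, inplace=True)
qc.compose(ansatz, inplace=True)

# enable input gradients before wrapping the network
qnn = EstimatorQNN(
    circuit=qc,
    input_params=fmap.parameters,
    weight_params=ansatz.parameters,
    input_gradients=True,
)
model = TorchConnector(qnn)
```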
Added a parameter initial_point to the neural network classifiers and regressors. This is an array that is passed to an optimizer as an initial point to start from.
Computation of gradients with respect to input data in the backward method of NeuralNetwork is now optional. By default, gradients are not computed. They may be inspected and turned on, if required, by getting or setting the new input_gradients property.

Classifiers and regressors now extend ClassifierMixin and RegressorMixin from scikit-learn, respectively, and rely on their methods for score calculation. This also adds the ability to pass sample weights as an optional parameter to the score methods.
The valid values passed to the loss argument of the TrainableModel constructor were partially deprecated (i.e., loss='l1' is replaced with loss='absolute_error' and loss='l2' is replaced with loss='squared_error'). This affects instantiation of classes like NeuralNetworkClassifier. This change was made to reduce confusion that stems from using the lowercase 'l' character, which can be mistaken for a numeric '1' or a capital 'I'. You should update your model instantiations by replacing 'l1' with 'absolute_error' and 'l2' with 'squared_error'.
The _weights property of TorchConnector is deprecated in favor of the weight property, which is PyTorch-compatible. By default, PyTorch layers expose weight properties to get access to the computed weights.
This fixes the exception that occurred when no optimizer argument was passed to NeuralNetworkClassifier and NeuralNetworkRegressor.
Fixes the computation of gradients in TorchConnector when a batch of input samples is provided.
TorchConnector now returns the correct input gradient dimensions during the backward pass in hybrid nn training.
Added dedicated handling of ComposedOp as an operator in OpflowQNN. In this case, the output shape is determined from the first operator in the ComposedOp instance.
Fix the dimensions of the gradient in the quantum generator for the qGAN training.