Release Notes


Bug Fixes

  • In some configurations the forward pass of a neural network could return the same value across multiple calls even when different weights were passed; this behavior was confirmed with the AQGD optimizer. It was caused by a bug in the implementation of the objective functions: they cache the value obtained in the forward pass so it can be re-used in the backward pass. Initially, this cache was keyed on the identifier of the weights array (the result of calling id()). AQGD re-uses the same array for the weights, updating the values in place while keeping the array instance the same, so the same forward-pass value was re-used across all iterations. The forward-pass cache is now keyed on the actual values of the weights instead of their identifier, as illustrated in the sketch below.
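
The following is an illustrative sketch of the pitfall, not the library's actual code: keying a cache on id(weights) goes stale when an optimizer such as AQGD updates the weights array in place, because the array's identifier never changes.

```python
import numpy as np

cache = {}

def forward(weights):
    """Stand-in for an expensive forward pass, cached by array identifier."""
    key = id(weights)            # old behaviour: same array object -> same cache key
    if key not in cache:
        cache[key] = float(np.sum(weights ** 2))
    return cache[key]

w = np.array([1.0, 2.0])
print(forward(w))    # 5.0
w[:] = [3.0, 4.0]    # in-place update, as AQGD does; id(w) is unchanged
print(forward(w))    # still 5.0 -- stale value; keying on tuple(w) avoids the problem
```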


New Features

  • The class TrainableModel and its sub-classes NeuralNetworkClassifier, NeuralNetworkRegressor, VQR, and VQC have a new optional argument callback, which defaults to None. Users can provide a callback function that receives intermediate training data and can be used to track the optimization process. The callback takes two parameters: the current weights of the objective function and the computed objective value. On each iteration the optimizer invokes the callback with these values; see the sketch after this list.

  • Classification models (i.e. models that extend the NeuralNetworkClassifier class, such as VQC) can now handle categorical target data in methods like fit() and score(). Categorical data is inferred from the presence of string-typed labels and is automatically encoded using either one-hot or integer encoding; the encoder type is determined by the one_hot argument supplied when instantiating the model. This is also shown in the sketch after this list.
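
A minimal sketch combining both features, assuming the qiskit-machine-learning API of this release and a local statevector simulator; the dataset, labels, and optimizer settings are illustrative only.

```python
import numpy as np
from qiskit import BasicAer
from qiskit.utils import QuantumInstance
from qiskit.algorithms.optimizers import COBYLA
from qiskit_machine_learning.algorithms import VQC

objective_values = []

def callback(weights, obj_value):
    # Invoked on each iteration with the current weights and the
    # computed value of the objective function.
    objective_values.append(obj_value)

vqc = VQC(
    num_qubits=2,
    optimizer=COBYLA(maxiter=20),
    quantum_instance=QuantumInstance(BasicAer.get_backend("statevector_simulator")),
    callback=callback,
)

X = np.random.default_rng(42).uniform(0, 1, size=(8, 2))
y = np.array(["cat", "dog"] * 4)   # string labels are detected and encoded automatically

vqc.fit(X, y)
print(vqc.score(X, y), objective_values[-1])
```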

Bug Fixes

  • Fixed a bug where qiskit_machine_learning.circuit.library.RawFeatureVector.copy() did not copy all internal settings, which could lead to issues with the copied circuit. As a consequence, qiskit_machine_learning.circuit.library.RawFeatureVector.bind_parameters() is also fixed; see the first sketch below.

  • The QNN weight parameter in TorchConnector is now registered in the torch DAG as weight instead of _weights. This is consistent with the PyTorch naming convention and with the weight property used to access the computed weights; see the second sketch below.
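
A minimal sketch of the copy fix, assuming the qiskit-machine-learning API of this release; the feature values are illustrative.

```python
from qiskit_machine_learning.circuit.library import RawFeatureVector

fv = RawFeatureVector(feature_dimension=4)   # 2-qubit amplitude-encoding circuit
fv_copy = fv.copy()                          # the copy now keeps all internal settings

# Binding parameters on the copy works as expected after the fix.
bound = fv_copy.bind_parameters([0.5, 0.5, 0.5, 0.5])
print(bound.num_qubits)                      # 2
```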
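
And a minimal sketch of the new parameter name in TorchConnector, assuming the qiskit-machine-learning and PyTorch APIs of this release and a local statevector simulator.

```python
from qiskit import BasicAer
from qiskit.utils import QuantumInstance
from qiskit_machine_learning.neural_networks import TwoLayerQNN
from qiskit_machine_learning.connectors import TorchConnector

qnn = TwoLayerQNN(
    num_qubits=2,
    quantum_instance=QuantumInstance(BasicAer.get_backend("statevector_simulator")),
)
model = TorchConnector(qnn)

# The trainable QNN parameter is now registered under the name "weight".
print([name for name, _ in model.named_parameters()])   # ['weight']
print(model.weight.shape)
```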