GradientDescentState

class GradientDescentState(x, fun, jac, nfev, njev, nit, stepsize, learning_rate)

Bases: OptimizerState

State of GradientDescent.

Dataclass with all the fields of a generic OptimizerState, plus the learning_rate and the stepsize.
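
For illustration, a minimal sketch of how this state can be inspected during a stepwise run, assuming the stepwise interface of qiskit.algorithms.optimizers.GradientDescent (start, step, continue_condition, state); the quadratic objective below is a hypothetical example, not part of this class:

import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def objective(x):
    # Simple convex objective with its minimum at the origin.
    return float(np.sum(x**2))

optimizer = GradientDescent(maxiter=10, learning_rate=0.1)
optimizer.start(x0=np.array([1.0, -2.0]), fun=objective)
while optimizer.continue_condition():
    optimizer.step()
    state = optimizer.state  # a GradientDescentState
    print(state.nit, state.x, state.stepsize)
result = optimizer.create_result()

After each step, nit, x, and stepsize reflect the step just taken, so the loop above prints the trajectory of the optimization.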

Attributes

stepsize: float | None

Norm of the gradient on the last step.

learning_rate: LearningRate

Learning rate at the current step of the optimization process.

It behaves like a generator (use next(learning_rate) to get the learning rate for the next step), but it can also return the current learning rate with learning_rate.current.
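
A minimal sketch of this interface, assuming the LearningRate helper from qiskit.algorithms.optimizers.optimizer_utils (the list of rates is a hypothetical example):

from qiskit.algorithms.optimizers.optimizer_utils import LearningRate

learning_rate = LearningRate(learning_rate=[0.1, 0.05, 0.01])
eta = next(learning_rate)     # advance: the rate to use for the next step (0.1)
same = learning_rate.current  # re-read the current rate without advancing (0.1)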

x: POINT

Current optimization parameters.

fun: Callable[[POINT], float] | None

Function being optimized.

jac: Callable[[POINT], POINT] | None

Jacobian of the function being optimized.

nfev: int | None

Number of function evaluations so far in the optimization.

njev: int | None

Number of Jacobian evaluations so far in the optimization.

nit: int | None

Number of optimization steps performed so far.