Gradient descent (fixed learning rate)
- class pints.GradientDescent(x0, sigma0=0.1, boundaries=None)
  Gradient-descent method with a fixed learning rate.
  The initial learning rate is set as min(sigma0), but this can be changed at any time with set_learning_rate(). This is an unbounded method: any boundaries will be ignored. A complete usage sketch is given at the end of this section.
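
  At each iteration the method steps along the negative gradient, scaled by the fixed learning rate. As an illustrative sketch of the textbook update (eta and dfx are hypothetical names for the learning rate and the gradient of the objective at the current point x, not the class's internals):

    # Plain gradient-descent update with a fixed learning rate eta:
    #   x_new = x - eta * grad f(x)
    x_new = x - eta * dfx
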
- ask()
  See Optimiser.ask().
- f_best()
  See Optimiser.f_best().
- f_guessed()
  For optimisers in which the best guess of the optimum (see x_guessed()) differs from the best-seen point (see x_best()), this method returns an estimate of the objective function value at x_guessed.

  Notes:
  For many optimisers the best guess is simply the best point seen during the optimisation, so that this method is equivalent to f_best().
  Because x_guessed is not required to be a point that the optimiser has visited, the value f(x_guessed) may be unknown. In these cases, an approximation of f(x_guessed) may be returned.
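
  To make the distinction concrete, a small sketch (opt stands for any running PINTS optimiser, such as the GradientDescent instance built in the usage example at the end of this section):

    fb = opt.f_best()     # objective value at the best-seen point, x_best()
    fg = opt.f_guessed()  # (possibly approximate) objective value at x_guessed()

  For many optimisers, as noted above, the two values coincide.
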
- name()
  See Optimiser.name().
- running()
  See Optimiser.running().
- set_hyper_parameters(x)
  See pints.TunableMethod.set_hyper_parameters().
  The hyper-parameter vector is [learning_rate].
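
  For example (a minimal sketch; the two-dimensional starting point is arbitrary):

    import numpy as np
    import pints

    opt = pints.GradientDescent(np.array([1.0, 1.0]))
    # The hyper-parameter vector is [learning_rate], so this is
    # equivalent to opt.set_learning_rate(0.01):
    opt.set_hyper_parameters([0.01])
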
- set_learning_rate(eta)
  Sets the learning rate for this optimiser.

  Parameters:
  eta (float) – The learning rate, as a float greater than zero.
- stop()
  Checks if this method has run into trouble and should terminate. Returns False if everything’s fine, or a short message (e.g. “Ill-conditioned matrix.”) if the method should terminate.
- tell(reply)
  See Optimiser.tell().
- x_best()
  See Optimiser.x_best().
- x_guessed()
  Returns the optimiser’s current best estimate of where the optimum is.
  For many optimisers, this will simply be the point for which the minimal error or maximum likelihood was observed, so that x_guessed = x_best. However, optimisers like pints.CMAES and its derivatives maintain a separate “best guess” value that does not necessarily correspond to any of the points evaluated during the optimisation.
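
Finally, a minimal end-to-end sketch of the ask-and-tell interface described above. It assumes, as for other gradient-based PINTS methods, that tell() expects one (value, gradient) pair per point returned by ask(); the Quadratic error measure is a toy written for this sketch, not part of PINTS:

    import numpy as np
    import pints

    class Quadratic(pints.ErrorMeasure):
        # Toy error measure f(x) = x . x, with analytical gradient 2x.
        def n_parameters(self):
            return 2
        def __call__(self, x):
            return float(np.sum(np.asarray(x) ** 2))
        def evaluateS1(self, x):
            x = np.asarray(x, dtype=float)
            return float(np.sum(x ** 2)), 2 * x

    f = Quadratic()
    opt = pints.GradientDescent(np.array([3.0, -2.0]), sigma0=0.1)
    opt.set_learning_rate(0.05)

    for _ in range(100):
        xs = opt.ask()                           # points to evaluate
        opt.tell([f.evaluateS1(x) for x in xs])  # (value, gradient) per point

    print(opt.x_best(), opt.f_best())

With the quadratic above, each step multiplies the current point by (1 - 2 * eta) = 0.9, so the iterates shrink steadily towards the optimum at the origin.
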