Gradient descent (fixed learning rate)

class pints.GradientDescent(x0, sigma0=0.1, boundaries=None)[source]

Gradient-descent method with a fixed learning rate.

The initial learning rate is set to min(sigma0), but it can be changed at any time with set_learning_rate().

This is an unbounded method: any boundaries will be ignored.
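
A minimal usage sketch is shown below. The quadratic error measure is a hypothetical stand-in defined only for illustration, and the starting point, sigma0, and iteration limit are arbitrary choices:

    import numpy as np
    import pints

    # Hypothetical error measure f(x) = sum(x ** 2), defined only for this sketch.
    # Gradient descent needs sensitivities, so evaluateS1() must be implemented.
    class Quadratic(pints.ErrorMeasure):
        def n_parameters(self):
            return 2

        def __call__(self, x):
            return float(np.sum(np.asarray(x, dtype=float) ** 2))

        def evaluateS1(self, x):
            x = np.asarray(x, dtype=float)
            return float(np.sum(x ** 2)), 2 * x

    # min(sigma0) sets the initial learning rate (here 0.1).
    opt = pints.OptimisationController(
        Quadratic(), x0=[3, -2], sigma0=0.1, method=pints.GradientDescent)
    opt.set_max_iterations(200)
    x_found, f_found = opt.run()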

ask()[source]

See Optimiser.ask().

f_best()[source]

See Optimiser.f_best().

f_guessed()

For optimisers in which the best guess of the optimum (see x_guessed()) differs from the best-seen point (see x_best()), this method returns an estimate of the objective function value at x_guessed.

Notes:

  1. For many optimisers the best guess is simply the best point seen during the optimisation, so that this method is equivalent to f_best().

  2. Because x_guessed is not required to be a point that the optimiser has visited, the value f(x_guessed) may be unknown. In these cases, an approximation of f(x_guessed) may be returned.

fbest()

Deprecated alias of f_best().

learning_rate()[source]

Returns this optimiser’s learning rate.

n_hyper_parameters()[source]

See pints.TunableMethod.n_hyper_parameters().

name()[source]

See Optimiser.name().

needs_sensitivities()[source]

See Optimiser.needs_sensitivities().

running()[source]

See Optimiser.running().

set_hyper_parameters(x)[source]

See pints.TunableMethod.set_hyper_parameters().

The hyper-parameter vector is [learning_rate].

set_learning_rate(eta)[source]

Sets the learning rate for this optimiser.

Parameters:

eta (float) – The learning rate, as a float greater than zero.
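
The learning rate can be changed either directly or through the generic hyper-parameter interface; a short sketch (the values are arbitrary):

    import pints

    opt = pints.GradientDescent(x0=[3.0, -2.0], sigma0=0.5)  # initial rate: min(sigma0) = 0.5
    opt.set_learning_rate(0.01)       # direct setter
    opt.set_hyper_parameters([0.01])  # equivalent: the hyper-parameter vector is [learning_rate]
    print(opt.learning_rate())        # 0.01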

stop()

Checks if this method has run into trouble and should terminate. Returns False if everything’s fine, or a short message (e.g. “Ill-conditioned matrix.”) if the method should terminate.

tell(reply)[source]

See Optimiser.tell().
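
A sketch of the low-level ask-and-tell loop on f(x) = sum(x ** 2). Because needs_sensitivities() is True for this method, each entry in the reply is assumed to be a (value, gradient) pair, one per point returned by ask():

    import numpy as np
    import pints

    opt = pints.GradientDescent(x0=[3.0, -2.0], sigma0=0.1)
    for _ in range(100):
        xs = opt.ask()  # points to evaluate
        fs = [(float(np.sum(np.asarray(x, dtype=float) ** 2)),  # objective value
               2 * np.asarray(x, dtype=float)) for x in xs]     # gradient
        opt.tell(fs)
    print(opt.x_best(), opt.f_best())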

x_best()[source]

See Optimiser.x_best().

x_guessed()

Returns the optimiser’s current best estimate of where the optimum is.

For many optimisers, this will simply be the point for which the minimal error or maximum likelihood was observed, so that x_guessed = x_best. However, some optimisers, such as pints.CMAES and its derivatives, maintain a separate “best guess” value that does not necessarily correspond to any of the points evaluated during the optimisation.
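
Continuing the ask-and-tell sketch under tell() above, the two estimates can be compared after a run:

    # Best point actually evaluated vs. the optimiser's own current estimate.
    print(opt.x_best(), opt.f_best())
    print(opt.x_guessed(), opt.f_guessed())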

xbest()

Deprecated alias of x_best().