PSO¶
- class pints.PSO(x0, sigma0=None, boundaries=None)[source]¶
Finds the best parameters using the PSO method described in [1].
Particle Swarm Optimisation (PSO) is a global search method (so refinement with a local optimiser is advised!) that works well for problems in high dimensions and with many local minima. Because it treats each parameter independently, it does not require preconditioning of the search space.
In a particle swarm optimization, the parameter space is explored by n independent particles. The particles perform a pseudo-random walk through the parameter space, guided by their own personal best score and the global optimum found so far.
The method starts by creating a swarm of n particles and assigning each an initial position and initial velocity (see the explanation of the arguments hints and v for details). Each particle’s score is calculated and set as the particle’s current best local score pl. The best score of all the particles is set as the best global score pg.
Next, an iterative procedure is run that updates each particle’s velocity v and position x using:
v[k] = v[k-1] + al * (pl - x[k-1]) + ag * (pg - x[k-1])
x[k] = x[k-1] + v[k]
Here, x[k] is the particle’s current position and v[k] its current velocity. The values al and ag are scalars randomly sampled from a uniform distribution, with values bound by r * 4.1 and (1 - r) * 4.1. Thus a swarm with r = 1 will only use local information, while a swarm with r = 0 will only use global information. The de facto standard is r = 0.5. The random sampling is done each time al and ag are used: at each time step every particle performs m samplings, where m is the dimensionality of the search space.
Pseudo-code algorithm:
almax = r * 4.1
agmax = 4.1 - almax
while stopping criterion not met:
    for i in [1, 2, .., n]:
        if f(x[i]) < f(p[i]):
            p[i] = x[i]
        pg = min(p[1], p[2], .., p[n])
        for j in [1, 2, .., m]:
            al = uniform(0, almax)
            ag = uniform(0, agmax)
            v[i,j] += al * (p[i,j] - x[i,j]) + ag * (pg[i,j] - x[i,j])
            x[i,j] += v[i,j]
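The pseudo-code above can be sketched as a runnable NumPy function. This is a minimal illustration only: pso_minimise and its arguments are invented for this example and do not reflect the pints.PSO API, which handles initialisation, boundaries, and stopping criteria internally.

```python
import numpy as np

def pso_minimise(f, x0, sigma0=1.0, n=10, r=0.5, iterations=100, seed=1):
    """Minimise f with the PSO update rule from the pseudo-code above.

    Illustrative sketch only; not the pints implementation.
    """
    rng = np.random.default_rng(seed)
    m = len(x0)                      # dimensionality of the search space
    almax = r * 4.1                  # bound for the local attraction factor al
    agmax = 4.1 - almax              # bound for the global attraction factor ag

    # Initial positions scattered around x0, zero initial velocities
    x = x0 + sigma0 * rng.standard_normal((n, m))
    v = np.zeros((n, m))
    p = x.copy()                                 # personal best positions
    fp = np.array([f(xi) for xi in x])           # personal best scores

    for _ in range(iterations):
        for i in range(n):
            fx = f(x[i])
            if fx < fp[i]:                       # update personal best
                fp[i] = fx
                p[i] = x[i].copy()
            pg = p[np.argmin(fp)]                # current global best position
            # Fresh al, ag samples per dimension: m samplings per particle
            al = rng.uniform(0, almax, m)
            ag = rng.uniform(0, agmax, m)
            v[i] += al * (p[i] - x[i]) + ag * (pg - x[i])
            x[i] += v[i]
    return p[np.argmin(fp)], fp.min()

# Example: minimise a simple quadratic with optimum at (1, -2)
best_x, best_f = pso_minimise(
    lambda z: np.sum((z - np.array([1.0, -2.0])) ** 2),
    x0=np.array([0.0, 0.0]))
```

Note that, as the class description suggests, a global search like this is usually followed by refinement with a local optimiser.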
Extends PopulationBasedOptimiser.
References
- ask()[source]¶
See Optimiser.ask().
- f_best()[source]¶
See Optimiser.f_best().
- f_guessed()¶
For optimisers in which the best guess of the optimum (see x_guessed()) differs from the best-seen point (see x_best()), this method returns an estimate of the objective function value at x_guessed.
Notes:
For many optimisers the best guess is simply the best point seen during the optimisation, so that this method is equivalent to f_best().
Because x_guessed is not required to be a point that the optimiser has visited, the value f(x_guessed) may be unknown. In these cases, an approximation of f(x_guessed) may be returned.
- name()[source]¶
See Optimiser.name().
- needs_sensitivities()¶
Returns True if this method needs sensitivities to be passed in to tell() along with the evaluated error.
- population_size()¶
Returns this optimiser’s population size.
If no explicit population size has been set, None may be returned. Once running, the correct value will always be returned.
- running()[source]¶
See Optimiser.running().
- set_hyper_parameters(x)[source]¶
The hyper-parameter vector is [population_size, local_global_balance].
- set_local_global_balance(r=0.5)[source]¶
Set the balance between local and global exploration for each particle, using a parameter r such that r = 1 is a fully local search and r = 0 is a fully global search.
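As a quick check of how r partitions the sampling bounds, the snippet below evaluates the two limits from the pseudo-code in the class description. The names almax and agmax follow that pseudo-code, not the pints API.

```python
# How r splits the total range of 4.1 between the local factor al and the
# global factor ag (names follow the PSO pseudo-code, not the pints API).
for r in (1.0, 0.5, 0.0):
    almax = r * 4.1        # upper bound for the local attraction samples
    agmax = 4.1 - almax    # upper bound for the global attraction samples
    print(f"r={r}: al in [0, {almax:.2f}], ag in [0, {agmax:.2f}]")

# r = 1.0 gives agmax = 0 (fully local search);
# r = 0.0 gives almax = 0 (fully global search).
```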
- set_population_size(population_size=None)¶
Sets a population size to use in this optimisation.
If population_size is set to None, the population size will be set using the heuristic suggested_population_size().
- stop()¶
Checks if this method has run into trouble and should terminate. Returns False if everything’s fine, or a short message (e.g. “Ill-conditioned matrix.”) if the method should terminate.
- suggested_population_size(round_up_to_multiple_of=None)¶
Returns a suggested population size for this method, based on the dimension of the search space (e.g. the parameter space).
If the optional argument round_up_to_multiple_of is set to an integer greater than 1, the method will round up the estimate to a multiple of that number. This can be useful to obtain a population size based on e.g. the number of worker processes used to perform objective function evaluations.
- tell(fx)[source]¶
See Optimiser.tell().
- x_best()[source]¶
See Optimiser.x_best().
- x_guessed()¶
Returns the optimiser’s current best estimate of where the optimum is.
For many optimisers, this will simply be the point for which the minimal error or maximum likelihood was observed, so that x_guessed = x_best. However, optimisers like pints.CMAES and its derivatives maintain a separate “best guess” value that does not necessarily correspond to any of the points evaluated during the optimisation.