Hamiltonian MCMC

class pints.HamiltonianMCMC(x0, sigma0=None)[source]

Implements Hamiltonian Monte Carlo as described in [1].

Uses a physical analogy of a particle moving across a landscape under Hamiltonian dynamics to aid efficient exploration of parameter space. Introduces an auxiliary variable – the momentum (p_i) of a particle moving in dimension i of negative log posterior space – which supplements the position (q_i) of the particle in parameter space. The particle’s motion is dictated by solutions to Hamilton’s equations,

\[\begin{split}dq_i/dt &= \partial H/\partial p_i\\ dp_i/dt &= - \partial H/\partial q_i.\end{split}\]

The Hamiltonian is given by,

\[\begin{split}H(q,p) &= U(q) + KE(p)\\ &= -\log(p(X|q)p(q)) + \sum_{i=1}^{d} p_i^2/2m_i,\end{split}\]

where d is the dimensionality of the model and m_i is the ‘mass’ assigned to each dimension (often chosen to be 1 by default).
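As a worked illustration of the formula above, the following sketch evaluates H(q, p) in pure Python for a standard Gaussian target with unit masses; the target choice and the function name `hamiltonian` are illustrative assumptions, not pints code:

```python
def hamiltonian(q, p, m):
    # Potential energy: negative log of an (unnormalised) standard Gaussian
    # target, i.e. U(q) = sum(q_i^2) / 2.  (Illustrative choice of target.)
    u = 0.5 * sum(qi * qi for qi in q)
    # Kinetic energy: KE(p) = sum(p_i^2 / (2 m_i)).
    ke = sum(pi * pi / (2.0 * mi) for pi, mi in zip(p, m))
    return u + ke

# U = (1 + 4)/2 = 2.5 and KE = (0.25 + 0.25)/2 = 0.25, so H = 2.75
print(hamiltonian([1.0, 2.0], [0.5, -0.5], [1.0, 1.0]))
```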

To numerically integrate Hamilton’s equations, it is essential to use a symplectic discretisation routine, the most common of which is the leapfrog method,

\[\begin{split}p_i(t + \epsilon/2) &= p_i(t) - (\epsilon/2) \partial U(q(t))/\partial q_i\\ q_i(t + \epsilon) &= q_i(t) + \epsilon p_i(t + \epsilon/2) / m_i\\ p_i(t + \epsilon) &= p_i(t + \epsilon/2) - (\epsilon/2) \partial U(q(t + \epsilon))/\partial q_i\end{split}\]

In particular, the algorithm we implement follows eqs. (4.14)-(4.16) in [1], since we allow a different epsilon for each dimension.
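The three leapfrog updates above can be sketched in pure Python. This is a minimal illustration on a standard Gaussian target, not the pints implementation; the names `grad_u` and `leapfrog_step` and the per-dimension step sizes are assumptions for the example:

```python
def grad_u(q):
    # Gradient of U(q) = sum(q_i^2)/2 for a standard Gaussian target,
    # so dU/dq_i = q_i.  (Illustrative target, not pints code.)
    return list(q)

def leapfrog_step(q, p, eps, m):
    # Half-step momentum kick, full-step position drift, half-step kick.
    g = grad_u(q)
    p = [pi - 0.5 * ei * gi for pi, ei, gi in zip(p, eps, g)]
    q = [qi + ei * pi / mi for qi, ei, pi, mi in zip(q, eps, p, m)]
    g = grad_u(q)
    p = [pi - 0.5 * ei * gi for pi, ei, gi in zip(p, eps, g)]
    return q, p

def hamiltonian(q, p, m):
    return 0.5 * sum(qi * qi for qi in q) + sum(
        pi * pi / (2.0 * mi) for pi, mi in zip(p, m))

q, p, m = [1.0, -0.5], [0.3, 0.8], [1.0, 1.0]
eps = [0.1, 0.05]  # a different epsilon per dimension, as in eqs. (4.14)-(4.16)
h0 = hamiltonian(q, p, m)
for _ in range(100):
    q, p = leapfrog_step(q, p, eps, m)
# Being symplectic, leapfrog keeps the Hamiltonian nearly constant:
print(abs(hamiltonian(q, p, m) - h0))
```

The near-conservation of H over many steps is what makes long trajectories feasible; a non-symplectic integrator (e.g. forward Euler) would let the energy drift systematically.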

Extends SingleChainMCMC.

References

[1] “MCMC using Hamiltonian dynamics”. Radford M. Neal, Chapter 5 of the Handbook of Markov Chain Monte Carlo by Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng.
ask()[source]

See SingleChainMCMC.ask().

divergent_iterations()[source]

Returns the iteration numbers of any divergent iterations.

epsilon()[source]

Returns the epsilon used in the leapfrog algorithm.

hamiltonian_threshold()[source]

Returns the threshold difference in Hamiltonian value from one iteration to the next which determines whether an iteration is divergent.
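The divergence rule described here can be sketched as follows; `is_divergent`, the pair data, and the threshold value are illustrative assumptions, not pints internals:

```python
def is_divergent(h_before, h_after, threshold=1e3):
    # An iteration counts as divergent when the leapfrog trajectory changes
    # the Hamiltonian by more than the threshold.  (Sketch of the documented
    # rule; the default value used here is illustrative.)
    return abs(h_after - h_before) > threshold

# Hypothetical (h_before, h_after) pairs for three successive iterations:
pairs = [(10.2, 10.3), (9.8, 2500.0), (11.0, 10.9)]
divergent = [i for i, (h0, h1) in enumerate(pairs) if is_divergent(h0, h1)]
print(divergent)  # [1]
```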

in_initial_phase()

For methods that need an initial phase (see needs_initial_phase()), this method returns True if the method is currently configured to be in its initial phase. For other methods a NotImplementedError is raised.

leapfrog_step_size()[source]

Returns the step size for the leapfrog algorithm.

leapfrog_steps()[source]

Returns the number of leapfrog steps to carry out for each iteration.

n_hyper_parameters()[source]

See TunableMethod.n_hyper_parameters().

name()[source]

See pints.MCMCSampler.name().

needs_initial_phase()

Returns True if this method needs an initial phase, for example an adaptation-free period for adaptive covariance methods, or a warm-up phase for DREAM.

needs_sensitivities()[source]

See pints.MCMCSampler.needs_sensitivities().

replace(current, current_log_pdf, proposed=None)

Replaces the internal current position, current LogPDF, and proposed point (if any) by the user-specified values.

This method can only be used once the initial position and LogPDF have been set (so after at least 1 round of ask-and-tell).

This is an optional method, and some samplers may not support it.

scaled_epsilon()[source]

Returns the scaled epsilon used in the leapfrog algorithm.

set_epsilon(epsilon)[source]

Sets the epsilon for the leapfrog algorithm.

set_hamiltonian_threshold(hamiltonian_threshold)[source]

Sets the threshold difference in Hamiltonian value from one iteration to the next which determines whether an iteration is divergent.

set_hyper_parameters(x)[source]

The hyper-parameter vector is [leapfrog_steps, leapfrog_step_size].

See TunableMethod.set_hyper_parameters().

set_initial_phase(in_initial_phase)

For methods that need an initial phase (see needs_initial_phase()), this method toggles the initial phase algorithm. For other methods a NotImplementedError is raised.

set_leapfrog_step_size(step_size)[source]

Sets the step size for the leapfrog algorithm.

set_leapfrog_steps(steps)[source]

Sets the number of leapfrog steps to carry out for each iteration.

tell(reply)[source]

See pints.SingleChainMCMC.tell().