AdvancedHMC.jl is part of Turing.jl, a probabilistic programming library in Julia. If you are interested in using AdvancedHMC.jl through a probabilistic programming language, please check it out!
### NEWS
- We will present AdvancedHMC.jl at AABI 2019 in Vancouver, Canada.
- We presented a poster for AdvancedHMC.jl at StanCon 2019 in Cambridge, UK. (pdf)
### API CHANGES
- [v0.2.15] `n_adapts` is no longer needed to construct `StanHMCAdaptor`; the old constructor is deprecated.
- [v0.2.8] Two exported types are renamed: `Multinomial` -> `MultinomialTS` and `Slice` -> `SliceTS`.
- [v0.2.0] The gradient function passed to `Hamiltonian` is now expected to return a value-gradient tuple; see the sketch below.
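For instance, the v0.2.0 change means a user-supplied gradient function computes the log density and its gradient in one call. A minimal sketch, assuming a log-density function `ℓπ` like the one defined in the example below:

```julia
using ForwardDiff

# Since v0.2.0: return a (log-density, gradient) tuple from a single call
∂ℓπ∂θ(θ) = (ℓπ(θ), ForwardDiff.gradient(ℓπ, θ))
```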
### Define the target distribution and its gradient
```julia
using Distributions: logpdf, MvNormal
using DiffResults: GradientResult, value, gradient
using ForwardDiff: gradient!

# Target: a D-dimensional standard Gaussian
D = 10
target = MvNormal(zeros(D), ones(D))

# Log density of the target
ℓπ(θ) = logpdf(target, θ)

# Gradient function returning a (value, gradient) tuple, as required since v0.2.0
function ∂ℓπ∂θ(θ)
    res = GradientResult(θ)
    gradient!(res, ℓπ, θ)
    return (value(res), gradient(res))
end
```
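As a quick, illustrative sanity check (not part of the original example): for a standard Gaussian target, the gradient of the log density at `θ` is simply `-θ`:

```julia
θ = randn(D)
v, g = ∂ℓπ∂θ(θ)
@assert v ≈ ℓπ(θ)  # the returned value is the log density itself
@assert g ≈ -θ     # ∇ log p(θ) = -θ for a standard Gaussian
```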
### Build up an HMC sampler to draw samples
```julia
using AdvancedHMC

# Sampling parameter settings
n_samples, n_adapts = 12_000, 2_000

# Draw a random starting point
θ_init = rand(D)

# Define the metric space, Hamiltonian, sampling method and adaptor
metric = DiagEuclideanMetric(D)
h = Hamiltonian(metric, ℓπ, ∂ℓπ∂θ)
int = Leapfrog(find_good_eps(h, θ_init))
prop = NUTS{MultinomialTS,GeneralisedNoUTurn}(int)
adaptor = StanHMCAdaptor(
    Preconditioner(metric),
    NesterovDualAveraging(0.8, int),
)

# Draw samples via simulating Hamiltonian dynamics
# - `samples` will store the samples
# - `stats` will store statistics for each sample
samples, stats = sample(h, prop, θ_init, n_samples, adaptor, n_adapts; progress=true)
```
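Since `samples` is a vector of `D`-dimensional draws, basic posterior summaries are straightforward; for example:

```julia
using Statistics: mean

# The posterior mean should be close to the target's zero mean
posterior_mean = mean(samples)
```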
An important design goal of AdvancedHMC.jl is to be modular and to support algorithmic research on HMC. This modularity means that different HMC variants can be easily constructed by composing components: the preconditioning metric (i.e. the mass matrix), the leapfrog integrator, the trajectory (static or dynamic), and the adaptation scheme. The minimal example above can be adapted to a particular inference problem by picking components from the lists below.
- Unit metric: `UnitEuclideanMetric(dim)`
- Diagonal metric: `DiagEuclideanMetric(dim)`
- Dense metric: `DenseEuclideanMetric(dim)`

where `dim` is the dimensionality of the sampling space.
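For example, the diagonal metric in the example above can be swapped for a dense one without touching any other component (a sketch reusing `D`, `ℓπ` and `∂ℓπ∂θ` from above):

```julia
# Swap in a dense metric; every other component stays the same
metric = DenseEuclideanMetric(D)
h = Hamiltonian(metric, ℓπ, ∂ℓπ∂θ)
```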
- Ordinary leapfrog integrator: `Leapfrog(ϵ)`
- Jittered leapfrog integrator with jitter rate `n`: `JitteredLeapfrog(ϵ, n)`
- Tempered leapfrog integrator with tempering rate `a`: `TemperedLeapfrog(ϵ, a)`

where `ϵ` is the step size of the leapfrog integration.
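For example, replacing the ordinary leapfrog from the example above with a jittered one (the jitter rate `0.1` here is an arbitrary illustrative value):

```julia
# Jittered leapfrog: randomly perturbs the step size at each iteration
int = JitteredLeapfrog(find_good_eps(h, θ_init), 0.1)
```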
- Static HMC with a fixed number of steps (`n_steps`): `StaticTrajectory(int, n_steps)`
- HMC with a fixed total trajectory length (`len_traj`): `HMCDA(int, len_traj)`
- Original NUTS with slice sampling: `NUTS{SliceTS,ClassicNoUTurn}(int)`
- Generalised NUTS with slice sampling: `NUTS{SliceTS,GeneralisedNoUTurn}(int)`
- Original NUTS with multinomial sampling: `NUTS{MultinomialTS,ClassicNoUTurn}(int)`
- Generalised NUTS with multinomial sampling: `NUTS{MultinomialTS,GeneralisedNoUTurn}(int)`

where `int` is the integrator used.
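For example, the NUTS proposal in the example above can be swapped for static HMC or HMCDA (the trajectory settings here are illustrative):

```julia
# Static HMC with 20 leapfrog steps per sample
prop = StaticTrajectory(int, 20)

# ... or HMC with a fixed total trajectory length of 1.0
prop = HMCDA(int, 1.0)
```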
- Preconditioning on metric space `metric`: `pc = Preconditioner(metric)`
- Nesterov's dual averaging with target acceptance rate `δ` on integrator `int`: `da = NesterovDualAveraging(δ, int)`
- Combine the two above naively: `NaiveHMCAdaptor(pc, da)`
- Combine the first two using Stan's windowed adaptation: `StanHMCAdaptor(pc, da)`
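Putting the adaptor pieces together, e.g.:

```julia
pc = Preconditioner(metric)
da = NesterovDualAveraging(0.8, int)  # target acceptance rate 0.8, as above
adaptor = StanHMCAdaptor(pc, da)      # or NaiveHMCAdaptor(pc, da)
```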
All of these combinations are covered by tests in this file, except for the tempered leapfrog integrator combined with adaptation, which we found to be empirically unstable.
```julia
sample(
    rng::AbstractRNG,
    h::Hamiltonian,
    τ::AbstractProposal,
    θ::AbstractVector{<:AbstractFloat},
    n_samples::Int,
    adaptor::Adaptation.AbstractAdaptor=Adaptation.NoAdaptation(),
    n_adapts::Int=min(div(n_samples, 10), 1_000);
    drop_warmup::Bool=false,
    verbose::Bool=true,
    progress::Bool=false,
)
```
Sample `n_samples` samples using the proposal `τ` under the Hamiltonian `h`.

- The randomness is controlled by `rng`; if `rng` is not provided, `GLOBAL_RNG` will be used.
- The initial point is given by `θ`.
- The adaptor is set by `adaptor`; the default is no adaptation. It will perform `n_adapts` steps of adaptation, which defaults to the minimum of `1_000` and 10% of `n_samples`.
- `drop_warmup` specifies whether to drop the samples generated during the adaptation phase.
- `verbose` controls the verbosity.
- `progress` controls whether to show the progress meter or not.
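For example, a sketch of a fully explicit call, based on the signature above, that seeds the RNG and discards the adaptation samples:

```julia
using Random: MersenneTwister

# Explicit RNG for reproducibility; warm-up draws are dropped
samples, stats = sample(
    MersenneTwister(1234),
    h, prop, θ_init, n_samples, adaptor, n_adapts;
    drop_warmup=true, progress=true,
)
```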
If you use AdvancedHMC.jl for your own research, please consider citing the following publication: Hong Ge, Kai Xu, and Zoubin Ghahramani: Turing: a language for flexible probabilistic inference. AISTATS 2018. (pdf) (bibtex)
### REFERENCES
- Neal, R. M. (2011). MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2(11), 2. (arXiv)
- Betancourt, M. (2017). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv preprint arXiv:1701.02434.
- Girolami, M., & Calderhead, B. (2011). Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2), 123-214. (arXiv)
- Betancourt, M. J., Byrne, S., & Girolami, M. (2014). Optimizing the integrator step size for Hamiltonian Monte Carlo. arXiv preprint arXiv:1411.6669.
- Betancourt, M. (2016). Identifying the optimal integration time in Hamiltonian Monte Carlo. arXiv preprint arXiv:1601.00225.
- Hoffman, M. D., & Gelman, A. (2014). The No-U-Turn Sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1), 1593-1623. (arXiv)