This package provides a core interface for working with Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). For examples, please see POMDPExamples and the Gallery.
Our goal is to provide a common programming vocabulary for:
- Expressing problems as MDPs and POMDPs.
- Writing solver software.
- Running simulations efficiently.
There are nested interfaces for expressing and interacting with (PO)MDPs: when the *explicit* interface is used, the transition and observation probabilities are defined explicitly using API functions or tables; when the *generative* interface is used, only a single-step simulator (e.g. `(s', o, r) = G(s, a)`) needs to be defined.
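The following sketch illustrates the two styles. The problem type `MyPOMDP`, its integer state, action, and observation spaces, and the particular dynamics are hypothetical placeholders for illustration; `SparseCat` is assumed to be provided by POMDPModelTools.

```julia
using POMDPs
using Random
using POMDPModelTools: SparseCat  # sparse categorical distribution

# Hypothetical problem type with Int states, actions, and observations.
struct MyPOMDP <: POMDP{Int, Int, Int} end

# Explicit interface: return a distribution object over next states.
# (A complete explicit model would also define `observation` and `reward`.)
POMDPs.transition(m::MyPOMDP, s::Int, a::Int) = SparseCat([s, s + a], [0.9, 0.1])

# Generative interface: a single-step simulator (s', o, r) = G(s, a).
function POMDPs.generate_sor(m::MyPOMDP, s::Int, a::Int, rng::AbstractRNG)
    sp = rand(rng, transition(m, s, a))  # sample the next state
    o = sp + rand(rng, -1:1)             # noisy observation of the next state
    r = -abs(sp)                         # reward for staying near the origin
    return sp, o, r
end
```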
For help, please post to the Google group or on Gitter. Check releases for information on changes.
POMDPs.jl and all packages in the JuliaPOMDP project are fully supported on Linux and OS X. Windows support is available for all native Julia packages*.
To install POMDPs.jl, run the following from the Julia REPL:
Pkg.add("POMDPs")
To install supported JuliaPOMDP packages including various solvers, first run:
```julia
using POMDPs
POMDPs.add_registry()
```
This installs the JuliaPOMDP registry so that the Julia package manager can find all the available solvers and support packages.
To check available JuliaPOMDP packages, run:
```julia
using POMDPs
POMDPs.available()
```
To install a particular solver (say SARSOP.jl), once the registry is installed, run:
Pkg.add("SARSOP")
To run a simple simulation of the classic Tiger POMDP using a policy created by the QMDP solver:
```julia
using POMDPs, POMDPModels, POMDPSimulators, QMDP

pomdp = TigerPOMDP()

# initialize a solver and compute a policy
solver = QMDPSolver() # from QMDP
policy = solve(solver, pomdp)
belief_updater = updater(policy) # the default QMDP belief updater (discrete Bayesian filter)

# run a short simulation with the QMDP policy
history = simulate(HistoryRecorder(max_steps=10), pomdp, policy, belief_updater)

# look at what happened
for (s, b, a, o) in eachstep(history, "sbao")
    println("State was $s,")
    println("belief was $b,")
    println("action $a was taken,")
    println("and observation $o was received.\n")
end
println("Discounted reward was $(discounted_reward(history)).")
```
For more examples with visualization, see POMDPGallery.jl.
Several tutorials are hosted in the POMDPExamples repository.
Detailed documentation can be found here.
Many packages use the POMDPs.jl interface, including MDP and POMDP solvers, support tools, and extensions to the POMDPs.jl interface.
POMDPs.jl itself contains only the interface for communicating about problem definitions. Most of the functionality for interacting with problems is actually contained in several support tools packages (a brief usage sketch follows the list):
- POMDPModelTools
- BeliefUpdaters
- POMDPPolicies
- POMDPSimulators
- POMDPModels
- POMDPTesting
- ParticleFilters
- RLInterface
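As a brief sketch of how these fit together, assuming BeliefUpdaters provides `DiscreteUpdater`, POMDPModels provides `TigerPOMDP`, and the initial belief distribution is accessed with `initial_state_distribution` (an assumption about this version of the interface):

```julia
using POMDPs, POMDPModels, BeliefUpdaters

pomdp = TigerPOMDP()
up = DiscreteUpdater(pomdp)      # exact discrete Bayesian filter from BeliefUpdaters
b0 = initialize_belief(up, initial_state_distribution(pomdp))
a = first(actions(pomdp))        # an arbitrary action
b1 = update(up, b0, a, true)     # belief after taking `a` and observing `true`
```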
MDP solvers:

Package | Online/Offline | Continuous States | Continuous Actions
---|---|---|---
Value Iteration | Offline | N | N
Local Approximation Value Iteration | Offline | Y | N
Monte Carlo Tree Search | Online | Y (DPW) | Y (DPW)
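For illustration, an MDP from the table above can be solved with the same `solve` interface as in the POMDP example. This is a sketch, assuming the Value Iteration entry corresponds to the DiscreteValueIteration package and that POMDPModels provides a `GridWorld` model:

```julia
using POMDPs, POMDPModels, DiscreteValueIteration

mdp = GridWorld()                                  # a simple benchmark MDP
solver = ValueIterationSolver(max_iterations=100)  # parameter value is illustrative
policy = solve(solver, mdp)                        # same solve call used for POMDPs
```

Swapping in Monte Carlo Tree Search would only require constructing a different solver object.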
POMDP solvers:

Package | Online/Offline | Continuous States | Continuous Actions | Continuous Observations
---|---|---|---|---
QMDP | Offline | N | N | N
FIB | Offline | N | N | N
SARSOP* | Offline | N | N | N
BasicPOMCP | Online | Y | N | N¹
ARDESPOT | Online | Y | N | N¹
MCVI | Offline | Y | N | Y
POMDPSolve* | Offline | N | N | N
IncrementalPruning | Offline | N | N | N
POMCPOW | Online | Y | Y² | Y
AEMS | Online | N | N | N
1: Will run, but will not converge to optimal solution
2: Will run, but convergence to optimal solution is not proven, and it will likely not work well on multidimensional action spaces
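Because every solver implements the same `solve` interface, exchanging one POMDP solver for another in the Tiger example above is a one-line change. A sketch, assuming SARSOP.jl exports `SARSOPSolver`:

```julia
using POMDPs, POMDPModels, SARSOP

pomdp = TigerPOMDP()
policy = solve(SARSOPSolver(), pomdp)  # replaces solve(QMDPSolver(), pomdp)
```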
Reinforcement learning solvers:

Package | Continuous States | Continuous Actions
---|---|---
TabularTDLearning | N | N
DeepQLearning | Y¹ | N
1: For POMDPs, it will use the observation instead of the state as input to the policy. See RLInterface.jl for more details.
These packages were written for POMDPs.jl in Julia 0.6 and have not yet been updated to Julia 1.0.
- DESPOT
*These packages require non-Julia dependencies.
If POMDPs.jl is useful in your research and you would like to acknowledge it, please cite this paper:
```
@article{egorov2017pomdps,
    author  = {Maxim Egorov and Zachary N. Sunberg and Edward Balaban and Tim A. Wheeler and Jayesh K. Gupta and Mykel J. Kochenderfer},
    title   = {{POMDP}s.jl: A Framework for Sequential Decision Making under Uncertainty},
    journal = {Journal of Machine Learning Research},
    year    = {2017},
    volume  = {18},
    number  = {26},
    pages   = {1-5},
    url     = {http://jmlr.org/papers/v18/16-300.html}
}
```