A high-performance reinforcement learning library in JAX, specialized for robotic learning. It works out of the box with JAX-based environments/engines like Gymnax and Brax, as well as non-JAX-based environments/engines like MuJoCo, SAPIEN, Meta-World, etc.
How is it specialized? It includes popular algorithms often used in robotic learning research, like PPO and SAC, and will eventually support architectures and workflows common in visual/3D RL, such as transformers and point nets. It will further include more robotics-specific approaches such as Transporter Networks (Zeng et al., 2020).
If you use robojax in your work, please cite this repository as follows:
@misc{robojax,
  author = {Tao, Stone},
  month = {3},
  title = {{Robojax: Jax based Reinforcement Learning Algorithms and Tools}},
  url = {https://github.com/StoneT2000/robojax},
  year = {2023}
}
It's highly recommended to use mamba/conda. Otherwise, you can try installing all the packages yourself (at your own risk of not getting reproducible results):
conda env create -f environment.yml
JAX must be installed separately. To install JAX with CUDA support, follow the instructions in its README:
pip install --upgrade "jax[cuda11_pip]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
cd docker
docker build -t stonet2000/robojax .
The following modules are usually shared between RL algorithms:
- JaxLoop / Environment Loop for collecting rollouts
- Evaluation protocol for evaluating the agent during training
- Loggers for logging training and test data
- Generic `Model` interface
Everything else is usually kept inside the RL algorithm's own module; e.g. `robojax.agents.ppo` contains the PPO config, actor-critic models, loss functions, etc., all separate from e.g. `robojax.agents.sac`.
Each algorithm/agent comes equipped with an environment loop and, optionally, an eval environment loop for training and evaluation. We expect the environments used to already handle truncation and auto-reset.
During training, agents sample from the loop for some number of steps, then update the policy, and repeat.
Creating an instance of an agent, e.g. SAC, will initialize a starting `train_state` and set the configuration. All functionality depends only on the current train state and the stored configuration.
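This state-plus-config pattern can be sketched in plain Python. Everything below (the `TrainState` fields, `SACConfig`, the update rule) is an illustrative stand-in, not robojax's actual API: the point is only that `train_step` is a pure function of the current state and the fixed config, returning a new state rather than mutating anything.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TrainState:
    params: float  # stands in for network parameters
    step: int = 0

@dataclass(frozen=True)
class SACConfig:
    learning_rate: float = 1e-3

class SAC:
    def __init__(self, cfg: SACConfig):
        self.cfg = cfg                              # stored configuration
        self.train_state = TrainState(params=0.0)   # starting train state

    def train_step(self, state: TrainState, grad: float) -> TrainState:
        # Pure function of (state, config): returns a new state, no mutation.
        new_params = state.params - self.cfg.learning_rate * grad
        return replace(state, params=new_params, step=state.step + 1)

agent = SAC(SACConfig())
new_state = agent.train_step(agent.train_state, grad=2.0)
# new_state is a fresh struct; agent.train_state itself is unchanged
```

Keeping all functionality dependent only on (state, config) is what makes steps like this jittable and checkpointable in JAX.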
`train`
- Reset the environment. If `self.train_state` is `None`, initialize it. Train from there.
- Call `train_step`, which returns new `TrainState` and `TrainStepMetrics` structs.
- Optionally evaluate the model on eval envs (optionally jittable), returning an `EvalMetrics` struct.
- Log `TrainStepMetrics` and `EvalMetrics`.

`train_step`
- Collect interaction data (optionally jittable)
- Update using the interaction data (and potentially older data, e.g. in SAC) (jittable)
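The `train` / `train_step` flow above can be sketched in plain Python. All names and data structures here are illustrative stand-ins, not robojax's real signatures; in the actual library the collection and update would be jittable JAX functions.

```python
def train_step(train_state):
    # Collect interaction data (optionally jittable in the real library)
    data = list(range(4))                # dummy transitions
    # Update using the interaction data, returning a new state and metrics
    new_state = {"step": train_state["step"] + 1}
    train_step_metrics = {"batch_size": len(data)}
    return new_state, train_step_metrics

def train(train_state=None, iters=3):
    if train_state is None:              # initialize the train state if needed
        train_state = {"step": 0}
    logs = []
    for _ in range(iters):
        train_state, metrics = train_step(train_state)
        logs.append(metrics)             # stand-in for logging TrainStepMetrics
    return train_state, logs

final_state, logs = train()
```

Note that `train` threads the state through each `train_step` call and only logs the returned metrics, matching the functional structure described above.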
- TimeLimits and Truncations: wrapper
- Auto Reset: wrapper
- Vectorization: env loop
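Since the environments are expected to already handle truncation and auto-reset, here is a minimal sketch of what such a wrapper does, using an assumed Gym-style `step`/`reset` interface rather than robojax's actual wrapper classes:

```python
class CountEnv:
    """Dummy environment whose observation is the step count."""
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        return self.t, 0.0, False  # obs, reward, terminated

class TimeLimitAutoReset:
    """Emits a truncation signal at the time limit and auto-resets."""
    def __init__(self, env, max_steps):
        self.env, self.max_steps = env, max_steps
        self.elapsed = 0
    def reset(self):
        self.elapsed = 0
        return self.env.reset()
    def step(self, action):
        obs, rew, terminated = self.env.step(action)
        self.elapsed += 1
        truncated = self.elapsed >= self.max_steps
        if terminated or truncated:
            # Auto reset: the returned obs is the first obs of the new episode
            obs = self.reset()
        return obs, rew, terminated, truncated

env = TimeLimitAutoReset(CountEnv(), max_steps=3)
obs = env.reset()
for _ in range(3):
    obs, rew, term, trunc = env.step(0)
# on the 3rd step trunc is True and obs is the fresh reset observation
```

Handling auto-reset inside the wrapper lets the env loop collect fixed-length batches without branching on episode boundaries, which is what makes vectorized collection straightforward.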
See https://wandb.ai/stonet2000/robojax?workspace=user-stonet2000 for all benchmarked results for this library.
To benchmark the code yourself, see https://github.com/StoneT2000/robojax/tree/main/scripts/baselines.md.