This repository serves as an extension to the OmniIsaacGymEnvs framework, enhancing its capabilities with additional robots and advanced features. The primary goal is to provide a more flexible environment for simulating a variety of robots and performing complex navigation tasks.
- Extended Robot Library: Introduces new robots to the existing OmniIsaacGymEnvs framework, enabling a wider range of simulations and applications.
- Navigation Tasks: Includes a variety of navigation tasks designed to test and benchmark robotic performance in different scenarios.
- Modular Extensions: Supports features like domain randomization to improve robustness and automated curriculum learning to optimize training processes.
- 2D Satellite: Simulates basic satellite maneuvers in a 2D plane.
- 3D Satellite: Extends satellite control to 3D space for more complex operations.
- Heron USV: A surface vessel used for aquatic navigation tasks.
- Turtle-bots: Compact mobile robots suitable for indoor navigation.
- Husky: A rugged, all-terrain robot for outdoor navigation.
This library provides a set of predefined navigation tasks for robotic control and reinforcement learning. It allows for easy extensions to add new tasks or modify existing ones to suit different requirements.
- GoToPosition
- GoToPose
- Track Linear Velocity
- Track Linear & Angular Velocity
- Track Linear Velocity & Heading
- GoThroughPosition / Sequence
- GoThroughPose / Sequence
- GoThroughGate / Sequence
Follow the Isaac Sim documentation to install the latest Isaac Sim release.
Examples in this repository rely on features from the most recent Isaac Sim release. Please make sure to update any existing Isaac Sim build to the latest release version, 2023.1.1, to ensure examples work as expected.
Once installed, this repository can be used as a python module, `omniisaacgymenvs`, with the python executable provided in Isaac Sim.
To install `omniisaacgymenvs`, first clone this repository:

```bash
git clone https://github.com/elharirymatteo/RANS.git
```
Once cloned, locate the python executable in Isaac Sim. By default, this should be `python.sh`. We will refer to this path as `PYTHON_PATH`.

To set a `PYTHON_PATH` variable in the terminal that links to the python executable, run a command like the following. Make sure to update the paths to your local path.
For Linux: `alias PYTHON_PATH=~/.local/share/ov/pkg/isaac_sim-*/python.sh`

For Windows: `doskey PYTHON_PATH=C:\Users\user\AppData\Local\ov\pkg\isaac_sim-*\python.bat $*`

For Isaac Sim Docker: `alias PYTHON_PATH=/isaac-sim/python.sh`
Install `omniisaacgymenvs` as a python module for `PYTHON_PATH`:

```bash
PYTHON_PATH -m pip install -e .
```
We use the rl-games library as a starting point to rework the PPO implementation for the agents we train.
To install the appropriate version of rl-games, clone this repository INSIDE the RANS folder:

```bash
git clone https://github.com/AntoineRichard/rl_games
```

Then, from inside the RANS folder, install it:

```bash
cd rl_games
PYTHON_PATH -m pip install --upgrade pip
PYTHON_PATH -m pip install -e .
```
Note: All commands should be executed from `OmniIsaacGymEnvs/omniisaacgymenvs`.
Training new agents
To train your first policy (example for the USV robot), run:

```bash
PYTHON_PATH scripts/rlgames_train_RANS.py task=ASV/GoToPose train=RANS/PPOcontinuous_MLP headless=True num_envs=1024
```

Adjust `num_envs` to match your machine's capabilities. Set `headless` to `False` if you want to visualize the environments while training occurs.
You should see an Isaac Sim window pop up. Once Isaac Sim initialization completes, the scene for the selected robot will be constructed and simulation will start running automatically. The process will terminate once training finishes.
Here's another example - GoToPose for the Satellite robot (MFP - modular floating platform) - using the multi-threaded training script:
```bash
PYTHON_PATH scripts/rlgames_train_RANS.py task=MFP2D/GoToPose train=RANS/PPOmulti_discrete_MLP
```
Note that by default, we show a Viewport window with rendering, which slows down training. You can choose to close the Viewport window during training for better performance. The Viewport window can be re-enabled by selecting `Window > Viewport` from the top menu bar.
To achieve maximum performance, launch training in `headless` mode as follows:

```bash
PYTHON_PATH scripts/rlgames_train_RANS.py task=MFP2D/GoToPose train=RANS/PPOmulti_discrete_MLP headless=True
```
Some of the examples could take a few minutes to load because the startup time scales with the number of environments. Startup time will continue to be optimized in future releases.
Loading trained models (or checkpoints)
Checkpoints are saved in the folder `runs/EXPERIMENT_NAME/nn`, where `EXPERIMENT_NAME` defaults to the task name but can be overridden via the `experiment` argument.
To load a trained checkpoint and continue training, use the `checkpoint` argument:

```bash
PYTHON_PATH scripts/rlgames_train_RANS.py task=MFP2D/GoToPose train=RANS/PPOmulti_discrete_MLP checkpoint=runs/MFP2D_GoToPose/nn/MFP2D_GoToPose.pth
```
To load a trained checkpoint and only perform inference (no training), pass `test=True` as an argument, along with the checkpoint name. To avoid rendering overhead, you may also want to run with fewer environments using `num_envs=64`:

```bash
PYTHON_PATH scripts/rlgames_train_RANS.py task=MFP2D/GoToPose train=RANS/PPOmulti_discrete_MLP checkpoint=runs/MFP2D_GoToPose/nn/MFP2D_GoToPose.pth test=True num_envs=64
```
Note that if there are special characters such as `[` or `=` in the checkpoint names, you will need to escape them and put quotes around the string. For example:

```bash
checkpoint="runs/Ant/nn/last_Antep\=501rew\[5981.31\].pth"
```
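If you generate such commands programmatically, Python's standard-library `shlex.quote` can produce a shell-safe version of a checkpoint path. This is shown purely as a convenience sketch, not something the repository's scripts do themselves:

```python
import shlex

# A checkpoint name containing characters the shell treats specially
ckpt = "runs/Ant/nn/last_Antep=501rew[5981.31].pth"

# shlex.quote wraps the path in single quotes so '=' and '[' reach the
# training script unmodified
quoted = shlex.quote(ckpt)
print(quoted)

# The shell would split this back into the original path
assert shlex.split(quoted) == [ckpt]
```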
All scripts provided in `omniisaacgymenvs/scripts` can be launched directly with `PYTHON_PATH`.
Random policy
To test out a task without RL in the loop, run the random policy script with:

```bash
PYTHON_PATH scripts/random_policy.py task=MFP2D/GoToPose
```
This script will sample random actions from the action space and apply these actions to your task without running any RL policies. Simulation should start automatically after launching the script, and will run indefinitely until terminated.
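Conceptually, the loop looks like the following sketch. `StubEnv` is a hypothetical stand-in for the real vectorized task environments; only the sample-and-step pattern mirrors what the script does:

```python
import random

class StubEnv:
    """Hypothetical gym-like environment standing in for a RANS task env."""
    def __init__(self, action_low=-1.0, action_high=1.0):
        self.action_low = action_low
        self.action_high = action_high
        self.steps_taken = 0

    def sample_action(self):
        # Draw a random action from a continuous action space
        return random.uniform(self.action_low, self.action_high)

    def step(self, action):
        # Apply the action; a real env would return obs, reward, done, info
        self.steps_taken += 1
        return None, 0.0, False, {}

def run_random_policy(env, num_steps):
    """Apply randomly sampled actions with no RL policy in the loop."""
    for _ in range(num_steps):
        env.step(env.sample_action())
    return env.steps_taken

env = StubEnv()
print(run_random_policy(env, 100))  # 100
```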
Train on single GPU
To run a simple form of PPO from `rl_games`, use the single-threaded training script:

```bash
PYTHON_PATH scripts/rlgames_train_RANS.py task=MFP2D/GoToPosition
```
This script creates an instance of the PPO runner in `rl_games` and automatically launches training and simulation. Once training completes (the total number of iterations has been reached), the script will exit. If running inference with `test=True checkpoint=<path/to/checkpoint>`, the script will run indefinitely until terminated. Note that this script has limitations on interaction with the UI.
Train on multiple GPUs
TBD

Configuration and command line arguments
We use Hydra to manage the config.
Common arguments for the training scripts are:
- `task=TASK` - Selects which task to use. Examples include `MFP2D/GoToPosition`, `MFP2D/GoToPose`, `MFP2D/TrackLinearVelocity`, `MFP2D/TrackLinearAngularVelocity`, `MFP3D/GoToPosition`, `MFP3D/GoToPose`. These correspond to the config for each environment in the folder `omniisaacgymenvs/cfg/task/` + `MFP2D` or any robot description (e.g. `ASV` for the boat or `AGV` for the turtle-bots).
- `train=TRAIN` - Selects which training config to use. Will automatically default to the correct config for the environment file inside the `train/RANS` folder (e.g., `PPOcontinuous_MLP` or `PPOmulti_discrete_MLP`).
- `num_envs=NUM_ENVS` - Selects the number of environments to use (overriding the default number of environments set in the task config).
- `seed=SEED` - Sets a seed value for randomization, overriding the default seed in the task config.
- `pipeline=PIPELINE` - Which API pipeline to use. Defaults to `gpu`, can also be set to `cpu`. When using the `gpu` pipeline, all data stays on the GPU. When using the `cpu` pipeline, simulation can run on either CPU or GPU, depending on the `sim_device` setting, but a copy of the data is always made on the CPU at every step.
- `sim_device=SIM_DEVICE` - Device used for physics simulation. Set to `gpu` (default) to use GPU and to `cpu` for CPU.
- `device_id=DEVICE_ID` - Device ID for the GPU to use for simulation and task. Defaults to `0`. This parameter is only used if simulation runs on GPU.
- `rl_device=RL_DEVICE` - Which device / ID to use for the RL algorithm. Defaults to `cuda:0`, and follows PyTorch-like device syntax.
- `test=TEST` - If set to `True`, only runs inference on the policy and does not do any training.
- `checkpoint=CHECKPOINT_PATH` - Path to the checkpoint to load for training or testing.
- `headless=HEADLESS` - Whether to run in headless mode.
- `experiment=EXPERIMENT` - Sets the name of the experiment.
- `max_iterations=MAX_ITERATIONS` - Sets how many iterations to run for. Reasonable defaults are provided for the provided environments.
- `warp=WARP` - If set to `True`, launches the task implemented with the Warp backend (Note: not all tasks have a Warp implementation).
- `kit_app=KIT_APP` - Specifies the absolute path to the kit app file to be used.
Hydra also allows setting variables inside config files directly as command line arguments. For example, to set the minibatch size for an rl_games training run, you can use `train.params.config.minibatch_size=64`. Similarly, variables in task configs can also be set, such as `task.env.episodeLength=100`.
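The effect of such dotted overrides can be pictured as walking a nested config and replacing a leaf. This is a simplified sketch of the idea only, not Hydra's actual implementation (which also handles types, config groups, and validation):

```python
def apply_overrides(cfg, overrides):
    """Apply 'a.b.c=value' style overrides to a nested dict config (sketch)."""
    for override in overrides:
        dotted_path, value = override.split("=", 1)
        keys = dotted_path.split(".")
        node = cfg
        for key in keys[:-1]:          # walk down to the parent node
            node = node.setdefault(key, {})
        node[keys[-1]] = value         # replace (or create) the leaf
    return cfg

cfg = {"train": {"params": {"config": {"minibatch_size": "128"}}}}
apply_overrides(cfg, ["train.params.config.minibatch_size=64",
                      "task.env.episodeLength=100"])
print(cfg["train"]["params"]["config"]["minibatch_size"])  # 64
print(cfg["task"]["env"]["episodeLength"])                 # 100
```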
Default values for each of these are found in the `omniisaacgymenvs/cfg/config.yaml` file.
The `task` and `train` portions of the config work through the use of config groups. You can learn more about how these work here. The actual configs for `task` are in `omniisaacgymenvs/cfg/task/<TASK>.yaml` and for `train` in `omniisaacgymenvs/cfg/train/<TASK>PPO.yaml`.
In some places in the config you will find other variables referenced (for example, `num_actors: ${....task.env.numEnvs}`). Each `.` represents going one level up in the config hierarchy.
This is documented fully here.
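A minimal sketch of that lookup rule, assuming each leading dot climbs one level up from the referencing entry (OmegaConf implements the real resolution logic):

```python
def resolve(cfg, location, reference):
    """Resolve '${....task.env.numEnvs}'-style relative references (sketch).

    `location` is the list of keys leading to the entry that holds the
    reference; each leading '.' climbs one level up from there.
    """
    body = reference[2:-1]                       # strip '${' and '}'
    ups = len(body) - len(body.lstrip("."))      # number of leading dots
    keys = body.lstrip(".").split(".")
    node = cfg
    for key in location[:len(location) - ups] + keys:
        node = node[key]
    return node

cfg = {
    "task": {"env": {"numEnvs": 1024}},
    "train": {"params": {"config": {"num_actors": "${....task.env.numEnvs}"}}},
}
location = ["train", "params", "config", "num_actors"]
reference = cfg["train"]["params"]["config"]["num_actors"]
print(resolve(cfg, location, reference))  # 1024
```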
Tensorboard can be launched during training via the following command:

```bash
PYTHON_PATH -m tensorboard.main --logdir runs/EXPERIMENT_NAME/summaries
```
You can run [WandB](https://wandb.ai/) with OmniIsaacGymEnvs by setting the `wandb_activate=True` flag from the command line. You can set the group, name, entity, and project for the run with the `wandb_group`, `wandb_name`, `wandb_entity`, and `wandb_project` arguments. Make sure you have WandB installed in the Isaac Sim Python executable with `PYTHON_PATH -m pip install wandb` before activating.
If you use the current repository in your work, we suggest citing the following papers:
```bibtex
@article{el2023drift,
  title={DRIFT: Deep Reinforcement Learning for Intelligent Floating Platforms Trajectories},
  author={El-Hariry, Matteo and Richard, Antoine and Muralidharan, Vivek and Yalcin, Baris Can and Geist, Matthieu and Olivares-Mendez, Miguel},
  journal={arXiv preprint arXiv:2310.04266},
  year={2023}
}

@article{el2023rans,
  title={RANS: Highly-Parallelised Simulator for Reinforcement Learning based Autonomous Navigating Spacecrafts},
  author={El-Hariry, Matteo and Richard, Antoine and Olivares-Mendez, Miguel},
  journal={arXiv preprint arXiv:2310.07393},
  year={2023}
}
```
.
├── cfg # Configuration files
│ ├── controller # Controller configurations
│ ├── hl_task # High-level task configurations
│ └── train # Training configurations
│   └── MFP # Training configurations for Modular Floating Platform
├── demos # Demonstration files (e.g., gifs, videos)
├── doc # Documentation files
│ ├── curriculum.md # Documentation for curriculum
│ ├── domain_randomization.md # Documentation for domain randomization
│ ├── figures # Figures used in documentation
│ │ └── ... # Other figure files
│ └── penalties.md # Documentation for penalties
├── envs # Environment scripts
│ ├── vec_env_rlgames_mfp.py # Vectorized environment for rlgames with MFP
│ ├── vec_env_rlgames_mt.py # Multi-threaded vectorized environment for rlgames
│ └── vec_env_rlgames.py # General vectorized environment for rlgames
├── extension.py # Extension script
├── images # Image files
│ ├── 3dof_gotoxy.png # Image for 3DOF GoToXY task
│ └── ... # Other image files
├── __init__.py # Initialization script for the package
├── lab_tests # Lab test scripts and data
├── mj_runs # Mujoco run scripts and data
├── models # Model files
├── robots # Robot related files
│ ├── articulations # Articulation files for robots
│ ├── sensors # Sensor files for robots
│ └── usd # USD files for robots
├── ros # ROS related files
├── scripts # Utility scripts
├── tasks # Task implementations
│ └── MFP # Task implementations for Modular Floating Platform
│ ├── curriculum_helpers.py # Helper functions for curriculum
│ └── unit_tests # Unit tests for MFP tasks
├── utils # Utility functions and scripts
│ ├── aggregate_and_eval_mujoco_batch_data.py # Script to aggregate and evaluate Mujoco batch data
│ ├── rlgames # RL games related utilities
│ │ ├── rlgames_train_mt.py # Multi-threaded training script for RL games
│ │ └── rlgames_utils.py # Utility functions for RL games
├── videos # Video files
└── wandb # Weights and Biases integration files