JaxCQL

A simple and modular implementation of the Conservative Q-Learning (CQL) and Soft Actor-Critic (SAC) algorithms in JAX and Flax.

This repository is a reimplementation of my other PyTorch codebase of the same algorithms.

Installation

  1. Install and use the included Anaconda environment:
$ conda env create -f environment.yml
$ source activate JaxCQL

You'll need to get your own MuJoCo key if you want to use MuJoCo.
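
If you have a key, mujoco-py conventionally reads it from ~/.mujoco/mjkey.txt; that location is an assumption based on mujoco-py defaults rather than anything documented in this repo:

$ mkdir -p ~/.mujoco
$ cp /path/to/your/mjkey.txt ~/.mujoco/mjkey.txt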

  2. Add this repo directory to your PYTHONPATH environment variable:
export PYTHONPATH="$PYTHONPATH:$(pwd)"

Run Experiments

You can run SAC experiments using the following command:

python -m JaxCQL.sac_main \
    --env 'HalfCheetah-v2' \
    --logging.output_dir './experiment_output'

All available command options can be seen in JaxCQL/sac_main.py and JaxCQL/sac.py.

You can run CQL experiments using the following command:

python -m JaxCQL.conservative_sac_main \
    --env 'halfcheetah-medium-v0' \
    --logging.output_dir './experiment_output'

All available command options can be seen in JaxCQL/conservative_sac_main.py and JaxCQL/conservative_sac.py.

Visualize Experiments

You can visualize the experiment metrics with viskit:

python -m viskit './experiment_output'

and simply navigate to http://localhost:5000/

Weights and Biases Online Visualization Integration

This codebase can also log to the W&B online visualization platform. To log to W&B, first set your W&B API key environment variable:

export WANDB_API_KEY='YOUR W&B API KEY HERE'

Then you can run experiments with W&B logging turned on:

python -m JaxCQL.conservative_sac_main \
    --env 'halfcheetah-medium-v0' \
    --logging.output_dir './experiment_output' \
    --logging.online

Results of Running JaxCQL on D4RL Environments

To save you time and compute, I've run a sweep of JaxCQL on several D4RL environments over various min Q weight values. The results can be seen here. You can choose the environment to visualize by filtering on env. The result for each cql.cql_min_q_weight on each env is averaged across 3 random seeds.
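
For example, a single point of such a sweep can be run by passing the min Q weight flag explicitly (the flag and the value 5.0 also appear in the antmaze issue further down; treat this command as an illustration, not a tuned setting):

python -m JaxCQL.conservative_sac_main \
    --env 'halfcheetah-medium-v0' \
    --cql.cql_min_q_weight=5.0 \
    --logging.output_dir './experiment_output'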

Citing

If you find this open source release useful, please reference it in your paper:

@article{geng2022jaxcql,
  title={JaxCQL: a simple implementation of SAC and CQL in JAX},
  author={Xinyang Geng},
  year={2022},
  url={https://github.com/young-geng/JaxCQL}
}

Credits

The project organization is inspired by TD3. The SAC implementation is based on rlkit. The CQL implementation is based on CQL. The viskit visualization is taken from viskit, which is itself taken from rllab.

Issues

Does the JAX version work on the `Antmaze` environment?

Hello Young,

I tried the parameters for antmaze mentioned in this issue: young-geng/CQL#2.

It seems that those parameters do not work with the JAX version:

python -m JaxCQL.conservative_sac_main \
    --env 'antmaze-medium-diverse-v2' \
    --cql.cql_min_q_weight=5.0 \
    --cql.cql_max_target_backup=True \
    --cql.cql_target_action_gap=0.2 \
    --orthogonal_init=True \
    --cql.cql_lagrange=True \
    --cql.cql_temp=1.0 \
    --cql.policy_lr=1e-4 \
    --cql.qf_lr=3e-4 \
    --cql.cql_clip_diff_min=-200 \
    --reward_scale=10.0 \
    --reward_bias=-5.0 \
    --policy_arch='256-256' \
    --qf_arch='256-256-256' \
    --policy_log_std_multiplier=0.0 \
    --eval_period=50 \
    --eval_n_trajs=100 \
    --n_epochs=1200 \
    --bc_epochs=40 \
    --logging.output_dir './experiment_output'

Here's the result:

epoch                           1199
sac/alpha                          0.154963
sac/alpha_loss                     0.584433
sac/average_qf1                 -389.608
sac/average_qf2                 -390.657
sac/average_target_q            -388.836
sac/cql/alpha_prime                0.352198
sac/cql/alpha_prime_loss          -0.171519
sac/cql/cql_min_qf1_loss           0.458265
sac/cql/cql_min_qf2_loss          -0.115226
sac/cql/cql_q1_current_actions  -388.344
sac/cql/cql_q1_next_actions     -389.941
sac/cql/cql_q1_rand             -421.931
sac/cql/cql_q2_current_actions  -389.583
sac/cql/cql_q2_next_actions     -390.998
sac/cql/cql_q2_rand             -424.662
sac/cql/cql_qf1_diff               0.460231
sac/cql/cql_qf2_diff               0.134567
sac/cql/cql_std_q1                17.317
sac/cql/cql_std_q2                18.0465
sac/log_pi                         8.31344
sac/policy_loss                  391.678
sac/qf1_loss                      21.8954
sac/qf2_loss                      16.0622
average_return                     0.81
average_traj_length              434.01
average_normalizd_return           0.81
train_time                         3.24994
eval_time                         32.1321
epoch_time                        35.3821

Do we need to stop gradient when computing `alpha_loss` and `policy_loss`?

Hi Young,

I noticed that in Dopamine's implementation, the gradient through alpha is stopped when computing the policy loss, so that the policy update does not change the value of alpha.

alpha_value = jnp.exp(jax.lax.stop_gradient(log_alpha))
policy_loss = jnp.mean(alpha_value * action_log_prob - no_grad_q_value)

https://github.com/google/dopamine/blob/1cbed6c7c35163e73b0ee2f2d2bf032b057570dd/dopamine/jax/agents/sac/sac_agent.py#L170

Similarly, it stops the gradient through the policy when computing the alpha loss, so that the temperature update does not affect the policy.

alpha_loss = jnp.mean(log_alpha * jax.lax.stop_gradient(entropy_diff))

https://github.com/google/dopamine/blob/1cbed6c7c35163e73b0ee2f2d2bf032b057570dd/dopamine/jax/agents/sac/sac_agent.py#L175
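
For reference, here is a minimal, self-contained JAX sketch combining both stop-gradient placements; the function and argument names (q_value, target_entropy, etc.) are illustrative, not taken from JaxCQL or Dopamine:

import jax
import jax.numpy as jnp

def policy_and_alpha_losses(log_alpha, action_log_prob, q_value, target_entropy):
    # Stop the gradient through log_alpha so the policy update cannot
    # change the temperature parameter.
    alpha_value = jnp.exp(jax.lax.stop_gradient(log_alpha))
    policy_loss = jnp.mean(alpha_value * action_log_prob - q_value)

    # Stop the gradient through the policy's log-probabilities so the
    # temperature update cannot change the policy. entropy_diff > 0
    # means the policy is more stochastic than the target.
    entropy_diff = -action_log_prob - target_entropy
    alpha_loss = jnp.mean(log_alpha * jax.lax.stop_gradient(entropy_diff))
    return policy_loss, alpha_loss

In a full agent, q_value would be the critic evaluated at reparameterized actions, so the policy gradient still flows through it; only the temperature and entropy terms are detached here.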
