
implicit_q_learning's People

Contributors

anair13, ikostrikov2


implicit_q_learning's Issues

Code for Behavior cloning policy

Could you please provide the code implementation for the BC results in Table 1 of the paper? It looks like it achieves great performance on the walker2d-medium-expert-v2 dataset and is better than the BC baselines reported in other papers, e.g., Decision Transformer. Some description of the implementation would also be very useful to me. Thank you very much for your outstanding work.
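In the meantime, here is my current understanding of a plain behavior-cloning objective, as a minimal JAX sketch. This is my own assumption about the setup, not the paper's implementation; apply_fn and params are placeholders for a policy network that returns an action distribution.

import jax
import jax.numpy as jnp

def bc_loss(params, apply_fn, observations, actions):
    # Behavior cloning: maximize the log-likelihood of dataset actions
    # under the policy (i.e., minimize the negative log-likelihood).
    dist = apply_fn({'params': params}, observations)
    return -jnp.mean(dist.log_prob(actions))

bc_grad = jax.grad(bc_loss)  # gradient w.r.t. params for training steps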

The log_prob is not corrected

Hi,
Thanks for releasing the code. I noticed that in the policy network you simply squash the mean with tanh without correcting the log-probability, as, for example, SAC does in its policy parameterization. Will this bias the estimate of the policy gradient?

base_dist = tfd.MultivariateNormalDiag(loc=means,

I'm debugging my implementation of IQL and XQL, and I'm not sure whether this causes the performance gap or not. Please correct me if there is any misunderstanding.
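For reference, here is a minimal sketch of the SAC-style correction I have in mind, assuming the TensorFlow Probability JAX substrate; wrapping the base distribution in a Tanh bijector adds the change-of-variables term to log_prob automatically (the array values below are placeholders):

import numpy as np
from tensorflow_probability.substrates import jax as tfp

tfd = tfp.distributions
tfb = tfp.bijectors

means = np.zeros(6, dtype=np.float32)   # placeholder network outputs
stds = np.ones(6, dtype=np.float32)

base_dist = tfd.MultivariateNormalDiag(loc=means, scale_diag=stds)
# The Tanh bijector contributes log|det d tanh(u)/du| to log_prob,
# matching SAC's squashed-Gaussian parameterization.
squashed = tfd.TransformedDistribution(distribution=base_dist,
                                       bijector=tfb.Tanh())

actions = np.tanh(np.full(6, 0.1, dtype=np.float32))
log_prob = squashed.log_prob(actions)   # corrected log-probability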

missing dexterous env

Hi, the binary-v0 envs seem to be missing from the codebase. Is it possible to release them, along with the commands to use them with IQL?

Potential issue in scaling rewards in train_finetune.py.

Hi @ikostrikov,

thanks again for sharing this.

I have a question about a potential issue in train_finetune.py when working with MuJoCo environments.

I noticed that rewards are not scaled for hopper, halfcheetah or walker2d during online fine-tuning.
However, for these tasks, you normalized the rewards in the offline datasets. See e.g.,

normalize(dataset)
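To make the question concrete, the kind of consistent scaling I would have expected during fine-tuning looks roughly like this (a sketch with hypothetical helper names, not the repository's API; the 1000.0 factor is my assumption about what normalize(dataset) does):

def make_reward_scaler(trajectory_returns):
    # trajectory_returns: per-trajectory returns of the offline dataset.
    scale = 1000.0 / (max(trajectory_returns) - min(trajectory_returns))

    def scale_reward(reward):
        # Apply the same scale to rewards collected online, so offline
        # pretraining and online fine-tuning see one consistent scale.
        return reward * scale

    return scale_reward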

Is this intended, or have I misunderstood something? Many thanks!

conflicting dependencies between optax and jaxlib

Hi, I got the following error when running pip install -r ./requirements.txt:

The conflict is caused by:
optax 0.0.9 depends on jaxlib>=0.1.37
optax 0.0.8 depends on jaxlib>=0.1.37
optax 0.0.6 depends on jaxlib>=0.1.37

Could you please take a look? Thank you.

A small problem

Hi Ilya,

I have a small question about the orthogonal initialization of the policy function.

In PyTorch's documentation, the recommended gain for the tanh activation function is 5/3.

If we set tanh_squash_distribution = False, do we then need to set the gain to 5/3 for the output layer of the policy network?

means = nn.Dense(self.action_dim, kernel_init=default_init())(outputs)

means = nn.tanh(means)
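
For concreteness, here is a minimal Flax sketch of what I mean; the module name is hypothetical, and whether the gain is actually needed is exactly my question:

import flax.linen as nn
import jax

class TanhMeanPolicy(nn.Module):
    action_dim: int

    @nn.compact
    def __call__(self, outputs):
        # 5/3 matches torch.nn.init.calculate_gain('tanh').
        means = nn.Dense(
            self.action_dim,
            kernel_init=jax.nn.initializers.orthogonal(5.0 / 3.0))(outputs)
        return nn.tanh(means)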

Anyway, this does not matter in practice.

A question about the `sample_actions()`

@functools.partial(jax.jit, static_argnames=('actor_def', 'distribution'))

Hi Ilya,

Many thanks for the nice work. I have a question about the sample_actions() function: why do we need the separate _sample_actions()? Isn't it redundant?

Maybe we can simply:

import functools
import jax

@functools.partial(jax.jit, static_argnames=('actor_def',))
def sample_actions(rng, actor_def, actor_params, observations, temperature):
    # Build the action distribution, split the RNG key, and sample.
    dist = actor_def.apply({'params': actor_params}, observations, temperature)
    rng, key = jax.random.split(rng)
    return rng, dist.sample(seed=key)

Further, I tried to reimplement IQL with TrainState. I found that using TrainState is slower than this implementation (by ~100-200 fps).
