ikostrikov / implicit_q_learning
License: MIT License
Could you please provide the code for the BC baseline in Table 1 of the paper? It appears to achieve strong performance on the walker2d-medium-expert-v2 dataset, better than the BC results reported in other papers, e.g., Decision Transformer. A brief description of the implementation would also be very useful. Thank you very much for your outstanding work.
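For context, the standard BC objective is just maximum likelihood of the dataset actions under the policy. A minimal NumPy sketch of that objective (the log_prob_fn helper is hypothetical, and the paper's exact BC baseline may differ, e.g. a deterministic MSE variant):

```python
import numpy as np

def bc_loss(log_prob_fn, params, observations, actions):
    """Vanilla behavior cloning: minimize the negative log-likelihood
    of the dataset actions under the policy, i.e. -E[log pi(a|s)].
    log_prob_fn(params, observations, actions) -> per-sample log-probs."""
    return -np.mean(log_prob_fn(params, observations, actions))
```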
Hi,
Thanks for releasing the code. I noticed that in the policy network you simply squash the mean with tanh,
without correcting the log-probability as, for example, SAC does in its policy parameterization. Could this bias the estimate of the policy gradient?
Line 56 in 09d7002
I'm debugging my implementations of IQL and XQL, and I'm not sure whether this causes the performance gap. Please correct me if I have misunderstood anything.
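For comparison, the SAC-style change-of-variables correction for a tanh-squashed Gaussian looks roughly like this (a NumPy sketch for reference, not this repository's implementation):

```python
import numpy as np

def tanh_gaussian_log_prob(u, mean, log_std):
    """Log-density of a = tanh(u) where u ~ N(mean, exp(log_std)), with the
    SAC change-of-variables correction:
        log p(a) = log N(u; mean, std) - sum_i log(1 - tanh(u_i)^2)."""
    std = np.exp(log_std)
    gauss_lp = -0.5 * (((u - mean) / std) ** 2 + 2.0 * log_std + np.log(2.0 * np.pi))
    # Numerically stable: log(1 - tanh(u)^2) = 2 * (log 2 - u - softplus(-2u)).
    log_det = 2.0 * (np.log(2.0) - u - np.logaddexp(0.0, -2.0 * u))
    return np.sum(gauss_lp - log_det, axis=-1)
```

Note that IQL only squashes the distribution's mean, not the samples, which is why the question about the missing correction arises in the first place.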
Hi, the binary-v0 environments seem to be missing from the codebase. Would it be possible to release them, along with the commands to use them with IQL?
Hi @ikostrikov,
thanks again for sharing this.
I have a question about a potential issue in train_finetune when working with MuJoCo environments.
I noticed that rewards are not scaled for hopper, halfcheetah, or walker2d during online fine-tuning.
However, for these tasks you normalize the rewards in the offline datasets. See e.g.,
implicit_q_learning/train_finetune.py
Line 82 in 09d7002
Is this intended, or have I misunderstood something? Many thanks!
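For reference, the offline normalization in question is the D4RL-style trajectory-return scaling commonly applied to the MuJoCo locomotion datasets. A sketch under that assumption (not the repository's exact code):

```python
import numpy as np

def normalize_rewards(rewards, dones, scale=1000.0):
    """Scale rewards by scale / (best_return - worst_return), where returns
    are computed per trajectory of the offline dataset."""
    returns, ret = [], 0.0
    for r, done in zip(rewards, dones):
        ret += r
        if done:
            returns.append(ret)
            ret = 0.0
    span = max(returns) - min(returns)
    return np.asarray(rewards) * (scale / span)
```

The issue raised above is that online transitions collected during fine-tuning would need the same scaling factor applied for the reward magnitudes to match.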
Hi, I got the following dependency conflict when running pip install -r ./requirements.txt:
The conflict is caused by:
optax 0.0.9 depends on jaxlib>=0.1.37
optax 0.0.8 depends on jaxlib>=0.1.37
optax 0.0.6 depends on jaxlib>=0.1.37
Could you please take a look? Thank you.
In the file train_finetune, the line schedule_fn = optax.cosine_decay_schedule(-actor_lr, max_steps) passes a negative learning rate. Shouldn't it be positive? Why is it written this way?
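The sign puzzled me too; as far as I can tell, optax's scale_by_schedule multiplies gradients by the scheduled value and adds the result to the parameters, so the schedule must start negative for the additive update to be a gradient-descent step. A plain-Python sketch of the convention (cosine_decay here imitates optax.cosine_decay_schedule with alpha=0; it is not the library function):

```python
import math

def cosine_decay(init_value, decay_steps, step):
    """Cosine decay from init_value to 0, mirroring
    optax.cosine_decay_schedule with alpha=0."""
    frac = min(step, decay_steps) / decay_steps
    return init_value * 0.5 * (1.0 + math.cos(math.pi * frac))

# optax transforms *add* the scaled update to the parameters, so passing
# -actor_lr turns the additive update into gradient descent:
actor_lr = 3e-4
param, grad = 1.0, 2.0
scale = cosine_decay(-actor_lr, 1000, 0)  # equals -actor_lr at step 0
param = param + scale * grad              # the parameter moves downhill
```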
Hi Ilya,
I have a small question about the orthogonal initialization of the policy function.
PyTorch's documentation uses a default gain of 5/3 for the tanh activation function.
If we set tanh_squash_distribution = False, do we then need to set the gain to 5/3 for the output layer of the policy network?
means = nn.Dense(self.action_dim, kernel_init=default_init())(outputs)
Line 54 in 09d7002
Anyway, this does not matter in practice.
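For reference, the 5/3 figure is torch.nn.init.calculate_gain('tanh'); applying such a gain to an orthogonal initializer looks roughly like this (a NumPy sketch for square matrices, not the repo's default_init):

```python
import numpy as np

def orthogonal(shape, gain=1.0, seed=0):
    """Orthogonal initialization scaled by `gain`, in the spirit of
    torch.nn.init.orthogonal_ combined with calculate_gain('tanh') = 5/3."""
    rng = np.random.default_rng(seed)
    a = rng.normal(size=shape)
    q, r = np.linalg.qr(a)
    q = q * np.sign(np.diag(r))  # fix column signs so the draw is uniform
    return gain * q

W = orthogonal((4, 4), gain=5.0 / 3.0)
```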
Hi Ilya,
May I ask about the toy umaze environment in Figure 2?
Is it a self-defined environment, or the antmaze-umaze
environment from D4RL?
And how do you generate the offline dataset?
Many thanks.
Line 66 in 09d7002
Hi Ilya,
Many thanks for the nice work. I have a question about the sample_actions()
function: why do we need the separate _sample_actions()?
Isn't it redundant?
Maybe we could simply write:
@functools.partial(jax.jit, static_argnames=('actor_def',))
def sample_actions(rng, actor_def, actor_params, observations, temperature):
dist = actor_def.apply({'params': actor_params}, observations, temperature)
rng, key = jax.random.split(rng)
return rng, dist.sample(seed=key)
Further, I tried to reimplement IQL with TrainState, and found that using TrainState is slower than this implementation by roughly 100-200 FPS.
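For what it's worth, the reason actor_def is marked static is that jax.jit can only trace array arguments; arbitrary Python objects such as a flax Module must be passed as hashable static arguments. A toy sketch of the same pattern (names hypothetical, not the repo's code):

```python
import functools
import jax
import jax.numpy as jnp

# jax.jit traces array arguments; non-array objects must be hashable
# statics so they can be baked into the compiled function.
@functools.partial(jax.jit, static_argnames=('scale_fn',))
def apply(scale_fn, x):
    return scale_fn(x)

def double(x):
    return 2.0 * x
```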