frt03 / generalized_dt
Generalized Decision Transformer for Offline Hindsight Information Matching (ICLR 2022)
Home Page: https://arxiv.org/abs/2111.10364
Hi, may I ask whether it would be possible to release a JAX-based version of the code?
Best
Hi, I am interested in training a model on the backflip HalfCheetah task for my work. I have tried training PPO, but I have not been able to reproduce the results shown in the GIFs; the agent usually just jumps backwards without flipping.
The paper says that SAC was trained. Do you still have the hyperparameters for the expert used to collect the trajectories? I would be happy if they (or even the dataset) were made public.
Thanks!
Hi,
Many thanks for the implementation!
My understanding of DT is that the policy chooses an action given the previous K states and returns-to-go. However, as far as I can see, in your code (and also in the original DT code), the action_preds computed in DT's forward() method use only x[:, 1] as input, which corresponds to the states. I also noticed that you use x[:, 0] to get the return predictions, rather than x[:, 2] as in the original DT paper.
Should the input not be both x[:, 0] (returns R_1, R_2, ...) and x[:, 1] (states S_1, S_2, ...)?
Why do you change x[:, 2] to x[:, 0] for the return predictions?
Also, am I correct that the way to sample a single discrete action 'choice' at evaluation time from action_preds is to index action_preds[0, -1] to get the predictions over the possible actions at the current time step, and then call torch.argmax(action_preds[0, -1])? I.e., similar to greedy sampling from the action predictions in standard RL.
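For context, the indexing in the question comes from how DT interleaves (return, state, action) tokens and then un-interleaves the transformer outputs. The sketch below (a minimal illustration, not the repo's actual code: the transformer is replaced by an identity, and the embedding/head names are assumptions) shows why x[:, 0] lands on return tokens, x[:, 1] on state tokens, and how a greedy discrete action would be read off:

```python
import torch

# Assumed shapes: batch B, context length K, hidden dim H.
B, K, H = 1, 3, 8
ret_emb = torch.randn(B, K, H)    # embedded returns-to-go R_1..R_K
state_emb = torch.randn(B, K, H)  # embedded states S_1..S_K
act_emb = torch.randn(B, K, H)    # embedded actions a_1..a_K

# Interleave as (R_1, S_1, a_1, R_2, S_2, a_2, ...):
stacked = torch.stack((ret_emb, state_emb, act_emb), dim=1)   # (B, 3, K, H)
stacked = stacked.permute(0, 2, 1, 3).reshape(B, 3 * K, H)    # (B, 3K, H)

# The transformer would run here; we use the identity for illustration.
# Outputs are un-interleaved back into per-modality slices:
x = stacked.reshape(B, K, 3, H).permute(0, 2, 1, 3)           # (B, 3, K, H)

# x[:, 0] holds outputs at return tokens, x[:, 1] at state tokens,
# x[:, 2] at action tokens (with the identity transformer, exactly
# the original embeddings). Actions are predicted from state tokens:
assert torch.equal(x[:, 0], ret_emb)
assert torch.equal(x[:, 1], state_emb)

# Greedy discrete action at evaluation time: take the last timestep's
# predictions and argmax over the action dimension (hypothetical head).
num_actions = 4
predict_action = torch.nn.Linear(H, num_actions)
action_preds = predict_action(x[:, 1])                        # (B, K, num_actions)
choice = torch.argmax(action_preds[0, -1]).item()
```

Note that torch.argmax (not torch.max) yields the action index; torch.max would return the largest predicted value rather than which action achieved it.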
Hi, Thank you so much for the super interesting work and for releasing your code.
A question regarding padding tokens: they seem to be handled slightly differently in different parts of the code. When loading the data for the experiments, the padding token values appear to be determined by environment characteristics (e.g. -10 for actions in MuJoCo, 2 for dones, and 0 for other token types) in both BDT and CDT. However, on the model side for action prediction, all padding tokens are zeros. We were unsure of the reason for this difference. We inferred that, since the attention mask marks the positions of padding tokens, it would ultimately make these slight differences irrelevant.
Could you tell us more about your implementation of padding tokens and why they are represented differently? And do their actual values matter when their positions are reflected in the attention mask?
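On the last point, a quick way to see why masked padding values should not matter: positions with mask 0 receive -inf attention scores before the softmax, so they get exactly zero attention weight, and the outputs at real positions are unchanged no matter what values the padding tokens hold. A minimal single-head sketch (illustrative names, not the repo's code):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
T, H = 4, 8
mask = torch.tensor([0.0, 0.0, 1.0, 1.0])  # first two positions are padding

def attend(tokens, mask):
    # Plain (unprojected) self-attention with key-side masking.
    scores = tokens @ tokens.T / H ** 0.5              # (T, T)
    scores = scores.masked_fill(mask == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ tokens          # (T, H)

tokens = torch.randn(T, H)
pad_zero = tokens.clone(); pad_zero[:2] = 0.0          # zero padding
pad_neg = tokens.clone();  pad_neg[:2] = -10.0         # MuJoCo-style -10 padding

out_zero = attend(pad_zero, mask)
out_neg = attend(pad_neg, mask)

# Outputs at the real (unmasked) positions agree regardless of pad values:
assert torch.allclose(out_zero[2:], out_neg[2:])
```

So as long as the attention mask is applied consistently, the differing padding conventions on the data side and the model side should produce identical outputs at non-padded positions.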