Comments (11)
Looks like the problem was that the refactored v0.2 code was missing the future entropy term. See #43. Closed with 99e080f.
In particular, here's the hopper plot that I got:
from rlkit.
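For context, the "future entropy term" is the -α·log π(a'|s') bonus inside SAC's soft Bellman target. A minimal sketch of what that target looks like (all names and the NumPy framing below are illustrative, not rlkit's actual code):

```python
import numpy as np

def sac_q_target(r, discount, q1_next, q2_next, log_pi_next, alpha, terminal):
    """Soft Bellman target for SAC's Q-functions (illustrative sketch).

    The future entropy term is the -alpha * log_pi_next bonus; a target
    without it reduces to plain clipped double-Q learning.
    """
    # Clipped double-Q value of the next state, plus the entropy bonus.
    v_next = np.minimum(q1_next, q2_next) - alpha * log_pi_next
    return r + (1.0 - terminal) * discount * v_next
```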
Thank you for the speedy reply! I ran v0.2 with the default hyper-parameters. I’ll double check that the hyper-parameters I ran with match what you posted.
To confirm, the performance metric is logged to “evaluation/Returns Mean”, right?
Also, would you be so kind as to share your plotting code? Did you have to apply smoothing to get the solid blue line in your graph? If I plot "evaluation/Returns Mean" averaged over 5 seeds directly, I get the jagged zigzag pattern in my graph.
I'll run it again just to check. Yes, that's the correct metric. I used viskit for plotting and did temporal smoothing. I think the important thing is that the thick, shaded region is about the same width as yours.
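Temporal smoothing here typically means something like an exponential moving average over the logged scalar, similar to TensorBoard's smoothing slider. A minimal sketch (the function name, the 0.9 weight, and the synthetic data are my own choices, not viskit's code):

```python
import numpy as np

def smooth(values, weight=0.9):
    """Exponential moving average; weight near 1.0 smooths more heavily."""
    smoothed = []
    last = values[0]
    for v in values:
        last = weight * last + (1 - weight) * v
        smoothed.append(last)
    return np.asarray(smoothed)

# Average "evaluation/Returns Mean" across seeds first, then smooth the mean.
returns_per_seed = np.random.default_rng(0).normal(1000, 200, size=(5, 300))
mean_returns = returns_per_seed.mean(axis=0)
smoothed = smooth(mean_returns, weight=0.9)
```

Smoothing only the seed-averaged curve (rather than each seed) keeps the shaded min/max region comparable between plots.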
What hyperparameters did you run exactly? I got this after running over 5 seeds.
Also, are you using the latest code? I pushed v0.2 only 3 days ago, which includes a few changes that seem to have helped.
Some relevant hyperparams:
"batch_size": 256,
"layer_size": 256,
"max_path_length": 1000,
"min_num_steps_before_training": 1000,
"num_epochs": 3000,
"num_eval_steps_per_epoch": 5000,
"num_expl_steps_per_train_loop": 1000,
"num_trains_per_train_loop": 1000,
"replay_buffer_size": 1000000,
"discount": 0.99,
"policy_lr": 0.0003,
"qf_lr": 0.0003,
"reward_scale": 1,
"soft_target_tau": 0.005,
"target_update_period": 1,
"use_automatic_entropy_tuning": true
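For anyone trying to match these settings: in rlkit's example scripts they are passed in as a nested `variant` dict. The grouping below is a sketch based on the v0.2 SAC example and may differ between versions:

```python
# Sketch of how the listed hyperparameters map into an rlkit-style variant
# dict (grouping is illustrative; check examples/sac.py in your checkout).
variant = dict(
    algorithm="SAC",
    layer_size=256,
    replay_buffer_size=int(1e6),
    algorithm_kwargs=dict(
        num_epochs=3000,
        num_eval_steps_per_epoch=5000,
        num_trains_per_train_loop=1000,
        num_expl_steps_per_train_loop=1000,
        min_num_steps_before_training=1000,
        max_path_length=1000,
        batch_size=256,
    ),
    trainer_kwargs=dict(
        discount=0.99,
        soft_target_tau=5e-3,
        target_update_period=1,
        policy_lr=3e-4,
        qf_lr=3e-4,
        reward_scale=1,
        use_automatic_entropy_tuning=True,
    ),
)
```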
Ah, I think the issue is that the paths are sometimes not exactly 1000 steps, e.g. if the agent terminates early. This biases the returns to look worse than they actually are, since the average might include a path that had only length 1 (and therefore a return of ~3). For example, by looking at the average rewards, we see that they're basically the same:
So, this seems like a bug in the logging/eval code, but not in the training (phew!). I'll push a fix soon.
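To illustrate the bias with made-up numbers (the ~3 reward per step comes from the comment above; the path counts and lengths are hypothetical):

```python
import numpy as np

# Ten full 1000-step eval paths vs. the same batch with one path that
# terminated after a single step.
full_paths = [np.full(1000, 3.0) for _ in range(10)]
mixed_paths = [np.full(1000, 3.0) for _ in range(9)] + [np.full(1, 3.0)]

def mean_return(paths):
    # Average of per-path summed rewards -- what the eval logger reports.
    return np.mean([p.sum() for p in paths])

def mean_reward(paths):
    # Average per-step reward across all steps, regardless of path length.
    return np.concatenate(paths).mean()

# The per-step rewards are identical in both batches (3.0), but the single
# length-1 path drags the mean return down by ~10%: 3000.0 vs 2700.3.
assert mean_reward(full_paths) == mean_reward(mixed_paths)
assert mean_return(full_paths) > mean_return(mixed_paths)
```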
Thanks! I'll rerun the code and let you know how it goes.
I'm getting the following now:
Can you smooth out the tensorflow results to see if the results are actually that different?
It still looks worse than the tensorflow results unfortunately, especially near the end of training.
Yeah, it's a bit different... It's not a big difference, but I'll look into it. The only change I can think of is that I switched from online training to batch training; I expect to add support for online mode soon.
Okie, thanks so much!
Hi, I was wondering how to change the random seeds?
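A minimal sketch of seeding a Python RL experiment (the helper name is mine; rlkit's own experiment launcher may expose a `seed` argument, so check the version you're running):

```python
import random
import numpy as np

def set_seed(seed):
    """Seed the common sources of randomness in a Python RL experiment."""
    random.seed(seed)
    np.random.seed(seed)
    # If using PyTorch (as rlkit does), also seed it:
    # import torch
    # torch.manual_seed(seed)
    # Gym-style environments are usually seeded separately as well,
    # e.g. env.seed(seed) (older gym) or env.reset(seed=seed) (gymnasium).

set_seed(0)
a = np.random.rand(3)
set_seed(0)
b = np.random.rand(3)
assert np.array_equal(a, b)  # same seed -> identical draws
```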