Comments (11)

vitchyr commented on September 21, 2024

Looks like the problem was that the refactored v0.2 code was missing the future entropy term. See #43. Closed with 99e080f.
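
For context, the term in question lives in the soft Bellman backup: the Q-target should include the entropy of the policy at the next state. Below is a minimal PyTorch-style sketch of that target with placeholder tensors; the names, shapes, and the fixed `alpha` are illustrative assumptions, not rlkit's actual code.

```python
import torch

# Illustrative batch of transitions; every tensor here is a stand-in
# for the outputs of the target Q-networks and the policy.
batch = 256
rewards = torch.randn(batch, 1)
terminals = torch.zeros(batch, 1)
discount, alpha = 0.99, 0.2

target_q1 = torch.randn(batch, 1)    # target_qf1(s', a') with a' ~ pi(.|s')
target_q2 = torch.randn(batch, 1)    # target_qf2(s', a')
next_log_pi = torch.randn(batch, 1)  # log pi(a'|s')

target_q = torch.min(target_q1, target_q2)
# `- alpha * next_log_pi` is the future entropy term that the
# refactored v0.2 code had dropped:
q_target = rewards + (1.0 - terminals) * discount * (
    target_q - alpha * next_log_pi
)
```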

In particular, here's the Hopper plot that I got:

[plot: Hopper-v2 learning curve]

quanvuong commented on September 21, 2024

Thank you for the speedy reply! I ran v0.2 with the default hyperparameters. I'll double-check that they match what you posted.

To confirm, the performance metric is logged to `evaluation/Returns Mean`, right?

Also, would you be so kind as to share your plotting code? Did you have to apply smoothing to get the solid blue line in your graph? If I plot `evaluation/Returns Mean` averaged over 5 seeds directly, I get a very jagged zigzag pattern in my graph.

vitchyr commented on September 21, 2024

I'll run it again just to check. Yes, that's the correct metric. I used viskit for plotting and did temporal smoothing. I think the important thing is that the thick, shaded region is about the same width as yours.
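
For reference, here's a minimal sketch of that kind of temporal smoothing over seed-averaged returns; the plain moving average, window size, and array shapes are my assumptions, not viskit's actual internals.

```python
import numpy as np

def smooth(curve, window=10):
    """Moving average over epochs; similar in spirit to viskit's
    temporal smoothing (exact method and window are assumptions)."""
    kernel = np.ones(window) / window
    return np.convolve(curve, kernel, mode="valid")

# Hypothetical (n_seeds, n_epochs) array of "evaluation/Returns Mean":
per_seed = np.random.randn(5, 3000).cumsum(axis=1)
mean_curve = per_seed.mean(axis=0)  # average over seeds first
plot_curve = smooth(mean_curve)     # then smooth to get a solid line
```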

vitchyr commented on September 21, 2024

What hyperparameters did you run, exactly? I got this after running over 5 seeds:
[plot: Hopper-v2 returns over 5 seeds]

Also, are you using the latest code? I pushed v0.2 only 3 days ago, which includes a few changes that seem to have helped.

Some relevant hyperparams:

  "batch_size": 256,
  "layer_size": 256,
  "max_path_length": 1000,
  "min_num_steps_before_training": 1000,
  "num_epochs": 3000,
  "num_eval_steps_per_epoch": 5000,
  "num_expl_steps_per_train_loop": 1000,
  "num_trains_per_train_loop": 1000,
  "replay_buffer_size": 1000000,
  "discount": 0.99,
  "policy_lr": 0.0003,
  "qf_lr": 0.0003,
  "reward_scale": 1,
  "soft_target_tau": 0.005,
  "target_update_period": 1,
  "use_automatic_entropy_tuning": true

vitchyr commented on September 21, 2024

Ah, I think the issue is that the evaluation paths are sometimes not exactly 1000 steps long, e.g., if the agent terminates early. This biases the returns to look worse than they actually are, since the average might include a path of length 1 (and therefore a return of ~3). For example, looking at the average rewards, we see that they're basically the same:

[plot: average rewards comparison]

So, this seems like a bug in the logging/eval code, but not in the training (phew!). I'll push a fix soon.
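
A toy numerical illustration of the bias (the numbers are made up to match the length-1 example above):

```python
import numpy as np

# One eval epoch with a full 1000-step path and an early-terminated
# 1-step path, each earning roughly 3 reward per step:
path_returns = np.array([3000.0, 3.0])
path_lengths = np.array([1000, 1])

print(path_returns.mean())                      # 1501.5 -- return looks halved
print(path_returns.sum() / path_lengths.sum())  # 3.0 -- per-step reward is fine
```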

quanvuong commented on September 21, 2024

Thanks! I'll rerun the code and let you know how it goes.

vitchyr commented on September 21, 2024

I'm getting the following now:
[plot: updated Hopper-v2 results]
Can you smooth out the TensorFlow results to see if the results are really that different?

quanvuong commented on September 21, 2024

It still looks worse than the TensorFlow results, unfortunately, especially near the end of training.

[plot: TSAC Hopper-v2 TF]

vitchyr commented on September 21, 2024

Yeah, it's a bit different... It's not a big difference, but I'll look into it. The only difference I can think of is that I switched to batch training rather than online training; I'm expecting to add support for online mode soon.
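
For concreteness, here's a schematic sketch of the two modes; the helpers are hypothetical stand-ins, not rlkit's API.

```python
from collections import deque
import random

replay_buffer = deque(maxlen=1_000_000)

def collect_steps(n):
    # Stand-in for environment interaction returning n transitions.
    return [random.random() for _ in range(n)]

def train_one_batch(buf):
    # Stand-in for one gradient step on a sampled minibatch.
    pass

# Batch mode (rlkit v0.2): alternate a 1000-step collection phase
# with 1000 gradient steps per train loop.
for epoch in range(3):
    replay_buffer.extend(collect_steps(1000))
    for _ in range(1000):
        train_one_batch(replay_buffer)

# Online mode (original TF code): one environment step per gradient step.
for step in range(3000):
    replay_buffer.extend(collect_steps(1))
    train_one_batch(replay_buffer)
```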

quanvuong commented on September 21, 2024

Okie, thanks so much!

ZhenhuiTang commented on September 21, 2024

> What hyperparameters did you run exactly? […]

Hi, I was wondering how to change the random seeds.
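
As a starting point, a generic seeding sketch; this is not rlkit-specific, and how rlkit's launcher threads a seed through its samplers is an assumption you'd want to verify.

```python
import random

import numpy as np
import torch

def set_seed(seed: int) -> None:
    """Seed the common RNG sources; rlkit-specific plumbing may differ."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# e.g., launch five runs with seeds 0..4 and average the logged returns:
set_seed(0)
```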
