Comments (16)
Yes, interesting... something is getting lost between the version that is checked into Google3 and the settings you are running.
@yotam @aslanides and I will have a look into this...
Going to keep this open for now and try to reproduce this...
Hi Peter!
Thanks for raising this... I think we might have seen some slippage in agent performance.
I'm not sure if this has come from updates to:
- TensorFlow 1 -> 2
- Different parameter settings
- Changes to random seed
My suspicion is that some small details in the TF1 -> TF2 migration changed some scores (the agents aren't exactly the same).
We will look into this and then re-run baselines with updated numbers.
Many thanks,
Ian
Hello again!
I have just run the agents checked in at HEAD and I did not see the scoring you observed...
We may need to build in some more continuous testing, but the MNIST scores you report in particular seem "off" for the DQN implementation.
Can you confirm this is still an issue for you?
Is this poor performance for your implementation of DQN, or the baseline implementation we provide?
I also have a similar observation concerning MountainCar, but related to the actor-critic (AC) algorithm. To me it seems like there is a major difference between the results reported in the paper (close to 1) and the ones in this thread (close to 0). I have also tried running actor_critic_rnn on mountain_car and it does not seem to learn (with the default hyperparameters).
Yes @mklissa - I see that difference above.
There have been several moving pieces:
- Changes to the bsuite environment (at release, MountainCar had gravity pointing in the wrong direction)
- Changes to the agents (migration from TF1 -> TF2/JAX)
However, I think the best approach is to go from what is at HEAD and start a new issue to update paper/reference colabs to incorporate this bug fix.
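For context, the textbook mountain-car update looks roughly like this (a sketch of the classic dynamics, not bsuite's actual code); "gravity the wrong direction" corresponds to a flipped sign on the cosine term:

```python
# Sketch of the classic mountain-car dynamics; NOT bsuite's actual
# implementation. The release bug described above would correspond to
# flipping the sign of the cosine (gravity) term below.
import numpy as np

def mountain_car_step(position, velocity, action):
    """One transition of classic mountain car; action is -1, 0, or +1."""
    velocity += 0.001 * action - 0.0025 * np.cos(3 * position)  # gravity term
    velocity = np.clip(velocity, -0.07, 0.07)
    position = np.clip(position + velocity, -1.2, 0.6)
    return position, velocity
```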
Dear Ian,
Thanks for looking into this! Back in March, I observed poor performance on mnist with both the baseline implementation and my own implementation of DQN.
Given that mnist seems to work perfectly fine for you, I assume there must be some problem on my side. I will set up a system from scratch and run the baseline implementation of dqn again. It might take a while though until I find time to do that.
Best regards,
Peter
Hi,
I finally found some time to look into the issue. On my laptop, the performance of the baseline TensorFlow DQN agent is still quite bad (with a score somewhere around 0.25 in the bar plots above).
I used a fresh install of Pop!_OS 20.04 (a distribution based on Ubuntu) and then performed as few steps as possible to run the agent:
- Installed Anaconda
- Created a Conda environment with Python 3.7
- Followed the steps from the bsuite GitHub page:
  - pip install --upgrade pip setuptools
  - pip install bsuite
  - pip install bsuite[baselines]
- Opened bsuite/bsuite/baselines/tf/dqn/run.py, changed the bsuite_id to 'SWEEP' and set the 'verbose' flag to False
- python run.py
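For reference, the bar-plot scores above come from the standard bsuite analysis pipeline, roughly along these lines (a sketch based on the analysis colab; the results path is illustrative):

```python
# Sketch of how the bar-plot scores were produced, following the standard
# bsuite analysis colab (untested; the results path is illustrative).
from bsuite.logging import csv_load
from bsuite.experiments import summary_analysis

df, sweep_vars = csv_load.load_bsuite('/tmp/bsuite')  # wherever run.py wrote its CSVs
bsuite_score = summary_analysis.bsuite_score(df, sweep_vars)
summary_analysis.bsuite_bar_plot(bsuite_score, sweep_vars)
```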
Hope this helps.
Best regards,
Peter
Ah... OK, well I think in order to get the claimed performance, you need to run dqn.default_agent().
I can see that this is a bit confusing, but we wanted to expose the flags as an easy way for people to tinker!
If you instead go to baselines/tf/run.py then you should be able to get the same behaviour... is that right?
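Something along these lines should work (a rough sketch, untested; 'mnist/0' and the results_dir are just examples):

```python
# Rough sketch (untested): run the paper-default DQN agent on a single
# bsuite environment. The bsuite_id and results_dir are just examples.
import bsuite
from bsuite.baselines import experiment
from bsuite.baselines.tf import dqn

env = bsuite.load_and_record_to_csv('mnist/0', results_dir='/tmp/bsuite')
agent = dqn.default_agent(env.observation_spec(), env.action_spec())
experiment.run(agent, env, num_episodes=env.bsuite_num_episodes)
```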
BTW... do you think we should instead remove the flag options and avoid this kind of confusion?
I would keep the flag options, but maybe make their defaults match the values used by the default agent.
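Something like this is what I have in mind (a sketch; the flag names and default values are hypothetical, not the actual run.py flags):

```python
# Sketch only: flag names and defaults are hypothetical, not the actual
# run.py flags. The idea is that each flag's default matches what
# dqn.default_agent() uses, so an unmodified run reproduces the baselines.
from absl import flags

flags.DEFINE_integer('batch_size', 32, 'Matches default_agent.')     # hypothetical
flags.DEFINE_float('learning_rate', 1e-3, 'Matches default_agent.')  # hypothetical
flags.DEFINE_float('epsilon', 0.05, 'Matches default_agent.')        # hypothetical
```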
As you suggested, I replaced the agent that uses the flags with dqn.default_agent() in run.py and ran the experiments again. Unfortunately, there was no improvement on the mnist experiments.
@mklissa you said you observed something similar on MountainCar. Did the mnist experiments work for you? If I'm the only one experiencing this problem, then there might just be some issue on my side.
Best regards,
Peter
Hey there, I'm writing to report that I'm also experiencing the same problem as @pluebcke on MNIST: I couldn't replicate the good MNIST results reported in the paper. I also observed poor performance (a score of 0.2-0.26 at most) using PPO and DQN agents from an external library (stable-baselines); I tried different hyperparameters, numbers of layers/neurons, and activation functions, with no effect. I also checked the MNIST env implementation offered here and it seemed OK to me.
Today I created a new virtual env with the latest bsuite version with the baselines, ran the 20 seeds twice, and the baseline DQN agent also scored 0.23. This also happens with the noise and scale variants.
Hi @jbarsce - I'm not sure I understand the question.
So, are you saying that:
(a) The TF DQN checked into bsuite.baselines is not solving the bandit task for you?
(b) Another agent is unable to solve the MNIST task?
We have some tools for testing this internally within Google/DeepMind... and based on that I'm confident that the bsuite/baselines/jax/dqn and bsuite/baselines/tf/dqn do reproduce the performance.
However... we clearly need to work out a way to share these tests/reproducibility/installation instructions so that this confusion does not arise.
For reference, here is a record of the nightly runs for the TF baselines.
You can see that some of the experiments are a little noisy... but the TF DQN is consistently reproducing the MNIST results above.
Hi Ian, thanks for the quick reply! Yes, I ran the bsuite experiments with another DQN agent and noticed that, while the other envs performed similarly to the accompanying paper, MNIST was the only one that underperformed.
As this external agent had several variations, I tried to replicate the results with the DQN agent from this repo, trying both TF and JAX and isolating them in a new virtual environment. In case they are of any help, these are the steps I followed (taken from the JAX repo and from here):
- Created a conda environment with python==3.6
- pip install --upgrade pip setuptools
- pip install bsuite[baselines]

For TensorFlow 2.1, I ran the experiments with:
- python bsuite/bsuite/baselines/tf/dqn/run.py --bsuite_id=MNIST

For JAX:
- pip install git+https://github.com/deepmind/dm-haiku
- pip install --upgrade jax jaxlib
- pip install git+git://github.com/deepmind/optax.git
- pip install git+git://github.com/deepmind/rlax.git
- python bsuite/bsuite/baselines/jax/dqn/run.py --bsuite_id=MNIST

Environment: Ubuntu 18.04 (Bionic)
Please let me know if you need any other information. Finally, thanks for this great repository!
Juan
Just a wild guess: maybe something went wrong with the download of the MNIST dataset for Juan and me?
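A quick way to test that guess might look something like this (a sketch, untested): load the environment and inspect an observation to see whether it looks like a sensible image.

```python
# Quick sanity check (sketch, untested): load the mnist environment and
# inspect an observation; a corrupted download should show up as garbage here.
import numpy as np
import bsuite

env = bsuite.load_from_id('mnist/0')
timestep = env.reset()
obs = np.asarray(timestep.observation)
print(obs.shape, obs.min(), obs.max())  # expect image-like pixel values
```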
Repro in ~10 lines (excluding imports): https://colab.research.google.com/drive/1XtTv-p2bXfvMBT_77cWjWRHPXIvimWlO?usp=sharing