
qmap's Introduction

Scaling All-Goals Updates in Reinforcement Learning Using Convolutional Neural Networks

Being able to reach any desired location in the environment can be a valuable asset for an agent. Learning a policy to navigate between all pairs of states individually is often not feasible. An all-goals updating algorithm uses each transition to learn Q-values towards all goals simultaneously and off-policy. However, the expense of performing these numerous updates in parallel has so far limited the approach to small tabular cases. To tackle this problem, we propose to use convolutional network architectures to generate Q-values and updates for a large number of goals at once. We demonstrate the accuracy and generalization qualities of the proposed method on randomly generated mazes and Sokoban puzzles. In the case of on-screen goal coordinates, the resulting mapping from frames to distance-maps directly informs the agent about which places are reachable and in how many steps. As an example of application, we show that replacing the random actions in epsilon-greedy exploration with several actions towards feasible goals generates better exploratory trajectories on the Montezuma's Revenge and Super Mario All-Stars games.

The paper can be found on arXiv, while videos are available on the website.

Installation

First, make sure you have TensorFlow, Baselines, Gym, and Gym Retro installed. This code was written for versions 1.11.0, 0.1.5, 0.10.5, and 0.6.0 of these libraries, respectively.
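To keep these pins in one place, the version requirements above can be captured in a small sketch (the PyPI package names here are assumptions, notably `gym-retro` for Gym Retro):

```python
# Pinned library versions from this README, expressed as pip requirement lines.
PINNED = {
    "tensorflow": "1.11.0",
    "baselines": "0.1.5",
    "gym": "0.10.5",
    "gym-retro": "0.6.0",
}

def requirement_lines(pins):
    # Produce "name==version" lines suitable for a requirements.txt file.
    return ["%s==%s" % (name, version) for name, version in sorted(pins.items())]
```

Writing the output of `requirement_lines(PINNED)` to a `requirements.txt` file lets you reproduce the environment with a single `pip install -r requirements.txt`.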

To install this package, run:

git clone https://github.com/fabiopardo/qmap.git
cd qmap
pip install -e .

Then copy the SuperMarioAllStars-Snes folder into the retro/data/stable directory of your Gym Retro installation.
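The exact location of that directory depends on where pip installed Gym Retro. A minimal path helper, assuming the standard package layout (`data/stable` inside the `retro` package):

```python
import os

def retro_stable_dir(retro_package_dir):
    # Build the path to Gym Retro's bundled game data directory,
    # given the directory of the installed `retro` package.
    return os.path.join(retro_package_dir, "data", "stable")
```

For example, `python -c "import retro, os; print(os.path.dirname(retro.__file__))"` prints the package directory to pass in.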

Usage

First, go to the directory where you wish to save the results, for example:

cd ~/Desktop

By default the training scripts will create a qmap_results folder there.

To train the proposed agent on Super Mario Bros. (All-Stars) level 1.1 you can run:

python -m qmap.train_mario --render

Remove --render to avoid rendering the episodes (videos are saved in the result folder anyway). To train only DQN or only Q-map, use --no-qmap or --no-dqn, respectively. Disabling both yields a purely random agent.

Similarly, to train the proposed agent on Montezuma's Revenge you can run:

python -m qmap.train_montezuma --render

Or, to learn Q-frames on the proposed grid world use:

python -m qmap.train_gridworld --render

These scripts produce images, videos, and CSV files in the result folder. To plot the values contained in the CSV files, run:

python -m qmap.utils.plot
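If you prefer to inspect the CSV files directly, a minimal parsing sketch follows (the column names used in the test are illustrative; the actual files define their own):

```python
import csv
import io

def read_csv_columns(text):
    # Parse CSV text into a {column_name: [float values]} mapping,
    # assuming every non-header cell is numeric.
    reader = csv.DictReader(io.StringIO(text))
    columns = {}
    for row in reader:
        for name, value in row.items():
            columns.setdefault(name, []).append(float(value))
    return columns
```

Pass the contents of any results CSV to get per-column value lists ready for plotting with your tool of choice.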

The plots are saved as PDF files, which can be kept open and regenerated every 10 seconds using, for example:

watch -n10 python -m qmap.utils.plot

To filter which environments or agents to plot, use --without or --only.

To load an already trained agent, run for example:

python -m qmap.train_mario --load qmap_results/ENV/AGENT/RUN/tensorflow/step_STEP.ckpt --level 2.1

where ENV is the environment used for pre-training (for example, level 1.1) and AGENT, RUN, and STEP identify the saved run.
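The checkpoint layout described above can be sketched as a small helper (the placeholder names must match an actual training run on disk):

```python
import os

def checkpoint_path(env, agent, run, step, root="qmap_results"):
    # Mirror the result-folder layout: ROOT/ENV/AGENT/RUN/tensorflow/step_STEP.ckpt
    return os.path.join(root, env, agent, run, "tensorflow", "step_%s.ckpt" % step)
```

This makes it easy to script evaluation over several checkpoints of the same run.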

BibTeX

To cite this repository in publications, please use:

@inproceedings{pardo2020scaling,
  title={Scaling All-Goals Updates in Reinforcement Learning Using Convolutional Neural Networks},
  author={Pardo, Fabio and Levdik, Vitaly and Kormushev, Petar},
  booktitle={Thirty-Fourth AAAI Conference on Artificial Intelligence},
  year={2020}
}

qmap's People

Contributors

fabiopardo

qmap's Issues

ImportError: cannot import name 'ObservationInput'

Following the README, when I run:

python -m qmap.train_mario --render

The result is:

Traceback (most recent call last):
  File "/home/mirror/anaconda3/envs/qmap/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/mirror/anaconda3/envs/qmap/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/mirror/Documents/GithubCode/qmap/qmap/train_mario.py", line 9, in <module>
    from qmap.agents.q_map_dqn_agent import Q_Map_DQN_Agent
  File "/home/mirror/Documents/GithubCode/qmap/qmap/agents/q_map_dqn_agent.py", line 3, in <module>
    from baselines.deepq.utils import ObservationInput
ImportError: cannot import name 'ObservationInput'

The installed baselines version is:

(qmap) mirror@agent:~/Desktop$ pip list | grep base
baselines           0.1.5   
