
dfp's Introduction

Reinforcement Learning with Goals

This repo hosts the code associated with my O'Reilly article, "Reinforcement Learning for Various, Complex Goals, Using TensorFlow," published on DATE.

The code in this repository contains implementations of Deep Q-Network and Learning to Act by Predicting the Future.
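As background on the second of these: a DFP agent trains a network to predict future measurements at several temporal offsets, and selects the action whose predicted measurements score best under the current goal weighting. A minimal sketch of that action-selection rule (all names, shapes, and the example numbers here are illustrative, not the notebook's actual code):

```python
import numpy as np

def choose_action(predictions, goal):
    """Pick the action whose predicted future measurements
    score highest under the goal weighting.

    predictions: (num_actions, num_offsets, num_measurements)
        predicted future measurement changes per action, e.g.
        at offsets [1, 2, 4, 8, 16, 32] steps ahead.
    goal: (num_measurements,) weights expressing which
        measurements the agent currently cares about.
    """
    # Sum each action's predictions over the temporal offsets,
    # weight by the goal vector, and take the best-scoring action.
    scores = (predictions.sum(axis=1) * goal).sum(axis=1)
    return int(np.argmax(scores))

# Two actions, three offsets, two measurements (e.g. deliveries, battery)
preds = np.array([
    [[0.1, 0.0], [0.2, 0.0], [0.3, 0.0]],  # action 0 raises measurement 0
    [[0.0, 0.5], [0.0, 0.6], [0.0, 0.7]],  # action 1 raises measurement 1
])
goal = np.array([1.0, 0.0])                # only care about measurement 0
print(choose_action(preds, goal))          # 0
```

Because the goal vector enters only at action-selection time, the same trained predictor can serve different goals, which is the property the notebook explores.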

Requirements and installation

To run this notebook, you'll need to install TensorFlow, NumPy, Matplotlib, and Jupyter.

There are two easy ways to install these libraries and their dependencies:

Option A: use the provided Dockerfile configured for this notebook

  1. Download and unzip this entire repo from GitHub, either interactively, or by entering

    git clone https://github.com/awjuliani/dfp.git
  2. Open your terminal and use cd to navigate into the top directory of the repo on your machine

  3. To build the Dockerfile, enter

    docker build -t dfp_dockerfile -f dockerfile .

    If you get a permissions error on running this command, you may need to run it with sudo:

    sudo docker build -t dfp_dockerfile -f dockerfile .
  4. Run Docker from the Dockerfile you've just built

    docker run -it -p 8888:8888 -p 6006:6006 dfp_dockerfile bash

    or

    sudo docker run -it -p 8888:8888 -p 6006:6006 dfp_dockerfile bash

    if you run into permission problems.

  5. Launch Jupyter and TensorBoard together using tmux

    tmux
    
    jupyter notebook

    Press Ctrl+B, then C to open a new tmux window, then

    cd './dfp'
    tensorboard --logdir=worker_0:'./train_0',...worker_n:'./train_n'
    

    where n is the number of workers used in async training.

    Once both Jupyter and TensorBoard are running, navigate in your browser to the URLs shown in the terminal output (usually http://localhost:8888/ for Jupyter Notebook and http://localhost:6006/ for TensorBoard)

Option B: install Anaconda Python, TensorFlow, and other requirements

NumPy can be tricky to install manually, so we recommend using the managed Anaconda Python distribution, which includes NumPy, Matplotlib, and Jupyter in a single installation. The Docker-based method above is much easier, but if you have a compatible NVIDIA GPU, manual installation makes it possible to use GPU acceleration to speed up training.

  1. Follow the installation instructions for Anaconda Python. We recommend using Python 3.6.

  2. Follow the platform-specific TensorFlow installation instructions. Be sure to follow the "Installing with Anaconda" process, and create a Conda environment named tensorflow.

  3. If you aren't already inside your Conda TensorFlow environment, enter it by typing

    source activate tensorflow
  4. Install other requirements by entering

    pip install -r requirements.txt
  5. Download and unzip this entire repo from GitHub, either interactively, or by entering

    git clone https://github.com/awjuliani/dfp.git
  6. Use cd to navigate into the top directory of the repo on your machine

  7. Launch Jupyter and TensorBoard together using tmux

    tmux
    
    jupyter notebook

    Press Ctrl+B, then C to open a new tmux window, then

    cd './dfp'
    tensorboard --logdir=worker_0:'./train_0',...worker_n:'./train_n'
    

    where n is the number of workers used in async training.

    Once both Jupyter and TensorBoard are running, navigate in your browser to the URLs shown in the terminal output (usually http://localhost:8888/ for Jupyter Notebook and http://localhost:6006/ for TensorBoard)

dfp's People

Contributors

awjuliani, jonbruner, mazecreator, mengguo


dfp's Issues

Could you explain more about gridworld_rewards.py?

In env.reset()

def reset(self):
    # Clear the object list and re-create the hero at a new position
    self.objects = []
    self.orientation = 0
    self.hero = gameOb(self.newPosition(0),1,[1,1,1],None,'hero')

    # Initial measurement vector
    self.measurements = [0.0,1.0]

    self.objects.append(self.hero)
    # Spawn a single goal object
    for i in range(1):
        bug = gameOb(self.newPosition(0),1,[0,1,0],1,'goal')
        self.objects.append(bug)
    self.goal = bug
    state,s_big = self.renderEnv()
    self.state = state
    # Return the small and large renderings, the measurements,
    # and the (x, y) positions of the goal and the hero
    return state,s_big,self.measurements,[self.goal.x,self.goal.y],[self.hero.x,self.hero.y]

It returns state, s_big, self.measurements, [self.goal.x, self.goal.y], and [self.hero.x, self.hero.y].

What are those values, and what is self.objects for?

Could you advise on adapting the DFP algorithm to actor-critic methods (or DDPG, PPO) for continuous action spaces?

Hi, I'm a graduate student and want to say 'thank you' for explaining DFP in detail.

It is a very interesting algorithm.
However, since I am majoring in robotics with a learning-based approach, I want to make it work in a continuous action space.
I've therefore tried to base it on DDPG and actor-critic methods.
Unfortunately, it doesn't work. Could you give me some advice, please?

Wonchul Kim

Possible Bug in "Work()"

First off, great work! Awesome concept, and the visualization of it has really helped me.

I am not sure if it is my system setup with Python 2, but I was getting only "okay" results when running the training: it would climb to about 18 deliveries and then fall back a bit, as if it had overfit. This didn't seem right to me, so I started looking for an error (I had to update the ExperienceBuffer, as it wouldn't work out of the box on my setup).

What I found in the Work() method of the DFP file was this assignment:
s = s1
m = m1
g = g1

This appears to make s and s1 point to the same object in memory, and likewise for m and g. This was causing problems with the self.g assignment as well (usually when it restarted a new run or when the battery ran low).

I have updated that section of code to look like this:
s = np.copy(s1)
m = []
m = m1[:]
g = g1[:]

For some reason, "m" was an array while m1 was a list, so I changed m to a list for my usage. Now my deliveries peg out around 28, and the agent almost always runs the full 100 steps. (I just noticed that I had changed my offsets to [1,2,3,4,5,6,7,8,12,16,24,32] while debugging and didn't change them back, so your mileage may vary.)
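The fix works because plain assignment in Python binds a second name to the same object rather than copying it, so later in-place updates show through both names. A quick sketch of the difference (standalone, independent of the repo code):

```python
import numpy as np

# Aliasing: assignment binds both names to the SAME array,
# so an in-place update to s1 is visible through s as well.
s1 = np.zeros((2, 2))
s = s1
s1 += 1.0
print(s[0, 0])   # 1.0 -- s changed too, because s is s1

# Copying: np.copy makes an independent array, which is
# what the corrected code above relies on.
s1 = np.zeros((2, 2))
s = np.copy(s1)
s1 += 1.0
print(s[0, 0])   # 0.0 -- s kept the old values

# For lists, a slice makes a shallow copy with the same effect.
m1 = [0.0, 1.0]
m = m1[:]
m1[0] = 5.0
print(m[0])      # 0.0 -- unaffected by the change to m1
```

Note that rebinding (s = something_new) never affects the old object; the bug only bites when the shared object is mutated in place, which is why it surfaced intermittently.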

Thanks again for a great example of this powerful RL approach.

Runtime warning followed by imageio error in final cell of Jupyter notebooks

Both notebooks run perfectly on my machine until the last cell, when I get a nonfatal runtime warning followed a few seconds later by a fatal error. Both the warning and the error are repeated for each thread:

/home/jbruner/anaconda3/lib/python3.6/site-packages/numpy/core/fromnumeric.py:2889: RuntimeWarning: Mean of empty slice.
  out=out, **kwargs)
/home/jbruner/anaconda3/lib/python3.6/site-packages/numpy/core/_methods.py:80: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
/home/jbruner/anaconda3/lib/python3.6/site-packages/ipykernel/__main__.py:110: DeprecationWarning: PyUnicode_AsEncodedObject() is deprecated; use PyUnicode_AsEncodedString() to encode from str to bytes or PyCodec_Encode() for generic encoding
/home/jbruner/anaconda3/lib/python3.6/site-packages/ipykernel/__main__.py:112: DeprecationWarning: PyUnicode_AsEncodedObject() is deprecated; use PyUnicode_AsEncodedString() to encode from str to bytes or PyCodec_Encode() for generic encoding
/home/jbruner/anaconda3/lib/python3.6/site-packages/ipykernel/__main__.py:113: DeprecationWarning: PyUnicode_AsEncodedObject() is deprecated; use PyUnicode_AsEncodedString() to encode from str to bytes or PyCodec_Encode() for generic encoding
Exception in thread Thread-8:
Traceback (most recent call last):
  File "/home/jbruner/anaconda3/lib/python3.6/site-packages/imageio/plugins/ffmpeg.py", line 82, in get_exe
    auto=False)
  File "/home/jbruner/anaconda3/lib/python3.6/site-packages/imageio/core/fetching.py", line 102, in get_remote_file
    raise NeedDownloadError()
imageio.core.fetching.NeedDownloadError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/jbruner/anaconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/home/jbruner/anaconda3/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "<ipython-input-6-c1ed32fc9e91>", line 37, in <lambda>
    worker_work = lambda: worker.work(sess,coord,saver,train)
  File "<ipython-input-4-269c92992ba4>", line 103, in work
    duration=len(self.images)*time_per_step,true_image=True)
  File "/home/jbruner/github/dfp/helper.py", line 36, in make_gif
    import moviepy.editor as mpy
  File "/home/jbruner/anaconda3/lib/python3.6/site-packages/moviepy/editor.py", line 22, in <module>
    from .video.io.VideoFileClip import VideoFileClip
  File "/home/jbruner/anaconda3/lib/python3.6/site-packages/moviepy/video/io/VideoFileClip.py", line 3, in <module>
    from moviepy.video.VideoClip import VideoClip
  File "/home/jbruner/anaconda3/lib/python3.6/site-packages/moviepy/video/VideoClip.py", line 20, in <module>
    from .io.ffmpeg_writer import ffmpeg_write_image, ffmpeg_write_video
  File "/home/jbruner/anaconda3/lib/python3.6/site-packages/moviepy/video/io/ffmpeg_writer.py", line 19, in <module>
    from moviepy.config import get_setting
  File "/home/jbruner/anaconda3/lib/python3.6/site-packages/moviepy/config.py", line 38, in <module>
    FFMPEG_BINARY = get_exe()
  File "/home/jbruner/anaconda3/lib/python3.6/site-packages/imageio/plugins/ffmpeg.py", line 86, in get_exe
    raise NeedDownloadError('Need ffmpeg exe. '
imageio.core.fetching.NeedDownloadError: Need ffmpeg exe. You can download it by calling:
  imageio.plugins.ffmpeg.download()

Readme and Dockerfile

You're welcome to borrow from this step-by-step readme from another one of our TensorFlow projects. It might seem pretty basic for programmers who are learning about neural networks, but we'd like to make the installation and configuration process as smooth as possible for the readers.

What actually is going on?

A quick run shows that the agent learns very quickly to survive and deliver objects as needed. Hooray!

But if you take a closer look at the policy being used (i.e., switch to going for the battery whenever it is below 0.3, otherwise go for delivery), note that no matter how good or bad the prediction network is, the agent will survive. This policy does not use the prediction outcome, only the current measurement.

I also ran the network under this policy overnight, and the prediction error did not decrease to zero (gradient norm around 1.5e3 and loss around 7.4). In other words, the "predicting the future" part is not working.

@awjuliani do you have any opinion on this?

Performance curves show very low number of deliveries and 0 losses

Hi Arthur,
Thank you for this article and code! I'm running the notebook on Windows, with Anaconda Python 3.5. My performance curves look nothing like yours, showing zero losses and an average of fewer than 1 delivery per episode. I get two warnings on startup but no other errors. In the last cell of the notebook I see this output:
Starting worker 0
Starting worker 1
Starting worker 2
Starting worker 3
Starting worker 4
Starting worker 5
Starting worker 6
Starting worker 7
C:\Users\dp\AppData\Local\Continuum\Anaconda3\envs\tensor35\lib\site-packages\numpy\core\fromnumeric.py:2889: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
C:\Users\dp\AppData\Local\Continuum\Anaconda3\envs\tensor35\lib\site-packages\numpy\core\_methods.py:80: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)

[screenshots of the performance curves]

Any suggestions on what might be wrong and how to correct it would be much appreciated!
