In this project we train an agent to navigate (and collect bananas!) in a large, square world.
The following project tree shows where to find code, documentation, etc.
```
.
├── README.md ... this README
├── environments
│   └── ... your Unity environment goes here
├── models
│   └── drlnd_p1_model.pth ... the serialized (trained) model weights
├── notebooks
│   ├── Navigation.ipynb ... the entry point where you can train and/or test the agent
│   ├── Report.ipynb ... the project report
│   └── scores.png ... saved plot (shows average rewards of episodes)
├── python
│   └── ... contains and defines project dependencies (mostly borrowed from https://github.com/udacity/deep-reinforcement-learning/tree/master/p1_navigation)
└── src
    ├── agents.py ... contains the DoubleDDQN agent implementation
    ├── environments.py ... contains a wrapper for the Unity env
    ├── experiences.py ... contains replay buffers
    └── models.py ... contains neural network implementations
```
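As the tree above notes, `src/agents.py` implements a Double DQN agent. For orientation, here is a minimal sketch of the core Double DQN idea in PyTorch; the function name and tensor layout are illustrative, not the repo's actual API:

```python
import torch

def double_dqn_targets(q_online, q_target, rewards, next_states, dones, gamma=0.99):
    """Double DQN: the online network selects the greedy action,
    the (slow-moving) target network evaluates it.
    rewards and dones are float tensors of shape (batch, 1)."""
    with torch.no_grad():
        # action selection with the online network
        best_actions = q_online(next_states).argmax(dim=1, keepdim=True)
        # action evaluation with the target network
        next_q = q_target(next_states).gather(1, best_actions)
        # no bootstrapping past terminal states
        return rewards + gamma * next_q * (1 - dones)
```

Decoupling action selection from action evaluation in this way is what reduces the overestimation bias of vanilla DQN.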
A reward of +1 is provided for collecting a yellow banana, and a reward of -1 is provided for collecting a blue banana. Thus, the goal of the agent is to collect as many yellow bananas as possible while avoiding blue bananas.
The state space has 37 dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. Given this information, the agent has to learn how to best select actions. Four discrete actions are available, corresponding to:
- `0` - move forward.
- `1` - move backward.
- `2` - turn left.
- `3` - turn right.
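To make the interface concrete, a minimal interaction loop with a random policy looks roughly like the following, assuming the `unityagents` package from the `python` folder; the `file_name` path is a placeholder for whichever build you download:

```python
import numpy as np
from unityagents import UnityEnvironment

env = UnityEnvironment(file_name="environments/Banana.app")  # placeholder path
brain_name = env.brain_names[0]

env_info = env.reset(train_mode=False)[brain_name]
state = env_info.vector_observations[0]   # the 37-dimensional state
score = 0
while True:
    action = np.random.randint(4)         # one of the four discrete actions
    env_info = env.step(action)[brain_name]
    state = env_info.vector_observations[0]
    score += env_info.rewards[0]          # +1 yellow banana, -1 blue banana
    if env_info.local_done[0]:            # episode finished
        break
env.close()
```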
The task is episodic, and in order to solve the environment, your agent must get an average score of +13 over 100 consecutive episodes.
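In code, the solve criterion amounts to tracking a 100-episode moving average; a small sketch (the `episode_scores` iterable stands in for whatever your training loop produces):

```python
import numpy as np
from collections import deque

def solved_at(episode_scores, target=13.0, window=100):
    """Return the episode index at which the moving average over the last
    `window` episodes first reaches `target`, or None if it never does."""
    scores_window = deque(maxlen=window)  # keeps only the most recent scores
    for i_episode, score in enumerate(episode_scores, start=1):
        scores_window.append(score)
        if len(scores_window) == window and np.mean(scores_window) >= target:
            return i_episode
    return None
```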
- Download the environment from one of the links below. You need only select the environment that matches your operating system:
  - Linux: click here
  - Mac OSX: click here
  - Windows (32-bit): click here
  - Windows (64-bit): click here

  (For Windows users) Check out this link if you need help determining whether your computer is running a 32-bit or 64-bit version of Windows.

  (For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link to obtain the environment.

- Place the file in the `environments/` folder, and unzip (or decompress) the file.
- [Optional] Create a Conda environment and activate it:

  ```
  (base) ➜ drlnd-p1 git:(master) ✗ conda create --name drlnd-p1 python=3.6
  (base) ➜ drlnd-p1 git:(master) ✗ conda activate drlnd-p1
  ```
- Change into the `python` folder and execute `pip install .` to install the required dependencies.
- Create a custom IPython kernel by executing:

  ```
  $ python -m ipykernel install --user --name drlnd --display-name "drlnd"
  ```
- Start a Jupyter notebook from within the project folder and follow the instructions in `notebooks/Navigation.ipynb` to either
  - train your own agent, or
  - load the model weights and watch the pre-trained agent.

  HINT: make sure to switch from the default Python 3 kernel to "drlnd" (see section Project Setup).
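If you just want to watch the pre-trained agent, the notebook essentially loads the serialized weights from `models/drlnd_p1_model.pth`. A hedged sketch of that step follows; the class name `QNetwork` and its constructor arguments are assumptions here, so check `src/models.py` for the actual names:

```python
import torch
from src.models import QNetwork  # hypothetical import; see src/models.py

# state_size=37 and action_size=4 match the environment described above
model = QNetwork(state_size=37, action_size=4)
model.load_state_dict(torch.load("models/drlnd_p1_model.pth"))
model.eval()  # inference mode for watching the agent
```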
Tested on macOS Big Sur (Version 11.0.1) and Ubuntu 20.04.2 LTS.