- Project description
- Goal
- Dependencies
- How to start
- Result
In this project, I used an environment from Unity ML-Agents. The agent is trained to navigate (and collect bananas) in a large, square world.
A reward of +1 is provided for collecting a yellow banana, and a reward of -1 is provided for collecting a blue banana. The goal is to collect as many yellow bananas as possible while avoiding blue bananas.
The state has 37 dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. The agent has to learn how to best select actions. Four actions are available, corresponding to:
- 0 - move forward
- 1 - move backward
- 2 - turn left
- 3 - turn right
To solve the environment, the agent must achieve an average score of +13 over 100 consecutive episodes.
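The solving criterion above is a rolling average over the last 100 episode scores. A minimal sketch of how that check can be computed (the threshold and window come from this README; the helper name `solved_episode` is my own):

```python
from collections import deque

def solved_episode(scores, window=100, target=13.0):
    """Return the first episode index at which the average score over
    the last `window` episodes reaches `target`, or None if never."""
    recent = deque(maxlen=window)  # keeps only the last `window` scores
    for episode, score in enumerate(scores, start=1):
        recent.append(score)
        if len(recent) == window and sum(recent) / window >= target:
            return episode
    return None

# Example: 100 zero-score episodes followed by 100 episodes scoring 14.0.
# The 100-episode average first reaches 13.0 at episode 193.
scores = [0.0] * 100 + [14.0] * 100
print(solved_episode(scores))  # → 193
```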
- Python 3.6
- NumPy (`pip install numpy`)
- PyTorch
- Unity-ML agents
- Clone this repo
- In this notebook, the environment is imported by running cell No. 2. To run this project locally, you must build your own environment. Below is the link for creating a local environment:
- Execute each cell in this notebook. The average score per 100 episodes is shown after agent training completes.
I implemented a Q-network with three fully connected hidden layers (fc1: 64 units, fc2: 32 units, fc3: 16 units), and the environment was solved in 403 episodes.
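The layer sizes above can be sketched as a PyTorch module. The 37-dimensional state, the 4 actions, and the hidden sizes 64/32/16 come from this README; the class name, ReLU activations, and the final 16-to-4 output layer are my assumptions, not necessarily the exact implementation in this repo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Maps a 37-dim state to Q-values for the 4 actions.

    Hidden sizes match the README: fc1=64, fc2=32, fc3=16.
    """
    def __init__(self, state_size=37, action_size=4):
        super().__init__()
        self.fc1 = nn.Linear(state_size, 64)
        self.fc2 = nn.Linear(64, 32)
        self.fc3 = nn.Linear(32, 16)
        self.out = nn.Linear(16, action_size)  # assumed output head

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        return self.out(x)  # raw Q-values, no activation

# A batch of 5 random states yields a (5, 4) tensor of Q-values.
q = QNetwork()(torch.rand(5, 37))
```

During training the agent would pick `q.argmax(dim=1)` greedily (or a random action under epsilon-greedy exploration).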