by571 / iqn-and-extensions

PyTorch Implementation of Implicit Quantile Networks (IQN) for Distributional Reinforcement Learning with additional extensions like PER, Noisy layer, N-step bootstrapping, Dueling architecture and parallel env support.

License: MIT License

iqn distributional-r rainbow reinforcement-learning-algorithms reinforcement-learning dqn pytorch-implementation implicit-quantile-networks noisy-layer n-step-bootstrapping

iqn-and-extensions's Introduction

DQN-Atari-Agents

Modularized training of different DQN Algorithms.

This repository contains several add-ons to the base DQN algorithm. All versions can be trained from one script and include the option to train from raw pixels or RAM data. Multiprocessing was recently added to run several environments in parallel for faster training.

The following DQN versions are included:

  • DDQN
  • Dueling DDQN

Both can be enhanced with noisy layers, PER (Prioritized Experience Replay), and multi-step targets, and can be trained in a categorical version (C51). Combining all of these add-ons yields the state-of-the-art value-based algorithm known as Rainbow.
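For reference, here is a minimal PyTorch sketch of the double-DQN target that these agents build on. The network handles, variable names, and tensor shapes are illustrative placeholders, not the code used in this repository:

    import torch

    def ddqn_target(q_local, q_target, rewards, next_states, dones, gamma=0.99):
        """Illustrative double-DQN target: the online network selects the greedy
        action and the target network evaluates it, which decouples action
        selection from evaluation and reduces overestimation.
        rewards and dones are assumed to have shape (batch_size, 1)."""
        with torch.no_grad():
            next_actions = q_local(next_states).argmax(dim=1, keepdim=True)  # (B, 1)
            next_q = q_target(next_states).gather(1, next_actions)           # (B, 1)
            return rewards + gamma * next_q * (1.0 - dones)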

Planned Add-ons:

  • Parallel Environments for faster training (wall clock time) [X]
  • Munchausen RL [ ]
  • DRQN (recurrent DQN) [ ]
  • Soft-DQN [ ]
  • Curiosity Exploration [X] currently only for DQN

Train your Agent:

Dependencies

Trained and tested on:

Python 3.6 
PyTorch 1.4.0  
Numpy 1.15.2 
gym 0.10.11 

To train the base DDQN, simply run python run_atari_dqn.py. To train and modify your own Atari agent, the following inputs are optional (a combined example is shown after the list):

example: python run_atari_dqn.py -env BreakoutNoFrameskip-v4 -agent dueling -u 1 -eps_frames 100000 -seed 42 -info Breakout_run1

  • agent: Specify which type of DQN agent you want to train; the default is DQN (baseline). The following agent inputs are currently possible: dqn, dqn+per, noisy_dqn, noisy_dqn+per, dueling, dueling+per, noisy_dueling, noisy_dueling+per, c51, c51+per, noisy_c51, noisy_c51+per, duelingc51, duelingc51+per, noisy_duelingc51, noisy_duelingc51+per, rainbow
  • env: Name of the Atari environment, default = PongNoFrameskip-v4
  • frames: Number of frames to train, default = 5 million
  • seed: Random seed to reproduce training runs, default = 1
  • bs: Batch size for updating the DQN, default = 32
  • layer_size: Size of the hidden layer, default = 512
  • n_step: Number of steps for the multi-step DQN targets
  • eval_every: Evaluate every x frames, default = 50000
  • eval_runs: Number of evaluation runs, default = 5
  • m: Replay memory size, default = 1e5
  • lr: Learning rate, default = 0.00025
  • g: Discount factor gamma, default = 0.99
  • t: Soft update parameter tau, default = 1e-3
  • eps_frames: Linearly annealed frames for epsilon, default = 150000
  • min_eps: Epsilon-greedy annealing crossing point; epsilon anneals quickly until this value, then slowly to 0 over the remaining frames, default = 0.1
  • ic, --intrinsic_curiosity: Adds intrinsic curiosity to the extrinsic reward. 0 = reward only and no curiosity, 1 = reward and curiosity, 2 = curiosity only, default = 0
  • info: Name of the training run.
  • fill_buffer: Adds samples to the replay buffer based on a random policy before agent-environment interaction. Input the number of pre-added frames to the buffer, default = 50000
  • save_model: Specify whether the trained network should be saved [1] or not [0], default is 1 (saved)
  • w, --worker: Number of parallel environments
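As an illustration, a run combining several of the options above might look like the following. This assumes each option listed above maps to a CLI flag of the same name; the environment, frame count, and run label are just examples:

example: python run_atari_dqn.py -env BreakoutNoFrameskip-v4 -agent rainbow -frames 5000000 -bs 32 -eps_frames 150000 -seed 42 -info Breakout_rainbow_run1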

Training progress can be viewed with TensorBoard.

Just run tensorboard --logdir=runs/

Atari Games Performance:

Pong:

Hyperparameters (an illustrative command follows the list):

  • batch_size: 32
  • seed: 1
  • layer_size: 512
  • frames: 300000
  • lr: 1e-4
  • m: 10000
  • g: 0.99
  • t: 1e-3
  • eps_frames: 100000
  • min_eps: 0.01
  • fill_buffer: 10000
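Assuming each hyperparameter above maps to the CLI flag of the same name, a command reproducing this run might look roughly like the following. The agent type and run name are placeholders, since they are not listed above:

example: python run_atari_dqn.py -env PongNoFrameskip-v4 -agent dqn -frames 300000 -lr 1e-4 -m 10000 -eps_frames 100000 -min_eps 0.01 -fill_buffer 10000 -seed 1 -info Pong_run1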

[figure: Pong]

Convergence proof for the CartPole environment

Since training the algorithms on Atari takes a lot of time, I added a quick convergence proof for the CartPole-v0 environment. You can clearly see that Rainbow outperforms the other two methods, Dueling DQN and DDQN.

[figure: rainbow]

To reproduce the results, the following hyperparameters were used:

  • batch_size: 32
  • seed: 1
  • layer_size: 512
  • frames: 30000
  • lr: 1e-3
  • m: 500000
  • g: 0.99
  • t: 1e-3
  • eps_frames: 1000
  • min_eps: 0.1
  • fill_buffer: 50000

It's interesting to see that the add-ons have a negative impact on the very simple CartPole environment. Still, the Dueling DDQN version clearly performs better than the standard DDQN version.

[figure: dqn]

[figure: dueling]

Parallel Environments

To reduce wall-clock time during training, parallel environments are implemented. The following diagrams show the speed improvement for the two environments CartPole-v0 and LunarLander-v2, tested with 1, 2, 4, 6, 8, 10, and 16 workers. Each worker count was tested over 3 seeds.
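The number of parallel environments is set with the -w/--worker option described above; a hypothetical eight-worker run could look like:

example: python run_atari_dqn.py -env PongNoFrameskip-v4 -agent dqn -w 8 -info Pong_8workers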

Convergence behavior for each worker count can be found here: CartPole-v0 and LunarLander.

Help and issues:

I'm open to feedback, bug reports, improvements, or anything else. Just leave me a message or contact me.

Paper references:

Author

  • Sebastian Dittert

Feel free to use this code for your own projects or research. For citation:

@misc{DQN-Atari-Agents,
  author = {Dittert, Sebastian},
  title = {DQN-Atari-Agents:   Modularized PyTorch implementation of several DQN Agents, i.a. DDQN, Dueling DQN, Noisy DQN, C51, Rainbow and DRQN},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/BY571/DQN-Atari-Agents}},
}


iqn-and-extensions's Issues

Some questions on

Hello, Dittert,
I'm sorry to disturb you. I want to ask you some questions about your code.
1. What is the meaning of "Munchausen RL"? I searched the net but couldn't find anything about it. If I just start a normal experiment with noisy_iqn_per, should I take the "if not self.munchausen:" branch?
2. I can't open your notebook "IQN-DQN.ipynb"; is something wrong with it?
I like Germany very much, but my German is not good. Thank you very much!
Best regards,
Yours, Lin Yuan

BUG

Hello, Dittert
I'm here again. After several tests, I found that your code has a memory leak severe enough that it gets killed by the system after only a few hours of running.
I tested it with 'BreakoutDeterministic-v4', 'SpaceInvadersDeterministic-v4', and 'BreakoutNoFrameskip-v4'. All runs were killed by the system.
In just two hours, it takes up 62 GB of virtual memory. I didn't make any changes to your code.

I checked it with a memory profiler and found that the 'writer' was never closed, so I added writer.close() after it is used. Things seemed to get better, but the leak is still there, and I couldn't find the problem. It seems to come from action = agent.act(state, eps) and agent.step(state, action, reward, next_state, done, writer, step, G_x) in run.py.

IQN-DQN.ipynb max over taus instead of max over actions?

Hello,

Given that forward() returns the tuple:
return out.view(batch_size, num_tau, self.num_actions), taus

Should we use .max(1) instead of .max(2) ?
Currently it is:
Q_targets_next = Q_targets_next.detach().max(2)[0].unsqueeze(1) # (batch_size, 1, N)
Maybe it should be:
Q_targets_next = Q_targets_next.detach().max(1)[0].unsqueeze(1) # (batch_size, 1, numActions)

In other words, to find the maximum in every tau group, rather than across every action?
Sorry if I misunderstood the process.

def learn(self, experiences):
        """Update value parameters using given batch of experience tuples.
        Params
        ======
            experiences (Tuple[torch.Tensor]): tuple of (s, a, r, s', done) tuples 
            gamma (float): discount factor
        """
        self.optimizer.zero_grad()
        states, actions, rewards, next_states, dones = experiences
        # Get max predicted Q values (for next states) from target model
        Q_targets_next, _ = self.qnetwork_target(next_states)
        Q_targets_next = Q_targets_next.detach().max(2)[0].unsqueeze(1) # (batch_size, 1, N)    <-------------------------------------- HERE
        
        # Compute Q targets for current states 
        Q_targets = rewards.unsqueeze(-1) + (self.GAMMA**self.n_step * Q_targets_next * (1. - dones.unsqueeze(-1)))
        # Get expected Q values from local model
        Q_expected, taus = self.qnetwork_local(states)
        Q_expected = Q_expected.gather(2, actions.unsqueeze(-1).expand(self.BATCH_SIZE, 8, 1))

        # Quantile Huber loss
        td_error = Q_targets - Q_expected
        assert td_error.shape == (self.BATCH_SIZE, 8, 8), "wrong td error shape"
        huber_l = calculate_huber_loss(td_error, 1.0)
        quantil_l = abs(taus -(td_error.detach() < 0).float()) * huber_l / 1.0
        
        loss = quantil_l.sum(dim=1).mean(dim=1) # , keepdim=True if per weights get multipl
        loss = loss.mean()


        # Minimize the loss
        loss.backward()
        #clip_grad_norm_(self.qnetwork_local.parameters(),1)
        self.optimizer.step()

        # ------------------- update target network ------------------- #
        self.soft_update(self.qnetwork_local, self.qnetwork_target)
        return loss.detach().cpu().numpy()            

Could you add multiple gpu support to iqn script?

Hello, Dittert,

I'm sorry to disturb you again. I'm very interested in your work. Could you tell me how to add multi-GPU support to this IQN script? I tried it (just like the following code), but it throws some errors.

device = torch.device("cuda:2" if torch.cuda.is_available() else "cpu")
if torch.cuda.device_count() > 1:  # by thswind
    agent = torch.nn.DataParallel(agent, device_ids=[2,3,4])  # by thswind
agent.to(device)  # by thswind

I hope you are willing to help me; I want to stand on your shoulders the way one stands on the shoulders of giants.

question about the difference between notebook version and script version

Hello, I'm sorry to disturb you.
I want to ask about the difference in "get_action" between the notebook version and the script version. Specifically, "get_action" in the notebook version calls self.forward(inputs, self.K), but in the script version (namely in class IQN in modul.py) it calls self.forward(inputs, self.N). And I cannot find any use of self.K in agent.py.
From my perspective, I think the notebook version is correct; namely, we should use K to get the action. So I think the script version is wrong, because it uses N in get_action instead of K. (Although the paper says that IQN is not sensitive to the value of K.)
