Maximum Entropy Deep Inverse Reinforcement Learning

The purpose of this repository is to get a taste of Inverse Reinforcement Learning. To replicate an expert's behavior in a reward-free environment, the reward function has to be recovered from demonstrations. Hence, for the implemented GridWorld and ObjectWorld environments, reward functions were recovered using Maximum Entropy IRL with a linear approximator and Deep Maximum Entropy IRL with a complex, non-linear reward function.
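
As a rough illustration of the Deep MaxEnt idea, the sketch below assumes a small PyTorch reward network and pre-computed expert and expected state-visitation frequencies; the names (RewardNet, deep_maxent_step, expert_svf, expected_svf) are hypothetical and not this repository's actual API.

```python
# Minimal sketch of Deep MaxEnt IRL (illustrative only; names are hypothetical).
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Non-linear reward approximator: maps state features to a scalar reward."""
    def __init__(self, n_features, n_hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 1),
        )

    def forward(self, features):               # features: (n_states, n_features)
        return self.net(features).squeeze(-1)  # rewards:  (n_states,)

def deep_maxent_step(reward_net, optimizer, features, expert_svf, expected_svf):
    """One update. The MaxEnt log-likelihood gradient w.r.t. the rewards is
    (expert state-visitation freq. - expected state-visitation freq.), where the
    expected SVF comes from the soft-optimal policy under the current rewards."""
    optimizer.zero_grad()
    rewards = reward_net(features)
    # Surrogate loss whose gradient w.r.t. `rewards` is (expected_svf - expert_svf);
    # minimizing it performs gradient ascent on the MaxEnt objective.
    loss = torch.sum((expected_svf - expert_svf) * rewards)
    loss.backward()
    optimizer.step()
    return rewards.detach()
```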

Requirements

  • PyTorch

Contents

  • GridWorld Env
  • ObjectWorld Env
  • Maximum Entropy [1]
  • Deep Maximum Entropy [2]

Experiments

GridWorld Env

This environment is a rectangular grid. The cells of the grid correspond to the states of the environment. At each cell, four actions are possible: north, south, east, and west, which deterministically cause the agent to move one cell in the respective direction on the grid. Actions that would take the agent off the grid leave its location unchanged. [source]
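
A hedged sketch of these dynamics (the repository's actual GridWorld class may differ):

```python
# Deterministic GridWorld step as described above (illustrative helper).
def gridworld_step(state, action, width, height):
    """Move one cell in the chosen direction; off-grid moves leave the state unchanged."""
    x, y = state
    moves = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}
    dx, dy = moves[action]
    nx, ny = x + dx, y + dy
    if 0 <= nx < width and 0 <= ny < height:
        return (nx, ny)
    return (x, y)  # action would leave the grid: stay in place
```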

(Figure: real-rewards, ground-truth GridWorld rewards)

MaxEnt

(Video: ME.mp4, MaxEnt recovered rewards on GridWorld)

Deep MaxEnt

(Video: DME.mp4, Deep MaxEnt recovered rewards on GridWorld)

ObjectWorld Env

The ObjectWorld is an N×N grid of states with five actions per state, corresponding to steps in each direction and staying in place. Each action has a 30% chance of moving in a different random direction. Randomly placed objects populate the ObjectWorld, and each is assigned one of C inner and outer colors. Object placement is randomized in the transfer environments, while N and C remain the same. There are 2C continuous features, each giving the Euclidean distance to the nearest object with a specific inner or outer color. In the discrete feature case, there are 2CN binary features, each an indicator for a corresponding continuous feature being less than d ∈ {1, ..., N}. The true reward is positive in states that are both within 3 cells of outer color 1 and 2 cells of outer color 2, negative within 3 cells of outer color 1, and zero otherwise. Inner colors and all other outer colors are distractors. [3]
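
As a concrete reading of that description, the sketch below computes the 2C continuous features and the true reward for a single cell. The helpers are hypothetical (not the repository's implementation), and colors are 0-indexed here.

```python
# Illustrative ObjectWorld features and true reward (hypothetical helpers).
import numpy as np

def continuous_features(cell, objects, n_colors):
    """2C features: distance to the nearest object of each outer and inner color.
    `objects` is a list of (x, y, outer_color, inner_color) tuples."""
    feats = np.full(2 * n_colors, np.inf)
    for (ox, oy, outer, inner) in objects:
        d = np.hypot(cell[0] - ox, cell[1] - oy)
        feats[outer] = min(feats[outer], d)                        # outer-color block
        feats[n_colors + inner] = min(feats[n_colors + inner], d)  # inner-color block
    return feats

def true_reward(cell, objects, n_colors):
    """+1 within 3 cells of outer color 0 and 2 cells of outer color 1,
    -1 within 3 cells of outer color 0 only, 0 otherwise."""
    feats = continuous_features(cell, objects, n_colors)
    near_c1, near_c2 = feats[0] <= 3, feats[1] <= 2
    if near_c1 and near_c2:
        return 1.0
    if near_c1:
        return -1.0
    return 0.0
```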

MaxEnt

(Figure: ow-real-me, ObjectWorld ground-truth and MaxEnt recovered rewards)

(Video: OW-ME.mp4, MaxEnt on ObjectWorld)

Deep MaxEnt

(Figure: OW-real, ObjectWorld ground-truth rewards)

(Video: OW-DME.mp4, Deep MaxEnt on ObjectWorld)

References

  1. Thanh, H. V., An, L. T. H. & Chien, B. D. Maximum Entropy Inverse Reinforcement Learning. In Lecture Notes in Computer Science, vol. 9622, pp. 661–670 (2016).
  2. Wulfmeier, M., Ondruska, P. & Posner, I. Maximum Entropy Deep Inverse Reinforcement Learning. arXiv preprint (2015).
  3. Levine, S., Popović, Z. & Koltun, V. Nonlinear Inverse Reinforcement Learning with Gaussian Processes. In Advances in Neural Information Processing Systems 24 (NIPS 2011), pp. 1–9 (2011).
