
video2command

A PyTorch implementation of the video-to-command model described in the paper:

"Translating Videos to Commands for Robotic Manipulation with Deep Recurrent Neural Networks", ICRA 2018. The author's original TensorFlow implementation is also available.

Requirements

  • PyTorch (tested on 1.3)
  • TorchVision
  • numpy
  • PIL
  • coco-caption (a modified version is used to support Python 3)
  • OpenCV (optional, needed only if you extract features from your own videos)

Introduction

The video2command model is an encoder-decoder neural network that learns to generate a short sentence which can be used to command a robot to perform various manipulation tasks.

Compared to the architecture used in the original implementation, this implementation takes more inspiration from the seq2seq architecture: the final state of the video encoder is injected directly into the command decoder. This yields a 2–3% improvement in the BLEU-1 through BLEU-4 scores.
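The state-injection idea can be sketched as follows. This is an illustrative minimal sketch, not the repository's actual API: the module names, layer sizes, and vocabulary size below are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class Video2Command(nn.Module):
    """Illustrative encoder-decoder (hypothetical names/sizes): the video
    encoder's final LSTM state initializes the command decoder, i.e.
    seq2seq-style state injection instead of concatenating states."""

    def __init__(self, feat_dim=2048, hidden_dim=512, vocab_size=1000, embed_dim=256):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, clip_feats, captions):
        # clip_feats: (B, T_video, feat_dim) per-frame CNN features
        _, state = self.encoder(clip_feats)    # final (h, c) of the encoder
        emb = self.embed(captions)             # (B, T_cmd, embed_dim)
        dec_out, _ = self.decoder(emb, state)  # decoder starts from encoder state
        return self.out(dec_out)               # (B, T_cmd, vocab_size) logits

model = Video2Command()
logits = model(torch.randn(2, 30, 2048), torch.randint(0, 1000, (2, 10)))
print(logits.shape)  # torch.Size([2, 10, 1000])
```

At training time the decoder would be fed ground-truth command tokens (teacher forcing); at inference it would decode greedily or with beam search from a start token.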

Experiment

To repeat the video2command experiment:

  1. Clone the repository.

  2. Download the IIT-V2C dataset, extract it, and set up the directory path as datasets/IIT-V2C.

  3. For CNN features, two options are provided:

    • Use the pre-extracted ResNet50 features provided by the original author.

    • Perform feature extraction yourself. First, run avi2frames.py under the folder experiments/experiment_IIT-V2C to convert all videos into images. Download the *.pth weights for ResNet50 converted from Caffe, then run extract_features.py under the same folder.

    • Note that the author's pre-extracted features appear to be of better quality and can lead to metric scores that are 1–2% higher.

  4. To begin training, run train_iit-v2c.py.

  5. For evaluation, first run evaluate_iit-v2c.py to generate predictions from all saved checkpoints, then run cocoeval_iit-v2c.py to compute scores for the predictions.

Additional Note

If you find this repository useful, please give it a star. If you find any potential bugs in the code, please open an issue.

References

Some references that helped with this implementation:

https://github.com/nqanh/video2command

https://github.com/ruotianluo/pytorch-resnet
