
Gimme Signals

This repository contains the action recognition approach as presented in the Gimme Signals paper.

Gimme Signals Overview

  • A preprint can be found on arXiv.
  • A video abstract is available on YouTube.

Video

Gimme Signal Video

In case the video does not play, you can download it here.

Citation

@inproceedings{Memmesheimer2020GSD,
  author    = {Memmesheimer, Raphael and Theisen, Nick and Paulus, Dietrich},
  title     = {Gimme Signals: Discriminative signal encoding for multimodal activity recognition},
  year      = {2020},
  booktitle = {2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  address   = {Las Vegas, NV, USA},
  publisher = {IEEE},
  doi       = {10.1109/IROS45743.2020.9341699},
  isbn      = {978-1-7281-6213-3},
}

Requirements

  • pytorch, torchvision, pytorch-lightning, hydra-core
  • pip install -r requirements.txt

The following command installs supported torch and torchvision versions in case you get a CUDA kernel issue:

pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 -f https://download.pytorch.org/whl/torch_stable.html

Generate Representation

Example code to generate representations for the NTU dataset:

python generate_representation_ntu.py <ntu_skeleton_dir> $DATASET_FOLDER <split>

where <split> is one of cross_subject, cross_setup, or one_shot.

The generated representations must be placed inside the directory that the $DATASET_FOLDER environment variable points to.
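The approach encodes signal sequences (skeleton joints, inertial measurements, etc.) as image representations that a CNN then classifies. As a rough illustration of that idea, here is a minimal, simplified sketch — not the repository's actual encoding; the function name and the nearest-neighbor resampling are assumptions made for this example:

```python
# Simplified sketch (NOT the paper's exact encoding): map a multivariate
# signal sequence to a fixed-width "image" where each row is one signal
# channel, min-max normalized to 0-255, so a CNN classifier can consume it.

def encode_signals_as_image(signals, width=64):
    """signals: list of channels, each a list of floats (one per timestep).
    Returns one row of ints in [0, 255] per channel, resampled to
    `width` columns by nearest-neighbor so channels of different
    lengths (e.g. skeleton vs. IMU rates) align."""
    image = []
    for channel in signals:
        lo, hi = min(channel), max(channel)
        span = (hi - lo) or 1.0  # avoid division by zero on flat channels
        row = []
        for x in range(width):
            t = int(x * len(channel) / width)  # nearest-neighbor resample
            value = (channel[t] - lo) / span   # normalize to [0, 1]
            row.append(int(round(value * 255)))
        image.append(row)
    return image

# Example: two synthetic channels of different lengths (standing in for
# one joint coordinate and one inertial axis) become a 2x64 intensity image.
import math
skeleton_x = [math.sin(t / 10.0) for t in range(90)]
imu_z = [math.cos(t / 7.0) for t in range(120)]
img = encode_signals_as_image([skeleton_x, imu_z])
print(len(img), len(img[0]))  # 2 64
```

The real pipeline renders richer representations (and the repository's generation scripts handle the dataset-specific file formats), but the normalize-and-rasterize step above captures the gist of feeding arbitrary signals to an image classifier.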

Precalculated representations

We provide precalculated representations for reproducing intermediate results.

Train

Example:

Simitate

python train.py dataset=simitate model_name=efficientnet learning_rate=0.1 net="efficientnet"

For example, this command trains on the Simitate dataset.
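The key=value arguments in the command above are hydra-core overrides: each one replaces the corresponding field of the composed training config for that run. A hypothetical sketch of such a config — the field names mirror the overrides shown, but the file layout is an assumption, not taken from the repository:

```yaml
# Hypothetical hydra config sketch -- layout assumed for illustration.
dataset: simitate        # which precalculated representation set to load
model_name: efficientnet # label used for logging/checkpoints
net: efficientnet        # backbone architecture
learning_rate: 0.1       # overridable from the CLI, e.g. learning_rate=0.01
```

Any field can be changed per run from the command line without editing the file, e.g. appending learning_rate=0.01 to the train.py invocation.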


Issues

What data augmentation is used in this project?

I have trained the EfficientNet on the precalculated representations of UTD-MHAD (Inertial and Skeleton) provided by the project, and on UTD-MHAD with Inertial and Skeleton fused using nb_generate.py. However, the training results are inconsistent with the paper. Checking the code, I found that the transform config is "minimal", which means no transform is used. Could you tell me what transforms were used in the paper? I would also like to know the approach_id and attention settings used to fuse Inertial and Skeleton with nbgenerate.npy. Thanks for your help.
