
step's Introduction

This is the official implementation of the paper STEP: Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits. Please use the following citation if you find our work useful:

@inproceedings{bhattacharya2020step,
author = {Bhattacharya, Uttaran and Mittal, Trisha and Chandra, Rohan and Randhavane, Tanmay and Bera, Aniket and Manocha, Dinesh},
title = {STEP: Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits},
year = {2020},
publisher = {AAAI Press},
booktitle = {Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence},
pages = {1342–1350},
numpages = {9},
series = {AAAI’20}
}

We have also released the Emotion-Gait dataset with this code, which is available for download here: https://go.umd.edu/emotion-gait.

  1. generator_cvae is the generator.

  2. classifier_stgcn_real_only is the baseline classifier using only the 342 real gaits.

  3. classifier_stgcn_real_and_synth is the baseline classifier using both the 342 real gaits and N synthetic gaits.

  4. classifier_hybrid is the hybrid classifier using both deep and physiologically-motivated features.

  5. compute_aff_features consists of the set of scripts to compute the affective features from 16-joint pose sequences. Calling main.py with the correct data path computes the features and saves them in the affectiveFeatures<f_type>.h5 file, where f_type is the desired type of features:

    • '': original data (default)
    • '4DCVAEGCN': data generated by the CVAE.
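As a rough sketch of how to inspect the output (assuming, though it is not confirmed here, that the file stores one feature array per gait sample, keyed by sample index), the file can be read with h5py:

import h5py

f_type = ''  # '' for original data, '4DCVAEGCN' for gaits generated by the CVAE
with h5py.File('affectiveFeatures{}.h5'.format(f_type), 'r') as f:
    keys = list(f.keys())                     # assumed layout: one dataset per gait sample
    print('number of samples:', len(keys))
    feats = f[keys[0]][()]                    # the paper describes 29-dim affective features
    print('feature shape of first sample:', feats.shape)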

step's People

Contributors

uttaranb127


step's Issues

Why do you have 6 labels in the dataset?

Hi,

Thank you very much for sharing your code.

I am wondering why the dataset has 6 labels when only 4 emotions are used in the paper.

Thanks!
Xingchen

Neighbor link in graph.py

Dear Uttaran, I want to understand how you created the neighbor_link (in the graph.py script) for your dataset.
Could you explain your idea using the OpenPose example, please?

num_node = 18  # OpenPose outputs 18 body keypoints
self_link = [(i, i) for i in range(num_node)]
neighbor_link = [(4, 3), (3, 2), (7, 6), (6, 5), (13, 12), (12, 11),
                 (10, 9), (9, 8), (11, 5), (8, 2), (5, 1), (2, 1),
                 (0, 1), (15, 0), (14, 0), (17, 15), (16, 14)]
edge = self_link + neighbor_link
center = 1  # the neck joint, used as the center of the skeleton
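For context, here is a minimal sketch (not the repository's actual Graph class) of how such an edge list is typically turned into an adjacency matrix in ST-GCN-style code, continuing from the snippet above:

import numpy as np

# Symmetric adjacency built from the self-links and neighbor links above;
# node indices follow the 18-keypoint OpenPose layout, with the neck (index 1) as center.
A = np.zeros((num_node, num_node))
for i, j in edge:
    A[i, j] = 1
    A[j, i] = 1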

(attached: OpenPose skeleton layout image)

Affective Features code

Hello,

I read this paper recently and wanted to try my hand at it. Thank you for providing the code.

How do I obtain the affective features for a custom dataset?

Regards
Abhishek

Application of these codes

Hello!
Thank you very much for your work!

I'm sorry if these questions are annoying.

I wanted to build on your work in my project and have run into some problems. Since I have no previous experience with machine learning and Python, I don't know how to use this code to predict the corresponding emotion based on a video of a person walking.
Could you please tell me:

  1. Does this code need to be run on extracted gait data rather than on a gait video?

  2. If I want to input a gait video to get the person's emotion, such as happy or angry, what more needs to be done?

Excuse me. I need your help regarding data partitioning

Hi, Uttaran
I have some questions about your paper and code.

  1. In your paper, training set : validation set : test set = 7:2:1.
    But in the released code it is training : test = 9:1. Which one should I use? (A minimal split sketch follows this list.)

  2. Your paper describes that adding the 29-dim affective features gives a large improvement in classification accuracy, but when I run your code the results actually get worse, with heavy overfitting. What could be the reason for this?
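For reference, a minimal sketch of a stratified 7:2:1 split with scikit-learn (the array names and shapes are hypothetical; the repository's own loader may split differently):

import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 75, 48)   # hypothetical gait samples: (samples, frames, 16 joints x 3 coords)
y = np.repeat(np.arange(4), 25)   # hypothetical labels for 4 emotion classes

# Hold out 10% for test, then take 2/9 of the remainder (~20% overall) for validation,
# leaving ~70% for training, i.e. the 7:2:1 split described in the paper.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.1, stratify=y, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=2/9, stratify=y_rest, random_state=0)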

Thank you! : )

Questions about the results

Hi,

Thank you for your work and code!

I have a question on Fig. 6 of your paper.

In the caption, you said you were showing the results over the 3177 gaits, but in the figure the totals of the 4 classes (Angry 1073, Happy 565, Sad 65 and Neutral 232) sum to 1935, not 3177. In addition, I think you trained the model on the E-gaits dataset. In that case, are you reporting results on all of the E-gaits data rather than only on the test set? I don't think your test set has that many samples.

Finally, as shown in Fig. 6, your sample classes are highly imbalanced. I am wondering whether your training set is also imbalanced and, in that case, how meaningful accuracy is as the evaluation metric.
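On the imbalance point, per-class metrics are a common complement to plain accuracy; a minimal sketch with scikit-learn (the label arrays are hypothetical, not outputs from this repository):

import numpy as np
from sklearn.metrics import balanced_accuracy_score, confusion_matrix

y_true = np.array([0, 0, 0, 0, 1, 1, 2, 3])   # hypothetical ground-truth emotion labels
y_pred = np.array([0, 0, 0, 1, 1, 1, 2, 0])   # hypothetical predictions

# Balanced accuracy averages recall over classes, so a dominant class cannot inflate the score.
print(balanced_accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))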

Sorry for so many questions. I am not sure if I misunderstood something. Could you please kindly help to clarify?

Many thanks!
Xingchen

Question regarding the data for training and testing

Hello,

Thank you for your incredible work. I have a question regarding the data provided for training (code).

  1. From the paper I understand that both the gait features and the affective features are used for classification, but from the code it seems that only the affective features are split into data_training and data_testing. Or am I mistaken?

Thanks

Excuse me, a question about dataset.

Hello, Uttaran!
I was wondering whether the full dataset of 4227 samples is available now.
If not, when will it be released?
I'm looking forward to your reply.

Some questions about the dataset

Hello sir, I have found that the size of the dataset described in the paper differs from the one on the website.
In the paper, there are 5227 = 4227 (real) + 1000 (synthetic) samples. However, I count 6177 = 2177 (real) + 4000 (synthetic) on the website. Please tell me the reason.

Some questions about the dataset

Hi,

Thanks for the code and dataset you have offered! I find myself really interested in your work!

I have some questions about the dataset as follows:

  1. I want to visualize the motion sequences in the dataset, but I find that the root joint (joint #0) is not at the origin of the coordinates, which is different from the ELMD dataset setting (visualization in the first figure below). Also, when I try relocating the root joint to the origin, the scales and viewing angles of the gaits vary a lot (visualization in the second figure below). I wonder whether I need to do some preprocessing before visualizing the gaits (a minimal re-rooting sketch follows this list).

  2. The synthetic gaits in the dataset look a little strange. After visualization, it looks like the person is dragging his/her legs rather than walking.

  3. I wonder how the emotion labels were annotated. Were they annotated by pretrained models or manually?
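As the sketch referenced in item 1, a minimal way to re-root a 16-joint sequence for visualization (the array shape and joint indices are assumptions, not the repository's loader):

import numpy as np

seq = np.random.rand(75, 16, 3)   # hypothetical gait: (frames, joints, xyz), joint 0 = root

# Translate every frame so the root joint sits at the origin; this removes global
# translation while keeping the relative pose, which is usually enough for plotting.
seq_centered = seq - seq[:, 0:1, :]

# Optionally normalize the scale (here by the mean distance from the root to joint 1,
# assumed to be the spine) so different gaits render at a comparable size.
scale = np.linalg.norm(seq_centered[:, 1, :], axis=-1).mean()
seq_normalized = seq_centered / (scale + 1e-8)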

Many thanks and best wishes!
Shuhong

Useless

elmd

Excuse me, some new questions about the datasets.

Hello Uttaran, I have some questions about the datasets that you posted on Google Cloud.
I found that the real gaits on Google Cloud have 112+1048 angry samples, 33+454 neutral samples, 78+254 happy samples and 119+79 sad samples respectively, where 342 were collected by you and the remaining 1,835 were taken from ELMD.
Firstly, in the paper there are 4227 real samples, but here the number of neutral/angry samples is smaller. Is there much difference between the dataset in the paper and the one on Google Cloud?
Secondly, the input gaits were converted to 21-joint skeletons in the paper, but the GitHub code uses 16 joints.
Finally, may I ask when you will be able to upload the latest datasets?
Looking forward to your reply.
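For what it is worth, the per-class counts quoted above do sum to the 2,177 real gaits mentioned in an earlier issue: (112+1048) + (33+454) + (78+254) + (119+79) = 1160 + 487 + 332 + 198 = 2177 = 342 + 1835.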

I can't run this code

There are a lot of problems when I run the code. Could you provide information about your experimental environment? Thanks!
