
SMIE

This is an official PyTorch implementation of "Zero-shot Skeleton-based Action Recognition via Mutual Information Estimation and Maximization" (ACM MM 2023).

[Paper]

Framework

(Framework overview of SMIE; see the figure in the repository.)
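
As a rough intuition for the objective: one common way to estimate and maximize the mutual information between paired features is a Jensen-Shannon-style lower bound with a small learned discriminator. The sketch below is only a minimal illustration of that idea, not the exact estimator or architecture used in the paper; all names and dimensions (MIEstimator, v_dim, s_dim) are placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MIEstimator(nn.Module):
    """Scores (visual, semantic) pairs; higher means more likely a true pair."""
    def __init__(self, v_dim=256, s_dim=768, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(v_dim + s_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, v, s):
        return self.net(torch.cat([v, s], dim=-1))

def jsd_mi_lower_bound(estimator, v, s):
    """Jensen-Shannon MI lower bound: joint pairs scored high, shuffled pairs low."""
    pos = estimator(v, s)                              # aligned (joint) pairs
    neg = estimator(v, s[torch.randperm(s.size(0))])   # mismatched (marginal) pairs
    return (-F.softplus(-pos)).mean() - F.softplus(neg).mean()

# Training maximizes the bound, i.e. minimizes its negative:
# loss = -jsd_mi_lower_bound(mi_net, visual_feats, semantic_feats)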

Requirements

python = 3.7
torch = 1.11.0+cu113

Installation

# Install the python libraries
$ cd SMIE
$ pip install -r requirements.txt

Data Preparation

We apply the same dataset processing as AimCLR.
You can also download the pre-processed skeleton data from the BaiduYun link (extraction code: pstl).

Semantic Features

You can download the semantic features from the BaiduYun link: Semantic Feature (extraction code: smie).

  • [dataset]_embeddings.npy: label-name embeddings from Sentence-BERT.
  • [dataset]_clip_embeddings.npy: label-name embeddings from CLIP.
  • [dataset]_des_embeddings.npy: label-description embeddings from Sentence-BERT.

Put the semantic features in the folder ./data/language/.
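
If you prefer to generate the semantic features yourself instead of downloading them, a minimal sketch with the sentence-transformers package is shown below. The exact Sentence-BERT variant used by the authors is not specified here, so the model name, label file, and output file name are assumptions; the CLIP-based features can be produced analogously with a CLIP text encoder.

import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed model choice; the paper's exact Sentence-BERT variant may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Assumed input: one action label name per line, in class-index order.
with open("labels.txt") as f:
    labels = [line.strip() for line in f if line.strip()]

embeddings = model.encode(labels)  # shape: (num_classes, dim)
np.save("./data/language/ntu60_embeddings.npy", embeddings)  # [dataset]_embeddings.npy pattern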

Label Descriptions

We use ChatGPT to expand each action label name into a complete action description. The full set of label descriptions is provided in the repository.
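
The exact prompt is not documented in this repository; purely as an illustration, a description could be generated along the following lines with the openai Python client (the prompt wording and model choice are hypothetical):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

label = "drink water"  # an NTU-60 action label name
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        # Hypothetical prompt; the authors' actual prompt may differ.
        "content": f"Expand the action label '{label}' into one sentence "
                   "describing the body movements involved.",
    }],
)
print(resp.choices[0].message.content)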

Different Experiment Settings

Our SMIE employs two experiment settings:

  • SynSE Experiment Setting: two datasets are used, with split_5 and split_12 on NTU-60 and split_10 and split_24 on NTU-120. The visual feature extractor is Shift-GCN.
  • Optimized Experiment Setting: three datasets are used (NTU-60, NTU-120, PKU-MMD), and each dataset has three random splits. The visual feature extractor is the classical ST-GCN, chosen to minimize the impact of the feature extractor and focus on the connection model.

SynSE Experiment Setting

To compare with the SOTA method SynSE, we first apply their zero-shot class splits for the SynSE Experiment Setting. You can download the visual features from their repo, or from our BaiduYun link: SOTA visual features (extraction code: smie).

Example of training and testing on the NTU-60 split_5 data:

# SynSE Experiment Setting
$ python procedure.py with 'train_mode="sota"'

You can also choose a different split id in config.py (the SOTA-compare part).
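
For reference, the split selection in the SOTA-compare part of config.py can be expected to look roughly like the snippet below; the variable names and the id-to-split mapping are illustrative only, so check the actual file before editing.

# Illustrative only: the real names in config.py may differ.
split_id = 0        # e.g. 0/1 -> NTU-60 split_5/split_12, 2/3 -> NTU-120 split_10/split_24
unseen_classes = 5  # 5 or 12 for NTU-60; 10 or 24 for NTU-120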

Optimized Experiment Setting

Seen and Unseen Classes Splits

For different class splits, change the split_id in split.py, then run split.py to obtain the split data for the chosen seen and unseen classes.

# class-split
$ python split.py
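
Conceptually, a zero-shot class split just partitions the label set and filters the samples accordingly. The sketch below shows the idea; the input/output file names, array layout, and the use of a seeded random choice (the README mentions three random splits per dataset) are assumptions, not the actual contents of split.py.

import numpy as np

split_id = 1  # mirrors the split_id setting in split.py
rng = np.random.default_rng(split_id)

num_classes, num_unseen = 60, 5  # e.g. NTU-60 with 5 unseen classes
unseen = rng.choice(num_classes, size=num_unseen, replace=False)
seen = np.setdiff1d(np.arange(num_classes), unseen)

data = np.load("ntu60_data.npy")      # assumed file: (N, ...) skeleton samples
labels = np.load("ntu60_labels.npy")  # assumed file: (N,) class indices

seen_mask = np.isin(labels, seen)
np.save(f"split_{split_id}_seen_data.npy", data[seen_mask])
np.save(f"split_{split_id}_unseen_data.npy", data[~seen_mask])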

Acquire the Visual Features

Refer to Generate_Feature.
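
The feature-generation step amounts to running a skeleton encoder pre-trained on the seen classes over every sequence and saving the pooled, pre-classifier representations. The sketch below only captures that flow; the model import, constructor, checkpoint path, and encode method are placeholders for whatever Generate_Feature actually provides.

import numpy as np
import torch

from stgcn_model import STGCN  # placeholder import for the real ST-GCN

model = STGCN(num_classes=60)  # placeholder constructor
model.load_state_dict(torch.load("stgcn_pretrained.pt", map_location="cpu"))
model.eval()

# Assumed layout: (N, C, T, V, M) skeleton tensors for one split.
data = torch.from_numpy(np.load("split_1_unseen_data.npy")).float()

with torch.no_grad():
    feats = model.encode(data)  # placeholder: pooled pre-classifier features
np.save("split_1_unseen_features.npy", feats.numpy())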

Training & Testing

Example of training and testing on NTU-60 split_1.
You can change settings in config.py.

# Optimized Experiment Setting
$ python procedure.py with 'train_mode="main"'
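
procedure.py appears to use sacred-style command-line config updates (the with key=value syntax), so multiple settings can in principle be overridden in one call. The split_id key below is hypothetical; the actual config entries are defined in config.py.

# Hypothetical combined override; check config.py for the real keys.
$ python procedure.py with 'train_mode="main"' 'split_id=1'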

Reference

If you find our paper and repo useful, please cite our paper. Thanks!

@inproceedings{zhou2023zero,
  title={Zero-shot Skeleton-based Action Recognition via Mutual Information Estimation and Maximization},
  author={Zhou, Yujie and Qiang, Wenwen and Rao, Anyi and Lin, Ning and Su, Bing and Wang, Jiaqi},
  booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
  pages={5302--5310},
  year={2023}
}

Acknowledgement

  • The codebase is from MS2L.
  • The visual features are based on ST-GCN.
  • The semantic features are based on Sentence-BERT.
  • The baseline methods are from SynSE.

Licence

This project is licensed under the terms of the MIT license.

Contact

For any questions, feel free to contact: [email protected]


