
Human Activity Recognition Transformer (HART) is a transformer-based architecture specifically adapted for IMU sensing devices. Findings show that HART uses fewer parameters and FLOPs while achieving state-of-the-art results.

License: MIT License


Human Activity Recognition Transformer (HART) ❤️

Tensorflow implementation of HART/MobileHART:

Lightweight Transformers for Human Activity Recognition on Mobile Devices [Paper]

Transformer-based Models to Deal with Heterogeneous Environments in Human Activity Recognition [Paper]

Sannara Ek, François Portet, Philippe Lalanda

[Figure: HART architecture]

If our project is helpful for your research, please consider citing:

@article{ek2023transformer,
  title={Transformer-based models to deal with heterogeneous environments in Human Activity Recognition},
  author={Ek, Sannara and Portet, Fran{\c{c}}ois and Lalanda, Philippe},
  journal={Personal and Ubiquitous Computing},
  pages={1--14},
  year={2023},
  publisher={Springer}
}

Table of Contents

1. Updates

09/02/2023 Initial commit: Code of HART/MobileHART is released.

2. Installation

2.1 Dependencies

This code was implemented with Python 3.7, Tensorflow 2.10.1, and CUDA 11.2. Please refer to the official Tensorflow installation instructions. If CUDA 11.2 has been properly installed:

pip install tensorflow==2.10.1
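
To verify that the Tensorflow install can see a GPU (required, since the pipeline does not support CPU-only training; see Section 3), a quick sanity-check sketch:

import tensorflow as tf

# Confirm the expected version and that at least one GPU is visible;
# the training pipeline will not work on CPU alone.
print(tf.__version__)  # expected: 2.10.1
print(tf.config.list_physical_devices("GPU"))  # should list at least one GPU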

Only Tensorflow and Numpy are required to import model.py into your working environment.

To run our training and evaluation pipeline, additional dependencies are needed. Please launch the following command:

pip install -r requirements.txt

Our baseline experiments were conducted on a Debian GNU/Linux 10 (buster) machine with the following specs:

CPU: Intel(R) Xeon(R) CPU E5-2623 v4 @ 2.60GHz

GPU: Nvidia GeForce Titan Xp, 12GB VRAM

Memory: 80GB

2.2 Data

We provide scripts to automate downloading and preprocessing the datasets used in this study. See the scripts in the 'datasets' folder; e.g., for the UCI dataset, run DATA_UCI.py.

For running the 'Combined' dataset training and evaluation pipeline, all datasets must first be downloaded and processed. Please run all scripts in the 'datasets' folder (a convenience sketch follows the tip below).

Tip: if the download pipeline keeps failing via the provided scripts, manually downloading the datasets and placing them in the 'datasets/dataset' folder may be a more stable alternative.
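
For the 'Combined' pipeline, a minimal convenience sketch that runs every preprocessing script in turn; the DATA_*.py naming pattern is an assumption generalized from DATA_UCI.py:

import glob
import subprocess

# Run every DATA_*.py preprocessing script in the 'datasets' folder.
for script in sorted(glob.glob("datasets/DATA_*.py")):
    subprocess.run(["python", script], check=True)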

UCI

https://archive.ics.uci.edu/ml/datasets/human+activity+recognition+using+smartphones

MotionSense

https://github.com/mmalekzadeh/motion-sense/tree/master/data

HHAR

http://archive.ics.uci.edu/ml/datasets/Heterogeneity+Activity+Recognition

RealWorld

https://www.uni-mannheim.de/dws/research/projects/activity-recognition/#dataset_dailylog

SHL Preview

http://www.shl-dataset.org/download/

3. Quick Start

Due to constraints with Tensorflow, the model can currently only be trained on a GPU; training on a CPU will not work.

After downloading and preprocessing the desired datasets with the DATA scripts in the 'datasets' folder, launch either the Jupyter notebook or the Python script to start the training pipeline.

Sample launch commands are provided in the following sections.

3.1 Importing HART or MobileHART to your project

HART and MobileHART are packaged as Keras sequential models.

To use our models with their default hyperparameters, please import and add the following code:

import model

input_shape = (128, 6)  # (window length, channels) of your input data
activityCount = 6       # number of activity classes to predict

HART = model.HART(input_shape, activityCount)
MobileHART = model.mobileHART_XS(input_shape, activityCount)

Then compile and fit with the desired optimizer and loss, as with any conventional Keras model.
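
For reference, a minimal compile-and-fit sketch; the Adam optimizer, learning rate, loss, and the train_x/train_y arrays are illustrative assumptions rather than the repository's defaults:

import tensorflow as tf

# train_x: windowed IMU data with shape (num_windows, 128, 6)
# train_y: integer activity labels in [0, activityCount)
HART.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
HART.fit(train_x, train_y, epochs=200, batch_size=64, validation_split=0.1)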

3.2 Running HART in our provided pipeline

To run HART with its default hyperparameters on the MotionSense dataset, please launch:

python main.py --architecture HART --dataset MotionSense --localEpoch 200 --batch_size 64

Replace 'MotionSense' with one of the following to train on a different dataset (a sweep sketch follows the list):

RealWorld, HHAR, UCI, SHL, MotionSense, COMBINED
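
To sweep all datasets in one run, a small sketch built on the documented command line above:

import subprocess

# Run the HART pipeline once per dataset, reusing the documented CLI flags.
for dataset in ["RealWorld", "HHAR", "UCI", "SHL", "MotionSense", "COMBINED"]:
    subprocess.run(
        ["python", "main.py", "--architecture", "HART", "--dataset", dataset,
         "--localEpoch", "200", "--batch_size", "64"],
        check=True,
    )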

3.3 Running MobileHART in our provided pipeline

To run MobileHART with its default hyperparameters on the MotionSense dataset, please launch:

python main.py --architecture MobileHART --dataset MotionSense --localEpoch 200 --batch_size 64

Replace 'MotionSense' with one of the following to train on a different dataset:

RealWorld, HHAR, UCI, SHL, MotionSense, COMBINED

4. Position-Wise and Device-Wise Evaluation

4.1 Position-Wise evaluation with the RealWorld Dataset

To run HART/MobileHART in a leave-one-position-out pipeline on the RealWorld dataset, please launch:

python main.py --architecture HART --dataset RealWorld --positionDevice chest --localEpoch 200 --batch_size 64

Replace 'chest' with one of the following to test on a different position (a sweep sketch follows the list):

chest, forearm, head, shin, thigh, upperarm, waist
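
To evaluate every position in turn, the same pattern as the dataset sweep applies (the device-wise sweep in 4.2 is identical with the device list from that section swapped in):

import subprocess

# Leave-one-position-out sweep over all RealWorld placements.
for position in ["chest", "forearm", "head", "shin", "thigh", "upperarm", "waist"]:
    subprocess.run(
        ["python", "main.py", "--architecture", "HART", "--dataset", "RealWorld",
         "--positionDevice", position, "--localEpoch", "200", "--batch_size", "64"],
        check=True,
    )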

4.2 Device-Wise evaluation with the HHAR Dataset

To run HART/MobileHART in a leave-one-device-out pipeline on the HHAR dataset, please launch:

python main.py --architecture HART --dataset HHAR --positionDevice nexus4 --localEpoch 200 --batch_size 64

Replace 'nexus4' with one of the following to test on a different device:

nexus4, lgwatch, s3, s3mini, gear, samsungold

5. Results

The table below shows the results obtained with HART and MobileHART through our training and evaluation pipeline on the five individual datasets and the combined dataset.

Architecture | UCI   | MotionSense | HHAR  | RealWorld | SHL   | Combined
HART         | 94.49 | 98.20       | 97.36 | 94.88     | 79.49 | 85.61
MobileHART   | 97.20 | 98.45       | 98.19 | 95.22     | 81.36 | 86.74

The image below shows the result of our sensor-wise attention compared against conventional attention:

[Figure: sensor-wise attention compared against conventional attention]

6. Acknowledgement

This work has been partially funded by Naval Group, and by MIAI@Grenoble Alpes (ANR-19-P3IA-0003). This work was also granted access to the HPC resources of IDRIS under the allocation 2022-AD011013233 made by GENCI.
