
eeg_motor_imagery_decoding's Introduction

Decoding Motor Imagery conditions from EEG data using a Convolutional Neural Network

This repository provides Python code for the decoding of different motor imagery conditions from raw EEG data, using a Convolutional Neural Network (CNN).

Environment setup

To run the code, create and activate a dedicated Anaconda environment by typing the following into your terminal:

curl -O https://raw.githubusercontent.com/gifale95/eeg_motor_imagery_decoding/main/environment.yml
conda env create -f environment.yml
conda activate dnn_bci

EEG motor imagery datasets

Two publicly available EEG BCI datasets are decoded here: 5F and HaLT. For the decoding analysis, the 19-channel EEG signal is standardized, downsampled to 100 Hz, and each trial is epoched in the range [-250 ms, 1000 ms] relative to stimulus onset. The data, along with the accompanying paper, can be found in Kaya et al. (2018).
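The preprocessing steps above can be sketched as follows. This is a minimal NumPy sketch, not the repository's actual pipeline: the helper name `preprocess`, the default sampling rate, and the plain decimation (a real pipeline would low-pass filter before downsampling) are illustrative assumptions.

```python
import numpy as np

def preprocess(eeg, onsets, sfreq=200, target_sfreq=100, tmin=-0.25, tmax=1.0):
    """Standardize, downsample, and epoch a continuous EEG recording.

    eeg: array of shape (n_channels, n_samples); onsets: trial-onset sample indices.
    """
    # Standardize each channel (zero mean, unit variance) over the recording.
    eeg = (eeg - eeg.mean(axis=1, keepdims=True)) / eeg.std(axis=1, keepdims=True)
    # Downsample by decimation (illustrative only; filter first in practice).
    factor = sfreq // target_sfreq
    eeg = eeg[:, ::factor]
    onsets = [o // factor for o in onsets]
    # Epoch each trial in [tmin, tmax] relative to onset:
    # at 100 Hz this is -25 to +100 samples, i.e. 125 samples per trial.
    start, stop = int(tmin * target_sfreq), int(tmax * target_sfreq)
    return np.stack([eeg[:, o + start:o + stop] for o in onsets])
```

For 19 channels at 100 Hz, each epoch ends up with shape (19, 125).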

5F dataset

This is a motor imagery dataset of movements of the five fingers of one hand: thumb, index, middle, ring, and pinkie. The following files are used for the analyses:

  1. 5F-SubjectA-160405-5St-SGLHand.mat
  2. 5F-SubjectB-160316-5St-SGLHand.mat
  3. 5F-SubjectC-160429-5St-SGLHand-HFREQ.mat
  4. 5F-SubjectE-160415-5St-SGLHand-HFREQ.mat
  5. 5F-SubjectF-160210-5St-SGLHand-HFREQ.mat
  6. 5F-SubjectG-160413-5St-SGLHand-HFREQ.mat
  7. 5F-SubjectI-160719-5St-SGLHand-HFREQ.mat

To run the code, add the data files to the directory /project_dir/datasets/5f/data/.

HaLT dataset

This dataset consists of six motor imagery conditions: left hand, right hand, left foot/leg, right foot/leg, tongue, and passive/neutral state. The following files are used for the analyses:

  1. HaLTSubjectA1602236StLRHandLegTongue.mat
  2. HaLTSubjectB1602186StLRHandLegTongue.mat
  3. HaLTSubjectC1602246StLRHandLegTongue.mat
  4. HaLTSubjectE1602196StLRHandLegTongue.mat
  5. HaLTSubjectF1602026StLRHandLegTongue.mat
  6. HaLTSubjectG1603016StLRHandLegTongue.mat
  7. HaLTSubjectI1606096StLRHandLegTongue.mat
  8. HaLTSubjectJ1611216StLRHandLegTongue.mat
  9. HaLTSubjectK1610276StLRHandLegTongue.mat
  10. HaLTSubjectL1611166StLRHandLegTongue.mat
  11. HaLTSubjectM1611086StLRHandLegTongue.mat

To run the code, add the data files to the directory /project_dir/datasets/halt/data/.
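Assuming the directory layout above, a recording can be loaded with SciPy. This is a hedged sketch: the helper name `load_bci_file` is illustrative and not part of the repository's code.

```python
import os
from scipy.io import loadmat  # assumes scipy is installed

def load_bci_file(project_dir, dataset, fname):
    """Load one .mat recording from /project_dir/datasets/<dataset>/data/.

    Returns the raw dict produced by scipy.io.loadmat; see Kaya et al. (2018)
    for the internal structure of the recordings.
    """
    path = os.path.join(project_dir, 'datasets', dataset, 'data', fname)
    return loadmat(path)
```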

CNN model

The decoding analysis is performed using the shallow ConvNet architecture described in Schirrmeister et al., 2018.

Cropped trials

This is analogous to a data augmentation technique: instead of full trials, the CNN is fed crops (across time) of the original trials. This procedure yields more training data and has been shown to increase decoding accuracy. More information on cropped-trial decoding can be found in Schirrmeister et al., 2018, and a tutorial on the Python implementation of the method is available on the Braindecode website.
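The cropping idea can be sketched as follows. The helper name `make_crops` and the crop length and stride values are illustrative assumptions, not the settings used by this repository or by Braindecode.

```python
import numpy as np

def make_crops(epochs, labels, crop_len=100, stride=10):
    """Cut each trial into overlapping time crops; each crop inherits the trial label.

    epochs: array of shape (n_trials, n_channels, n_samples).
    """
    crops, crop_labels = [], []
    for trial, y in zip(epochs, labels):
        n_samples = trial.shape[-1]
        # Slide a window of crop_len samples across the trial.
        for start in range(0, n_samples - crop_len + 1, stride):
            crops.append(trial[:, start:start + crop_len])
            crop_labels.append(y)
    return np.stack(crops), np.array(crop_labels)
```

With 125-sample trials, a 100-sample crop and a stride of 10 turn each trial into 3 training examples.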

Inter-subject learning

Inter-subject learning is a zero-shot learning approach that asks how well a CNN trained to decode the motor imagery trials of a set of subjects generalizes to a held-out subject. In other words, it tests the feasibility of pre-trained EEG BCI devices that work on novel subjects out of the box, without requiring any training data from those subjects.
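The corresponding data split is leave-one-subject-out, which can be sketched as follows (the helper name is illustrative):

```python
def loso_splits(subjects):
    """Leave-one-subject-out: for each subject, train on all the others
    and test on the held-out one."""
    for test_sub in subjects:
        train_subs = [s for s in subjects if s != test_sub]
        yield train_subs, test_sub
```

For the 5F dataset this produces seven train/test splits, one per subject.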

Model training and results

The CNN models have been trained using the following parameters:

  • Learning rate: 0.001
  • Weight decay: 0.01
  • Batch size: 128
  • Training epochs: 500

Results are shown for the training epochs that yielded the highest decoding accuracies:

[5F dataset results table]
[HaLT dataset results table]
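Picking the reporting epoch from a validation curve can be sketched as follows (a hypothetical helper, not code from this repository):

```python
def best_epoch(val_accuracies):
    """Return (epoch_index, accuracy) for the epoch with the highest
    validation accuracy over a training run."""
    best = max(range(len(val_accuracies)), key=lambda e: val_accuracies[e])
    return best, val_accuracies[best]
```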

eeg_motor_imagery_decoding's People

Contributors

gifale95

eeg_motor_imagery_decoding's Issues

Validation Procedure

Hi

Are the reported results based on a single train/validation/test split or on k-fold cross-validation?

SubjectA HFreq

Hi

Thanks for the wonderful work. Just wondering if you tried your code on the Subject A HFREQ data (1000 Hz sampling rate).
Is there a reason it was not selected? I tried running this file with my own model and for some reason it doesn't train at all (chance-level accuracy). Just wondering if you faced the same issue.

Thanks
