
neurodoro's Introduction

NEURODORO


A brain-responsive Pomodoro timer for maximum productivity.

Overview

The goal behind Neurodoro is to create a fun, helpful application for EEG in everyday life. It is also an opportunity to practice applying machine-learning libraries to EEG data.

If you've ever used the Pomodoro technique, you know that it can help you avoid procrastination by breaking work up into manageable chunks. However, you might also know how annoying it is when the technique forces you to stop working while you're still on task and 'in the zone'. What if we could make an app that alters the length of a Pomodoro session, giving you more time if you're concentrating and less if you're distracted? That's Neurodoro.

Neurodoro is still under heavy development. So far, we have built a simple Pomodoro timer app with React Native that uses beta/theta band ratios from a Muse headband to update the amount of time remaining in the work session. However, this simple brainwave-based classifier doesn't perform as well as we'd like, which is why we've also included a cognitive test that creates labeled attention-performance data for training a deep neural network. By recording large datasets of user brain data labeled with the difficulty and performance scores from the cognitive test, we hope to develop an algorithm that can determine whether a user's attentional and cognitive performance is high or low from a 2s epoch of their brainwaves.

Because we want this classifier to run locally on a smartphone with a continuous stream of data from the Muse, we are performing the majority of our ML development in TensorFlow, which can be exported to Android.

Collecting Data

If you have an Android phone and a Muse headband, one of the best ways to help the Neurodoro project is to take our cognitive test while wearing your Muse and help us build a dataset. Download the app from the Play store, select 'Collect Data', and follow the instructions. Make sure you're connected to Wi-Fi, because your EEG data will be streamed to our database as you take the test.

Note: our cognitive test, which runs on the Phaser engine, is also open source. Find it here: https://github.com/jdpigeon/corvo

Contact

If you want to get involved with the Neurodoro project, get in touch with us on the NeuroTechX Slack or create an issue. You'll find our thoughts, discussions, and plans to work together in the #neurodoro channel.

Setup

  1. Install and set up React Native. Note: Neurodoro uses lots of native code, so create-react-native-app and Expo are not an option. Follow the instructions for "Building Apps with Native Code." You may also need to install the JDK, Node, Watchman, and the Gradle Daemon
  2. Install yarn
  3. Clone this repo: git clone https://github.com/NeuroTechX/neurodoro.git
  4. Run yarn install in the neurodoro folder
  5. Connect an Android device with USB debug mode enabled. Because the LibMuse library depends on an ARM architecture, emulators are not an option
  6. Run react-native start to start the React packager
  7. In a new terminal, run adb reverse tcp:8081 tcp:8081 to make sure the debug server is connected to your device, then react-native run-android to install Neurodoro

neurodoro's People

Contributors

farfan92, jdpigeon, jharris1679, lcoome, leonfrench


neurodoro's Issues

Add beta/theta ratio as control metric for evaluating algorithm performance

As a sanity check, we should include a beta/theta ratio metric in our ML work. This should be relatively straightforward to do with the MNE library.

Details on the FFT and band means used in our app:

  • FFT input length, FFT length, and sampling rate: 256
  • Epoch length: 256 samples (1s)
  • Epoch interval: 250 ms (4 times/s)
  • FFT buffer smoothed over the last 4 epochs
  • Theta range: 4–8 Hz
  • Beta range: 13–30 Hz
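
The parameters above can be sketched in Python for the offline sanity check. This is a minimal illustration using SciPy's Welch PSD estimate rather than the app's own FFT pipeline; the function names are ours, not the app's.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate (Hz), matching the app's parameters above

def band_power(psd, freqs, lo, hi):
    """Mean PSD within [lo, hi) Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def beta_theta_ratio(epoch, fs=FS):
    """Beta (13-30 Hz) / theta (4-8 Hz) power ratio for one 256-sample epoch."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), fs))
    return band_power(psd, freqs, 13, 30) / band_power(psd, freqs, 4, 8)

# Example: a dominant 6 Hz (theta) oscillation yields a ratio well below 1
t = np.arange(FS) / FS
theta_signal = np.sin(2 * np.pi * 6 * t)
```

In MNE, the equivalent would come from its PSD functions applied to epoched data, but the band arithmetic is the same.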

What is your libmuse version?

Thank you very much for this repo.
I only had libmuse_5.4.0 from my 2016 project, to which I'm now adding the Emotiv Insight, Melon, and maybe more.

I found your libmuse_android.jar to be bigger in size and from 2018.
Please tell me what version yours is :-)
InteraXon stopped offering an SDK :-(

Greetings from Germany,
my does-them-all app: https://youtu.be/dSY7PYpg-c0

Rewrite timestamping code in Neurodoro to be sequential

We don't care about absolute timestamp values, and the packets coming from the Muse have confusingly overlapping values. We're going to rewrite our data-recording code as follows:

  1. Collect start timestamp at beginning of session
  2. Each sample gets a sequence number
  3. SessionInfo contains start and end time as well as user-specific identifiers
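
The steps above can be sketched like this. A minimal Python sketch, not the app's actual (JavaScript) recording code; the class and field names are hypothetical, and the 256 Hz rate is an assumption for illustration.

```python
import time
from dataclasses import dataclass, field

FS = 256  # assumed EEG sampling rate (Hz) for the illustration

@dataclass
class SessionInfo:
    user_id: str                                     # user-specific identifier
    start_time: float = field(default_factory=time.time)  # collected at session start
    end_time: float = 0.0                            # filled in at session end

@dataclass
class Sample:
    seq: int       # monotonically increasing sequence number, not a timestamp
    values: tuple  # one reading per electrode

def absolute_time(session, sample, fs=FS):
    """Recover an absolute timestamp from the sequence number alone."""
    return session.start_time + sample.seq / fs

session = SessionInfo(user_id="anon-1", start_time=1000.0)
samples = [Sample(seq=i, values=(0.0,) * 4) for i in range(5)]
```

Because samples are numbered rather than individually timestamped, ordering is unambiguous even when packets arrive with overlapping clock values.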

Export first public dataset

In order to help people contribute, we should be able to easily share all of our CORVO data collected through the app.

There's a lot of opportunity for over-engineering here. To start, let's try using GCP's export function combined with a Python script to clean and organize the data.
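
The cleaning script could be as small as this. A hedged sketch, assuming the export is CSV and has per-session sequence numbers; the column names (session_id, seq, tp9) are hypothetical, not the actual schema.

```python
import io
import pandas as pd

def clean_export(csv_text):
    """Drop incomplete rows and restore per-session sample order."""
    df = pd.read_csv(io.StringIO(csv_text))
    df = df.dropna()                            # discard incomplete samples
    df = df.sort_values(["session_id", "seq"])  # restore time-series order
    return df.reset_index(drop=True)

# Toy export: two interleaved sessions plus one incomplete row
raw = """session_id,seq,tp9
b,1,3.0
a,0,1.0
a,2,
b,0,2.5
a,1,1.5
"""
cleaned = clean_export(raw)
```

From here the frame could be split by session_id and written out as one file per session.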

Crash and restart of main activity during pomodoro session

undefined is not a function (evaluating 'e.state.scoreBuffer.reduce(function(e,t){return e+t})')

This error points to the line that reduces (sums) the scoreBuffer, which is filled up with concentration scores as they come in. Could the score buffer be getting set to null?

Keras CNN Classifier

Neural network approach for best performance and potential integration into app.

Inception and VGG16 both do a good job classifying eyes-open vs eyes-closed EEG spectrograms. In order to make this work for Neurodoro data, we'll need to:

  1. Tweak the CNN input so that it takes numerical data rather than images

  2. Figure out how to take a pretrained image-classification CNN from classification to regression

  • rewrite the final few layers?
  • train on examples with difficulty/performance labels?

  3. Train & tweak
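
Step 2 could look something like this. A sketch only: it uses weights=None so it stays self-contained (in practice you'd load ImageNet weights and freeze the early blocks), and the input size and head-layer widths are illustrative assumptions, not decided values.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Convolutional backbone without the classification top.
# weights=None keeps this sketch offline; use "imagenet" for real transfer learning.
backbone = VGG16(weights=None, include_top=False, input_shape=(96, 96, 3))

# Replace the final classification layers with a regression head that
# predicts a single continuous attention/performance score.
model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),  # linear output for regression
])
model.compile(optimizer="adam", loss="mse")  # MSE instead of cross-entropy
```

Training would then use epochs labeled with difficulty/performance scores as the regression targets.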

Adapting the Dynamic Adaptive Online Assessment System

The first step in using the Dynamic Adaptive Online Assessment System to collect data for Neurodoro's classifier training is to modify it so it can run in our app and lay down training info alongside the EEG data being collected:

  • Modify task parameters to allow epochs of optimal performance to be identified
  • Remove sign in requirement
  • Use Window.postMessage() to send difficulty and performance data to the app after every trial. We can use Webview's onPostMessage() function to grab this data
  • QA for making sure test works well on phone screen (might need to bridge to React Native for button presses?)

TPOT Regression Classifier

Let's see how good a performance we can get from a traditional scikit-learn algorithm. Whatever we come up with could inform what we put into the app.

We've already gotten ~68% just doing focused vs relaxed samples. For this next push let's organize the problem as a regression with an X matrix containing all freq bins for each electrode and a Y vector containing cognitive test data.

Outstanding questions:
To accurately train on something that will be reflected during Pomodoro work sessions, we'll want to use a smart metric from the cognitive test. It seems like periods in which performance drops after 'mental fatigue' sets in and difficulty starts to decline are what we want to identify. In that case, the derivative of difficulty is a really good measure to have. Does performance capture that accurately? How might we normalize the performance metric to make it most responsive to mental fatigue?

NaNs for CORVO difficulty score

We often get NaNs in the difficulty score from CORVO.

I cover for NaNs in the app by converting them all to zeros, but they're obscuring some important data from us.

Restrict CORVO cognitive test to one test type?

CORVO (our cognitive test webapp) seems to be working well, but the fact that it switches to different types of tests partway through might throw off our classifier.

Based on feedback from participants, the last test with the numbers is especially easy and probably doesn't elicit the same degree of concentration.

I think we need to stick to one task for the first iteration of the classifier.

Reformat BigQuery data so that it maintains time series ordering

Our current data format and cloud functions aren't maintaining the order of our time series data.

Let's first see if the timestamps attached to our data can be used to overcome this. If not, maybe we can either change each BQ row to a chunk of samples or redo our insert function to ensure ordering.

Add sessionInfo pubsub topic

We're breaking sessionInfo packets out into a different pubsub topic. I've set it up in the app, but we still need to create the topic and give the neurodoro service account permission on it.

Upgrade to React Native 0.50

Just tried to do this this morning and ran into some serious issues, mostly, it seems, with React Native Router Flux and its dependencies (mobx-react).

When the time comes to upgrade to a new RN, it might also be a good time to refactor and switch from Router Flux to react-router.

Fix Phaser layout so that CORVO test is centered

Currently, the CORVO test's layout is pretty messed up. It sits a little too far to the right on most phones and doesn't fill the screen nicely.

This isn't for lack of effort. I've spent about 8 hours trying to fix the test's responsive layout, but was unable to get it to work correctly. What we have now is the best I could do to get the test playable on most devices.

In my opinion, our tech stack has gotten us into some pretty tricky territory. Phaser wasn't really designed to run on mobile and, on top of that, we're not running in a typical mobile browser, but an Android WebView embedded in a native app. IIRC, the issue is that the browser height and width dimensions reported by the WebView to Phaser are completely mismatched from the phone's actual dimensions, even when controlling for pixel vs display pixel. However, I had no experience with Phaser before this project so maybe I'm missing something.

Prototype Inception-based classifier for eyes closed/open data

Following the Tensorflow for Poets tutorial, I've tested the ability of a retrained Inception network to identify open vs closed eye epochs of data.

The idea is to convert our EEG data into spectrograms (2s epochs) and then feed these images (as JPGs to start with) to this classifier, teaching it to distinguish between open- and closed-eye spectrograms.

However, taking a look at the spectrograms we're getting from Neurodoro data, I want to reevaluate this step. There may be errors in the processing code in the jupyter notebook or even in the app's filtering code.
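
For re-checking the spectrogram step outside the notebook, the epoch-to-image conversion can be sketched with SciPy. The window length, overlap, and the synthetic 10 Hz test signal are assumptions for illustration, not the notebook's actual parameters.

```python
import numpy as np
from scipy.signal import spectrogram

FS = 256            # sampling rate (Hz)
EPOCH_LEN = 2 * FS  # 2 s epochs, as described above

# Synthetic 2 s epoch: alpha-band (10 Hz) sine plus noise
rng = np.random.default_rng(0)
t = np.arange(EPOCH_LEN) / FS
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(EPOCH_LEN)

# Short overlapping windows give a (freq x time) image per epoch
freqs, times, Sxx = spectrogram(epoch, fs=FS, nperseg=64, noverlap=32)

# For a clean 10 Hz signal, the strongest frequency row should sit near 10 Hz
peak_freq = freqs[Sxx.mean(axis=1).argmax()]
```

A quick check like this (does a known oscillation land in the right rows?) would catch processing or filtering bugs before any images reach the classifier.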

Add more instructions for the participants during the attention data collection

I got to try the data collection experiment in Neurodoro during the first NeuroTechTO hack night this year. Because I didn't exactly follow the expected procedure during the data collection, Dano suggested that I create an issue here about what other instructions could be given to participants before they start their data collection.

I think the first thing is that it should say that participants should try to stay focused on the task and avoid as much distraction as possible. This includes walking around, listening to other things, etc. My concern is that participants may not realize how much attention they actually need to pay to the task in order to complete it well before they click start, and they could be listening to music or talking with friends during the experiment even when they actually want to contribute good data.

The other thing I was wondering while completing the task was whether we should just rely on instinct ("feels like...") to determine the answer, or actually try our best to think hard and strive for the closest possible answer, especially for the harder comparisons (both pictures in the pair have a large number of dots and the difference between the counts is small). Maybe I wasn't supposed to think about this question at all, but I did because I supposed the two different processes would involve different brain mechanisms and I wasn't sure which one the experimenter was tapping into. After I talked to Dano I understood the second approach is correct. I'm not sure if other participants would have the same question, but if so I suggest giving an instruction to "try as hard as possible to get the correct answer in the allowed time".

I hope I did not include anything that's already in the experiment instructions. And am I right that the main goal of the experiment is to put participants into a state of being very focused on a task and then collect their EEG data? If so, I wonder if this could be indicated directly to participants before the start of the experiment, so that they have a guiding principle for answering any questions they may have.

Create simple SVM classifier in Tensorflow

So, we know that we're going to have to collect a bunch of data and go deep to make something really impressive, but for our debut in the app store we might as well include a 'decent enough' classifier so people can get some benefit.

The cross-validation accuracy of an SVM applied to denoised PSD data isn't terrible. Why don't we just do that?

  • Select the best features (research says alpha, beta, and the alpha/beta ratio). Also try some kind of mathematical feature selection?
  • Collect some higher-quality training data with the new version of the app and the cognitive testing screener
  • Tune hyperparameters and get decent accuracy in scikit-learn
  • Implement the same SVM in Tensorflow
  • Port into app
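
The scikit-learn step could look like this before the TensorFlow port. Synthetic alpha/beta band powers stand in for real denoised PSD features here, and the class separation is fabricated for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200  # epochs per class

# Features per epoch: alpha power, beta power, alpha/beta ratio
# (the bands suggested by the research cited above); synthetic here
alpha = np.concatenate([rng.normal(2.0, 0.3, n), rng.normal(1.0, 0.3, n)])
beta = np.concatenate([rng.normal(1.0, 0.3, n), rng.normal(2.0, 0.3, n)])
X = np.column_stack([alpha, beta, alpha / beta])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = relaxed, 1 = focused

# Scaling matters for SVMs, so bundle it into the pipeline
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
```

Once hyperparameters are settled here, the same decision function (kernel, support vectors, coefficients) is what would need to be reproduced in TensorFlow for the in-app version.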

UI Changes needed for tablet

Running the app on a tablet looks a little weird. Here are some UI changes to make it look better:

  • Change OK to Yes in go to timer Modal
  • Dramatically decrease disclaimer text size
  • Attention test needs to be recentered (could use a heavy overhaul, actually)

CORVO difficulty scaling and performance metric

After playing around with quite a bit of machine learning on datasets collected with the current CORVO implementation, we're concerned that there are some patterns in the performance metric that are hindering our ability to fit it to brain data.

  1. The difficulty metric doesn't appear to be scaling correctly. Currently, it has the tendency to increase shortly after the test begins, peak, and then decline down to 0. Thereafter, the test gets stuck at a low difficulty.

  2. The performance metric, while responsive to missed trials, also tends to decrease slowly over time (perhaps as a function of the difficulty). It looked nice at first, maybe as an indicator of fatigue over time, but it shows up in nearly every dataset with almost exactly the same slope. Thus, we don't think it's coming directly from the user, but from the performance metric implementation.

Here's an example of what difficulty (blue) and performance (green) look like over a session:
[image: corvodataexample]

What we'd love to try is to turn off all scaling of difficulty and performance and see if that improves the accuracy of our regression algorithms. I agree there's something to be gained from having dynamic difficulty, but a constant high difficulty combined with a performance score that's always the same should minimize sources of variance in our labels.
