Learning Oculomotor Behaviors from Scanpath

Feel free to cite the following article if you find this repo or the research helpful. Thanks!

Beibin Li, Nicholas Nuechterlein, Erin Barney, Claire Foster, Minah Kim, Monique Mahony, Adham Atyabi, Li Feng, Quan Wang, Pamela Ventola, Linda Shapiro, and Frederick Shic. 2021. Learning Oculomotor Behaviors from Scanpath. In Proceedings of the 2021 International Conference on Multimodal Interaction (ICMI '21), October 18–22, 2021, Montréal, QC, Canada. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3462244.3479923

@inproceedings{Li2021Oculomotor,
  title={Learning Oculomotor Behaviors from Scanpath},
  author={Li, Beibin and Nuechterlein, Nicholas and Barney, Erin and Foster, Claire and Kim, Minah and Mahony, Monique and Atyabi, Adham and Feng, Li and Wang, Quan and Ventola, Pamela and others},
  booktitle={Proceedings of the 2021 International Conference on Multimodal Interaction (ICMI '21), October 18--22, 2021, Montréal, QC, Canada},
  organization={ACM},
  year={2021},
  url={https://doi.org/10.1145/3462244.3479923}
}

Oculomotor Behavior Framework (OBF)

Code Organization

OBF folder

All training code is inside the obf/ folder; the models are defined in obf/model/ae.py.

Sample data

We provide sample data and data pre-processing code in the sample_data/ folder.

Please download public eye-tracking datasets, and then use the provided Python scripts to process the signals.
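
The exact preprocessing depends on the dataset, but the shape of the task is the same: load raw gaze samples and cut them into fixed-length windows. Below is a minimal, illustrative sketch, not the repo's actual script; the column names ("time", "gaze_x", "gaze_y") and the window length are assumptions you should adapt to the dataset you download.

import numpy as np
import pandas as pd

def load_gaze_csv(path):
    """Read one recording into an (N, 3) array of (timestamp_ms, x, y)."""
    df = pd.read_csv(path)  # hypothetical columns: "time", "gaze_x", "gaze_y"
    return df[["time", "gaze_x", "gaze_y"]].to_numpy(dtype=np.float32)

def split_into_windows(signal, window_ms=5000):
    """Cut one recording into fixed-length windows for pre-training."""
    windows, current, t0 = [], [], signal[0, 0]
    for row in signal:
        if row[0] - t0 >= window_ms and current:
            windows.append(np.stack(current))  # flush the finished window
            current, t0 = [], row[0]
        current.append(row)
    if current:
        windows.append(np.stack(current))
    return windows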

Configs and JSON settings

We control the experiment settings with JSON files. Some sample configurations are in the config/ folder, and an illustrative configuration sketch follows the list below.

  • JSON settings
    • "pretrain data setting": defines where to load pre-training unlabelled data from disk.
      • "datasets": contains the path and signal length (in milliseconds) for input data.
      • "batch size": an integer defining the mini-batch size.
    • "fixation identification setting": contains parameters for the I-VT algorithm (sketched below).
    • "experiment": settings for the pre-training process.

How to Use

Please check the Pretrain.ipynb Jupyter Notebook to see examples of how to train a new model from scratch. Alternatively, you can use the provided model directly for your downstream application, as shown below.
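
As a rough orientation, loading a pre-trained checkpoint for inference might look like the sketch below. The class name (AutoEncoder), the encoder attribute, the checkpoint path, and the input shape are all hypothetical; take the real names from Pretrain.ipynb and obf/model/ae.py.

import torch
from obf.model.ae import AutoEncoder  # hypothetical class name

model = AutoEncoder()
state = torch.load("checkpoints/obf_pretrained.pt", map_location="cpu")  # hypothetical path
model.load_state_dict(state)
model.eval()

with torch.no_grad():
    # One dummy gaze window: (batch, channels, time); the channel layout
    # depends on the preprocessing, e.g. (x, y, validity).
    scanpath = torch.randn(1, 3, 500)
    embedding = model.encoder(scanpath)  # hypothetical encoder attribute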

Downstream Applications

Please check the stim_prediction.ipynb Jupyter Notebook to see a downstream application example. In this application, we take the pre-trained model and fine-tune it on the MIT1003 dataset to predict which stimulus a subject was watching. A sketch of such a fine-tuning setup follows.
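
The sketch below shows one plausible fine-tuning setup, not the notebook's exact code: wrap the pre-trained encoder with a linear classification head over the 1003 MIT1003 stimuli. The encoder interface and embedding size are assumptions.

import torch
import torch.nn as nn

class StimulusClassifier(nn.Module):
    def __init__(self, encoder, embed_dim=128, n_stimuli=1003):
        super().__init__()
        self.encoder = encoder              # pre-trained OBF encoder
        self.head = nn.Linear(embed_dim, n_stimuli)

    def forward(self, scanpath):
        z = self.encoder(scanpath)          # (batch, embed_dim) embedding
        return self.head(z)                 # stimulus logits

def set_encoder_trainable(model, trainable):
    """Freeze the encoder first, then optionally unfreeze to fine-tune fully."""
    for p in model.encoder.parameters():
        p.requires_grad = trainable

# Optimize only the parameters that are still trainable.
# optimizer = torch.optim.Adam(
#     (p for p in model.parameters() if p.requires_grad), lr=1e-4)

A common schedule is to train the head with the encoder frozen for a few epochs, then unfreeze the encoder at a lower learning rate.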
