
Algonauts2021-devkit

Update: To make participation simpler, we have created a Google Colab notebook where you can prepare a challenge submission online.

This repository provides the code to predict human fMRI responses to videos from the features of a baseline deep neural network (DNN) model (AlexNet). We also provide scripts to prepare a submission for the Algonauts 2021 challenge. The terms "features" and "activations" are used interchangeably in the code.

IMPORTANT: The AlexNet example is provided to explain how to prepare submissions using a baseline model. Please prepare your results in the same format to participate in the challenge!

Setup

  • Install anaconda
  • Clone the repository git clone https://github.com/Neural-Dynamics-of-Visual-Cognition-FUB/Algonauts2021_devkit.git
  • Change working directory cd Algonauts2021_devkit
  • Run conda create -n Algonauts2021 python=3.7 anaconda to set up a new conda environment with the required libraries
  • Activate environment conda activate Algonauts2021
  • Install nilearn, pytorch, decord, and opencv (an example install command is given after this list)
  • Download the data here (if not already downloaded) and unzip it in the working directory. The data is organized in two directories:
    • AlgonautsVideos268_All_30fpsmax: contains all 1102 videos, split into training (first 1000) and test (last 102) videos.
    • participants_data_v2021: contains fMRI responses to the training videos for both challenge tracks.
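
For the install step, assuming the standard PyPI package names, the extra libraries can for example be installed inside the activated environment with pip install nilearn torch torchvision decord opencv-python (torchvision is included here only because the feature-extraction sketch in Step 1 uses it).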

Step 1: Extract AlexNet features for the videos

  • Run python feature_extraction/generate_features_alexnet.py to extract features from all layers of a pretrained AlexNet for every video.
  • With the default arguments, the script expects a directory ./AlgonautsVideos268_All_30fpsmax/ with the video sequences and saves the features in a directory called ./alexnet.
  • This code saves AlexNet features for every frame of every video, as well as a PCA transformation of these features down to the top 100 components. The resulting activations are split into train and test sets (a minimal sketch of this pipeline is given after the argument list below).
Arguments:
  • -vdir --video_data_dir: Directory where the downloaded video sequences are stored (e.g., ./AlgonautsVideos268_All_30fpsmax/)
  • -sdir --save_dir: Directory where the extracted features should be saved
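
Below is a minimal sketch of the kind of pipeline generate_features_alexnet.py implements, not the devkit's exact code: sample frames from each video, pass them through a pretrained AlexNet, average activations over frames, and reduce the features to the top 100 PCA components. The .mp4 extension, frame count, choice of layer, and frame averaging are illustrative assumptions.

```python
# Sketch only: frame-level AlexNet features averaged per video, then PCA-100.
import glob
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from decord import VideoReader
from sklearn.decomposition import PCA

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.alexnet(pretrained=True).to(device).eval()

# Standard ImageNet preprocessing for AlexNet inputs.
preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def video_features(path, n_frames=16):
    """Average final conv-layer activations over frames sampled from one video."""
    vr = VideoReader(path)
    idx = np.linspace(0, len(vr) - 1, n_frames).astype(int)
    frames = vr.get_batch(idx).asnumpy()                # (n_frames, H, W, 3) uint8
    batch = torch.stack([preprocess(f) for f in frames]).to(device)
    with torch.no_grad():
        acts = model.features(batch)                    # conv activations
    return acts.flatten(start_dim=1).mean(dim=0).cpu().numpy()

videos = sorted(glob.glob("./AlgonautsVideos268_All_30fpsmax/*.mp4"))  # extension assumed
feats = np.stack([video_features(v) for v in videos])

# Reduce to the top 100 PCA components, fitting on the 1000 training videos only,
# then split into train and test sets as described above.
pca = PCA(n_components=100)
train_feats = pca.fit_transform(feats[:1000])
test_feats = pca.transform(feats[1000:])
```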

Step 2: Predict fMRI responses

  • Run python perform_encoding.py to predict fMRI responses for the test videos from AlexNet features (or the features of your own neural network).
  • With the default arguments, the script expects a directory ./participants_data_v2021 with the real fMRI data and ./alexnet/ with the extracted DNN features. It will generate predicted fMRI responses using AlexNet (--model) layer_5 (--layer) for EBA (--roi) of subject 4 (--sub) in validation mode (--mode). The results are stored in a directory called ./results. Running the script with the defaults should return a mean correlation of 0.219 (a minimal sketch of the underlying encoding step is given after the argument list below).
  • Example command for whole-brain data (will take several minutes): python perform_encoding.py -rd ./results -ad ./alexnet/ -model alexnet_devkit -l layer_5 -sub sub01 -r WB -m val -fd ./participants_data_v2021 -v True -b 1000
Arguments:
  • -rd --result_dir: Result directory where the predicted fMRI activity will be saved
  • -ad --activation_dir: Features directory, this directory should contain the DNN features for training the linear regression and predicting test fMRI data (eg. ./alexnet after running Step 1)
  • -model --model: Specify the model name, under which the results will be stored
  • -l --layer: Specify the neural network layer used to fit a linear mapping between activations and fMRI responses on the training videos and to predict the test fMRI responses. For AlexNet this should be layer_X with X between 1 and 8
  • -sub --sub: Select the subject whose fMRI data should be used to train (and validate) the linear regression; this should be subXX with XX in (01, 02, 03, 04, 05, 06, 07, 08, 09, 10)
  • -r --roi: Specify the region of interest (e.g. V1, LOC) from which fMRI data should be used; --roi WB uses the data from the Whole Brain
  • -m --mode: Specify the mode in which the program should run. "val": 10% of the original training data is held out as validation data, and a mean correlation between the real and predicted fMRI responses is reported; "test": all training data is used for training
  • -fd --fmri_dir: Directory which contains all recorded fMRI activity
  • -v --visualize: Visualize correlations in the whole brain (True or False); only available with --roi WB
  • -b --batch_size: Set the number of voxels to fit in one iteration, default is 1000, reduce in case of memory constraints
  • Note: Predicted results should be generated for all combinations of ROIs and subjects. We provide an example script that generates results for all ROIs and all subjects in a given track: run python generate_all_results.py to generate predicted results using the default model and layer.
    • For mini_track please run python generate_all_results.py -t mini_track
    • For full_track please run python generate_all_results.py -t full_track
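
The following is a minimal sketch of the encoding step that perform_encoding.py performs, not the devkit's exact code: fit a linear regression from DNN features to voxel responses on the training videos, predict held-out responses, and report the mean per-voxel Pearson correlation (the "mean correlation" mentioned above). The shapes and synthetic data are purely illustrative.

```python
# Sketch only: linear encoding model with per-voxel correlation scoring.
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_and_score(train_feats, train_fmri, val_feats, val_fmri):
    """Fit a voxel-wise linear regression and return the mean validation correlation."""
    reg = LinearRegression().fit(train_feats, train_fmri)
    pred = reg.predict(val_feats)
    corrs = [np.corrcoef(pred[:, v], val_fmri[:, v])[0, 1]
             for v in range(val_fmri.shape[1])]
    return float(np.mean(corrs))

# Synthetic example: 1000 videos, 100 PCA features, 1000 voxels (matching the
# default --batch_size), with a 90/10 train/validation split as in --mode val.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 100))
Y = X @ rng.standard_normal((100, 1000)) + 0.1 * rng.standard_normal((1000, 1000))
print(fit_and_score(X[:900], Y[:900], X[900:], Y[900:]))
```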

Step 3: Prepare Submission

  • After generating predicted fMRI activity for all ROIs and all subjects in a given track (mini_track and/or full_track) using generate_all_results.py, run python prepare_submission.py to prepare the results for submission. All results will be combined into a single file.
    • For mini_track please run python prepare_submission.py -t mini_track
    • For full_track please run python prepare_submission.py -t full_track
  • With the default arguments, the script expects the results from Step 2 in a directory ./results/alexnet_devkit/layer_5. It prepares the submission for all 9 ROIs (mini_track). To generate results for full_track, change the arguments as described above.
  • The script creates a Pickle file and a zip file (containing the Pickle file) for the corresponding track, which can then be submitted for participation in the challenge (a minimal packaging sketch is given after the argument list below).
  • Submit the mini_track results here and full_track results here
Arguments:
  • -rd --result_dir: Directory containing the predicted fMRI activity from Step 2; should be identical to the result_dir used there.
  • -t --track: mini_track for the specific ROIs, full_track for whole-brain (WB) data. Submissions can be made for either track separately; to submit both, run the script twice, once with mini_track and once with full_track
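
The following is a minimal sketch of the packaging idea behind prepare_submission.py, not the devkit's exact submission format: collect the predicted test responses into a dictionary, pickle it, and zip the pickle. The key layout, ROI names, and array shapes below are illustrative placeholders; the authoritative format is whatever prepare_submission.py produces.

```python
# Sketch only: bundle predictions into a pickle and zip it for upload.
import pickle
import zipfile
import numpy as np

# Illustrative placeholders: one array of predicted test responses per ROI and
# subject (102 test videos x n_voxels; zeros stand in for real predictions).
predictions = {
    roi: {f"sub{s:02d}": np.zeros((102, 1)) for s in range(1, 11)}
    for roi in ["V1", "LOC"]  # example ROI names only
}

with open("mini_track.pkl", "wb") as f:
    pickle.dump(predictions, f)

with zipfile.ZipFile("mini_track.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("mini_track.pkl")
```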

Cite

If you use our code, in part or as is, please cite the paper below:

@misc{cichy2021algonauts,
      title={The Algonauts Project 2021 Challenge: How the Human Brain Makes Sense of a World in Motion}, 
      author={R. M. Cichy and K. Dwivedi and B. Lahner and A. Lascelles and P. Iamshchinina and M. Graumann and A. Andonian and N. A. R. Murty and K. Kay and G. Roig and A. Oliva},
      year={2021},
      eprint={2104.13714},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
