
nipreps / fmriprep

fMRIPrep is a robust and easy-to-use pipeline for preprocessing diverse fMRI data. The transparent workflow dispenses with manual intervention, thereby ensuring the reproducibility of the results.

Home Page: https://fmriprep.org

License: Apache License 2.0

Python 15.52% Shell 0.15% HTML 83.53% Dockerfile 0.29% TeX 0.51% Makefile 0.01%
fmri fmri-preprocessing brain-imaging neuroimaging bids image-processing

fmriprep's Introduction

NiPreps Python module

This repository was created to reserve the top-level nipreps module. This opens the door for various submodules to exist under the nipreps name.

fmriprep's People

Contributors

adelavega, anibalsolon, bbfrederick, bpinsard, chrisgorgo, craigmoodie, danlurie, dimitripapadopoulos, effigies, emdupre, erramuzpe, feilong, frontiers-qc-sops, hippocampusgirl, jdkent, jsmentch, kfinc, madisoth, markushs, mgxd, nirjacoby, oesteban, rciric, romainvala, rwblair, theoschaefer, tsalo, utooley, yarikoptic, zhifangy


fmriprep's Issues

Extracting anatomical workflow

  • Create a new module fmriprep.workflows.anatomical
  • Within that new module, create a new function t1w_preprocessing that returns a nipype workflow corresponding to the current anatomical (T1w) preprocessing. Requirements for this function:
    • It is documented, mentioning the principal processing nodes in the workflow
    • It has input/output nodes to be connected to the main workflow.
    • It has a corresponding test with a subsampled T1w image (@oesteban will provide this one) that runs the workflow in CircleCI (#5)

The last step of this issue is removing the corresponding nodes from the original workflow and connecting the new workflow in their place.

Can't generate graph image with the Docker image

Graphviz may not be installed:

Traceback (most recent call last):
  File "/usr/local/miniconda2/envs/crnenv/bin/fmriprep", line 9, in <module>
    load_entry_point('fmriprep', 'console_scripts', 'fmriprep')()
  File "/root/src/preprocessing-workflow/fmriprep/run_workflow.py", line 162, in main
    workflow.write_graph()
  File "/usr/local/miniconda2/envs/crnenv/lib/python2.7/site-packages/nipype/pipeline/engine/workflows.py", line 444, in write_graph
    format_dot(dotfilename, format=format)
  File "/usr/local/miniconda2/envs/crnenv/lib/python2.7/site-packages/nipype/pipeline/engine/utils.py", line 1044, in format_dot
    raise IOError("Cannot draw directed graph; executable 'dot' is unavailable")
IOError: Cannot draw directed graph; executable 'dot' is unavailable
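A defensive check before calling write_graph would replace the opaque IOError with an actionable message. A minimal sketch, assuming the helper name (the real fix is installing Graphviz in the Docker image):

```python
import shutil

def graphviz_available():
    """Return True if the Graphviz 'dot' executable is on PATH.

    nipype's write_graph() shells out to 'dot', so checking first lets
    us emit a clear message (or skip graph generation) instead of
    crashing with an IOError deep inside nipype.
    """
    return shutil.which("dot") is not None

if not graphviz_available():
    print("Graphviz 'dot' not found; install it (e.g. apt-get install graphviz) "
          "or skip graph generation.")
```
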

Revisit EPI preprocessing workflow

Should be closed with #79

Implement the following procedures:

  • Coregister the corrected SBRef to the mean EPI: Current co-registration using MCFLIRT seems to be rather inaccurate.
  • Perform EPI unwarping before HMC (head-motion-correction)

Naming scheme of the outputs should follow BIDS Derivatives

See here for details: https://docs.google.com/document/d/1Wwc4A6Mow4ZPPszDIWfCUCRNstn7d_zzaWPcfcHmgI4/edit

EDIT (@oesteban): The final product of this issue comprises two new tests. One will apply to the ds005-type workflow and the other to the ds054-type workflow. The tests will hard-code the desired outputs and their names; after running the one-subject smoke tests in CircleCI, we will check that all the prescribed outputs are present with names fulfilling BIDS-Derivatives.

For instance, the prescribed outputs for ds005 are:

ds005-out/
   derivatives/
       sub-01/
            anat/
                sub-01_T1w_brainmask.nii.gz
                sub-01_T1w_space-MNI152lin_brainmask.nii.gz
                sub-01_T1w_inu.nii.gz
                sub-01_T1w_dtissue.nii.gz
                sub-01_T1w_space-MNI152lin.nii.gz
                sub-01_T1w_target-MNI152lin_affine.mat
                sub-01_T1w_space-MNI152lin_class-CSF_probtissue.nii.gz
                sub-01_T1w_space-MNI152lin_class-GM_probtissue.nii.gz
                sub-01_T1w_space-MNI152lin_class-WM_probtissue.nii.gz
            func/
                sub-01_task-mixedgamblestask_run-01_bold_confounds.tsv
                sub-01_task-mixedgamblestask_run-01_bold_target-T1w_affine.txt
                sub-01_task-mixedgamblestask_run-01_bold_brainmask.nii.gz
                sub-01_task-mixedgamblestask_run-01_bold_space-MNI152lin.nii.gz
                sub-01_task-mixedgamblestask_run-01_bold_space-MNI152lin_brainmask.nii.gz
                sub-01_task-mixedgamblestask_run-01_bold_space-T1w.nii.gz
                sub-01_task-mixedgamblestask_run-01_bold_T1w-target-meanBOLD_affine.txt
                sub-01_task-mixedgamblestask_run-01_bold_hmc.nii.gz
                sub-01_task-mixedgamblestask_run-02_bold_confounds.tsv
                sub-01_task-mixedgamblestask_run-02_bold_target-T1w_affine.txt
                sub-01_task-mixedgamblestask_run-02_bold_brainmask.nii.gz
                sub-01_task-mixedgamblestask_run-02_bold_space-MNI152lin.nii.gz
                sub-01_task-mixedgamblestask_run-02_bold_space-MNI152lin_brainmask.nii.gz
                sub-01_task-mixedgamblestask_run-02_bold_space-T1w.nii.gz
                sub-01_task-mixedgamblestask_run-02_bold_T1w-target-meanBOLD_affine.txt
                sub-01_task-mixedgamblestask_run-02_bold_hmc.nii.gz
                sub-01_task-mixedgamblestask_run-03_bold_confounds.tsv
                sub-01_task-mixedgamblestask_run-03_bold_target-T1w_affine.txt
                sub-01_task-mixedgamblestask_run-03_bold_brainmask.nii.gz
                sub-01_task-mixedgamblestask_run-03_bold_space-MNI152lin.nii.gz
                sub-01_task-mixedgamblestask_run-03_bold_space-MNI152lin_brainmask.nii.gz
                sub-01_task-mixedgamblestask_run-03_bold_space-T1w.nii.gz
                sub-01_task-mixedgamblestask_run-03_bold_T1w-target-meanBOLD_affine.txt
                sub-01_task-mixedgamblestask_run-03_bold_hmc.nii.gz

This issue also includes generating the prescribed outputs for ds054 for the corresponding test.
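The prescribed-outputs check described above could be sketched as a simple filesystem scan (helper name and the truncated list of expected paths are illustrative only):

```python
import os

# A few of the prescribed ds005 outputs, relative to the derivatives root
# (illustrative subset of the full list above).
EXPECTED_DS005 = [
    "sub-01/anat/sub-01_T1w_brainmask.nii.gz",
    "sub-01/anat/sub-01_T1w_space-MNI152lin.nii.gz",
    "sub-01/func/sub-01_task-mixedgamblestask_run-01_bold_confounds.tsv",
]

def missing_outputs(derivatives_dir, expected):
    """Return the prescribed outputs that are absent from derivatives_dir."""
    return [rel for rel in expected
            if not os.path.isfile(os.path.join(derivatives_dir, rel))]
```

In CircleCI, the test would run after the one-subject smoke test and assert that `missing_outputs(...)` returns an empty list.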

BIDS reader function

Write a function that, given a subject ID, returns a dict with the full BIDS information about that subject (not only files, but also parameters).
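A minimal sketch of such a reader, with all names hypothetical (a real implementation would use a dedicated BIDS parsing library): it walks the subject's directory and pairs each data file with the acquisition parameters from its JSON sidecar.

```python
import json
from pathlib import Path

def get_subject_data(bids_root, subject_id):
    """Return {'files': [...], 'parameters': {sidecar_name: dict}}
    for one subject of a BIDS dataset (simplified sketch)."""
    sub_dir = Path(bids_root) / f"sub-{subject_id}"
    info = {"files": [], "parameters": {}}
    for path in sorted(sub_dir.rglob("*")):
        if path.is_file():
            info["files"].append(str(path))
            if path.suffix == ".json":
                # Sidecar JSON carries acquisition parameters (TR, TE, ...).
                info["parameters"][path.name] = json.loads(path.read_text())
    return info
```
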

New output: EPI 4D volumes in MNI space (but using original resolution)

I had some discussions with NeuroSpin people and came to the conclusion that, in addition to saving the unwarped and motion-corrected 4D EPI volumes, it would also be useful to save the normalized (transformed into MNI space) versions.

The main thing to keep in mind is that even though those outputs should be in the MNI space they should keep the resolution (voxel sizes) of the original EPI data (instead of the 2x2x2 or 1x1x1mm of the MNI template). This way we would not waste much space.

I think there should be a relatively easy way to apply the transforms (the affine going from SBRef to T1w and the nonlinear going from T1w to MNI) to the EPI data while forcing the output resolution. It would be good to avoid creating an intermediate file at the MNI template resolution that is then downsampled.
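With ANTs this can be done in a single interpolation by passing a reference image that lies on the MNI grid but keeps the EPI voxel size (e.g. the MNI template resampled to the EPI spacing beforehand). A sketch that merely assembles the antsApplyTransforms call; all file names are hypothetical:

```python
# Sketch: one-shot resampling of the 4D EPI into MNI space at native resolution.
# Transforms are listed in the order ANTs expects (last-applied first):
# nonlinear T1w->MNI warp, then affine SBRef->T1w. Because both are composed
# before a single interpolation, no intermediate file at template resolution
# is ever written.
def build_apply_transforms_cmd(epi_4d, mni_ref_epires, warp_t1_to_mni,
                               affine_sbref_to_t1, out_file):
    return [
        "antsApplyTransforms",
        "-e", "3",              # input-image-type 3 = time series (4D)
        "-i", epi_4d,
        "-r", mni_ref_epires,   # defines output grid: MNI space, EPI voxel size
        "-t", warp_t1_to_mni,
        "-t", affine_sbref_to_t1,
        "-o", out_file,
    ]
```

The command list would then be handed to subprocess.run (or wrapped in a nipype interface) inside the workflow.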

Change workflow name: se_pair_workflow

Use a more appropriate name.

The name se_pair_workflow happens to be incorrect. In the AA database, the fieldmap is actually computed from 8 images (four pairs of SE acquisitions). We should think of a different name for all of this, because pair is particularly misleading: it is easily confused with computing the fieldmap from a pair of GRE acquisitions (#20) and looking at the phase difference. The latter case is a much better fit for the word pair, since that fieldmap is always computed with two images (you could use more, but I don't think anybody does).

In the other case (our current se_pair_workflow), it is appropriate to have more than two images; results improve significantly.

Generate fieldcoef image from fieldmap

Calculate the B-spline coefficients image from the fieldmap image in the GRE-phasediff workflows (required by #20). There is a corresponding thread on FSL's mailing list.

The intended header is like:

sizeof_hdr      : 348
data_type       : 
db_name         : 
extents         : 0
session_error   : 0
regular         : r
dim_info        : 0
dim             : [ 3 46 53 35  1  1  1  1]
intent_p1       : 2.10000014305
intent_p2       : 2.10000014305
intent_p3       : 2.10000014305
intent_code     : <unknown code 2016>
datatype        : float32
bitpix          : 32
slice_start     : 0
pixdim          : [ 1.  2.  2.  2.  1.  0.  0.  0.]
vox_offset      : 0.0
scl_slope       : nan
scl_inter       : nan
slice_end       : 0
slice_code      : unknown
xyzt_units      : 10
cal_max         : 0.0
cal_min         : 0.0
slice_duration  : 0.0
toffset         : 0.0
glmax           : 0
glmin           : 0
descrip         : FSL5.0
aux_file        : 
qform_code      : scanner
sform_code      : scanner
quatern_b       : 0.0
quatern_c       : 0.0
quatern_d       : 0.0
qoffset_x       : 86.0
qoffset_y       : 100.0
qoffset_z       : 64.0
srow_x          : [  1.   0.   0.  86.]
srow_y          : [   0.    1.    0.  100.]
srow_z          : [  0.   0.   1.  64.]
intent_name     : 
magic           : n+1

When the field image is:

sizeof_hdr      : 348
data_type       : 
db_name         : 
extents         : 0
session_error   : 0
regular         : r
dim_info        : 0
dim             : [  3  86 100  64   1   1   1   1]
intent_p1       : 0.0
intent_p2       : 0.0
intent_p3       : 0.0
intent_code     : <unknown code 2018>
datatype        : float32
bitpix          : 32
slice_start     : 0
pixdim          : [-1.          2.10000014  2.10000014  2.10000014  1.          0.          0.
  0.        ]
vox_offset      : 0.0
scl_slope       : nan
scl_inter       : nan
slice_end       : 0
slice_code      : unknown
xyzt_units      : 10
cal_max         : 0.0
cal_min         : 0.0
slice_duration  : 0.0
toffset         : 0.0
glmax           : 0
glmin           : 0
descrip         : FSL5.0
aux_file        : 
qform_code      : scanner
sform_code      : scanner
quatern_b       : 0.0
quatern_c       : 0.999048233032
quatern_d       : 0.0436193868518
qoffset_x       : 90.2999801636
qoffset_y       : -108.217712402
qoffset_z       : -57.9707260132
srow_x          : [ -2.10000014   0.           0.          90.29998016]
srow_y          : [   0.            2.09200907   -0.18302707 -108.2177124 ]
srow_z          : [  0.           0.18302707   2.09200907 -57.97072601]
intent_name     : 
magic           : n+1

Missing output nodes

Nodes to be hooked up to the datasink:

  • EPI skull-stripped mask
  • T1 skull-strip
  • T1 transformations to MNI (forward and inverse)
  • Motion-corrected, unwarped EPI in SBRef space
  • Affine SBRef-to-T1 transformation
  • Estimated motion parameters (a text file containing translations and rotations)
  • Bias-field-corrected T1
  • Segmentation of the T1

Parcellation // Supplementary data

For now, there is only one data resource (a brain parcellation) that does not belong to the FSL package and is used throughout the workflow.

I assume (@craigmoodie can correct me if I'm wrong) that the user should have the possibility to change this file, but generally, the default one will be used.

@chrisfilo: what do you think about having a data/ or resources/ folder where we keep and distribute these files? Right now we would place there a parcellation that is publicly available through NeuroVault. We would need a LICENSE file in that folder, indicating the appropriate licensing for each of the distributed data files.

Otherwise, a systematic solution to these data requirements should be defined.

Once this decision is made, we would close this issue when @craigmoodie places this file where we have decided.

Extracting the fieldmap preparation workflows

  • Create a new fmriprep.workflows.fieldmap module
  • Create a new function that returns a workflow to calculate the undistorted fieldmap from two SE fieldmap acquisitions
  • Create a new function that computes the distortion and the VSM (voxel shift map) from an input fieldmap. This is probably doable by just importing one of the existing nipype workflows.

Making the workflow robust to differences in acquisition protocols

[i] Right now the preprocessing workflow can only handle inputs from acquisitions that follow the HCP scanning protocol. However, most studies, and especially older ones, will have used a traditional double-echo fieldmap approach. Hence, the workflow needs to be able to handle both approaches.

[ii] It would also be good to test different types of fieldmaps, such as GRE, SE and spiral fieldmaps.

[iii] Lastly, it would be good to build in a contingency for when a protocol has no fieldmaps, or when fieldmaps are missing for some subjects.

PDF Reports (HCP style)

PDF reports must include:

  • Skull-stripping (overlay of a translucent mask over the designated image)
    • T1
    • SBRef
    • Fieldmap magnitude image
  • Image registration outputs:
    • EPI-to-SBRef (overlay of SBRef contours over EPI)
    • T1-to-SBRef (overlay of T1 segmentation contours over SBRef)
    • T1-to-MNI (overlay of T1 segmentation contours over MNI template)
  • Unwarping / Fieldmaps
    • T1 contours over the unwarped-EPI
    • SE Pairs only: contours of the magnitude image in the two PE directions over EPI
  • Head motion
    • Frame displacement plot
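The framewise-displacement plot could be backed by the standard Power-style summary: the sum of absolute frame-to-frame differences of the six motion parameters, with rotations (in radians) converted to arc length on a 50 mm sphere. A stdlib-only sketch, with the function name hypothetical:

```python
def framewise_displacement(params, head_radius=50.0):
    """Power-style framewise displacement from per-frame motion parameters.

    params: list of (tx, ty, tz, rx, ry, rz) tuples, one per frame;
    translations in mm, rotations in radians. Returns a list of
    length len(params) - 1, one FD value per frame transition.
    """
    fd = []
    for prev, cur in zip(params, params[1:]):
        # Translational displacement: sum of absolute differences (mm).
        d_trans = sum(abs(c - p) for c, p in zip(cur[:3], prev[:3]))
        # Rotational displacement: arc length on a head_radius-mm sphere.
        d_rot = sum(abs(c - p) * head_radius for c, p in zip(cur[3:], prev[3:]))
        fd.append(d_trans + d_rot)
    return fd
```

The resulting series would then be plotted per run as the "frame displacement plot" in the report.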

Make all imports absolute

@shoshber has started on this, and I see that the recommendation is to always use absolute imports. I apologize, @rwblair, because at some point you started using them and I rolled all those changes back; I should have checked the recommendations first.
