
nistats's Introduction

Nistats is being retired.

This repository is now archived.

It will not receive any further development or bug fixes. Its functionality has now been incorporated into Nilearn's stats, datasets, and reporting modules.

It will be available from Nilearn 0.7.0 onwards, set for release in late 2020. Please file issues and pull requests in Nilearn from now on.

Credit for the various pull requests that were merged into Nistats is now visible in Nilearn. Open issues have been moved to Nilearn. Open PRs will need to be redone.

Nistats

Nistats is a Python module for fast and easy modeling and statistical analysis of functional Magnetic Resonance Imaging data.

It leverages the nilearn Python toolbox for neuroimaging data manipulation (data downloading, visualization, masking).

This work is made available by a community of people, among which are the INRIA Parietal Project Team and the D'Esposito lab at Berkeley.

It is based on developments initiated in the nipy project.


Dependencies

The required dependencies to use the software are:

  • Python >= 2.7
  • setuptools
  • Numpy >= 1.11
  • SciPy >= 0.17
  • Nibabel >= 2.0.2
  • Nilearn >= 0.4.0
  • Pandas >= 0.18.0
  • Scikit-learn >= 0.18.0

If you are using nilearn plotting functionalities or running the examples, matplotlib >= 1.5.1 is required.

If you want to run the tests, you need nose >= 1.2.1 and coverage >= 3.7.

If you want to download OpenNeuro datasets, Boto3 >= 1.2 is required.

Install

The installation instructions are found on the webpage: https://nistats.github.io/

Development

Code

GIT

You can check the latest sources with the command:

git clone git://github.com/nistats/nistats

or if you have write privileges:

git clone [email protected]:nistats/nistats

nistats's People

Contributors

adelavega, agramfort, alpinho, bthirion, chrplr, dohmatob, effigies, gaelvaroquaux, gifuni, hcherkaoui, jdkent, jeromedockes, josephsmann, kamalakerdadi, kchawla-pi, lesteve, martinperez, mih, peerherholz, rschmaelzle, salma1601, takhs91, thechymera, thompsonj, tsalo


nistats's Issues

openfmri dataset agnostic example

I will create an example that downloads some openfmri dataset, or takes a local one from a given directory, and estimates the GLM as in the other examples.

[BUG] strange estimation artifacts outside of the EPI

I think there is an artifact when we try to estimate a model on voxels outside of the EPI. Typically the cerebellum is in the brain mask but not in the EPI. I am not sure how SPM deals with this, but we certainly should handle it.

Our case: [image: sub-01_task-languagelocalizer_language-string]

SPM: [image: language_-_string_glass]

[enhancement] Integrate frame_times into design matrix

One idea I would add is to use the index of a DataFrame rather than a separate frame_times argument (I don't like that name much, by the way):

In [4]: df = pd.DataFrame(dict(reg1=np.random.randn(10), reg2=np.random.randn(10)))

In [5]: df
Out[5]:
       reg1      reg2
0  0.435177 -1.034931
1 -0.059424  2.223872
2  0.070459 -0.838222
3 -0.777904 -0.690715
4 -0.782690 -0.045528
5  0.278254  0.388872
6  0.867945  0.242028
7 -2.214093  0.866546
8 -0.186639 -0.626605
9 -0.635547  1.209869

In [6]: tr = 2.4

In [7]: frame_times = np.arange(10) * 2.4

In [8]: df.index = frame_times

In [9]: df
Out[9]:
          reg1      reg2
0.0   0.435177 -1.034931
2.4  -0.059424  2.223872
4.8   0.070459 -0.838222
7.2  -0.777904 -0.690715
9.6  -0.782690 -0.045528
12.0  0.278254  0.388872
14.4  0.867945  0.242028
16.8 -2.214093  0.866546
19.2 -0.186639 -0.626605
21.6 -0.635547  1.209869

In [10]: matrix = df.values

In [11]: matrix
Out[11]:
array([[ 0.43517668, -1.03493107],
       [-0.05942358,  2.22387234],
       [ 0.07045882, -0.83822237],
       [-0.77790391, -0.69071475],
       [-0.78269037, -0.04552835],
       [ 0.27825373,  0.38887175],
       [ 0.8679447 ,  0.2420283 ],
       [-2.21409337,  0.8665459 ],
       [-0.18663918, -0.62660464],
       [-0.63554651,  1.20986888]])
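The proposal above can be taken one step further: a design-matrix builder could accept a single DataFrame and recover the frame times from its index, so the two never travel separately. A minimal sketch in plain pandas (not an actual nistats API):

```python
import numpy as np
import pandas as pd

# Regressors indexed by acquisition time, as proposed above.
rng = np.random.RandomState(0)
t_r = 2.4
frame_times = np.arange(10) * t_r
df = pd.DataFrame(
    {"reg1": rng.randn(10), "reg2": rng.randn(10)},
    index=frame_times,
)

# A builder could then recover both pieces from the one object:
times = np.asarray(df.index)   # the frame times
matrix = df.values             # the regressor values, shape (10, 2)
```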

plot_spm_auditory typo

The example does not work because of a wrong output_type name:

eff_map = fmri_glm.compute_contrast(contrasts[contrast_id], 
    contrast_name=contrast_id, output_type='eff')
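For context, 'eff' is not among the output_type values compute_contrast accepts; the keywords documented in later nistats releases are 'z_score', 'stat', 'p_value', 'effect_size' and 'effect_variance'. A small guard illustrating the check (the keyword set is taken from the docs; the helper itself is hypothetical):

```python
# output_type keywords as documented for compute_contrast in later
# nistats releases.
VALID_OUTPUT_TYPES = {"z_score", "stat", "p_value",
                      "effect_size", "effect_variance"}

def check_output_type(output_type):
    """Fail early with a readable message instead of deep inside the GLM."""
    if output_type not in VALID_OUTPUT_TYPES:
        raise ValueError(
            "output_type must be one of %s, got %r"
            % (sorted(VALID_OUTPUT_TYPES), output_type)
        )
    return output_type
```

With that set, the failing call above would use output_type='effect_size' rather than 'eff'.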

optimal design for given contrast

Would it be desirable to implement, in compute_contrast, an option to compute the optimal design accordingly? Particularly when combining this with permutation testing. Considering Smith 2007.

beta/contrast maps for constant

Hello,
I am trying to write out a nifti-image for the contrast of the "constant" (i.e. the last regressor in the design matrix consisting of only ones). My code for the contrasts looks like this:

z_map, t_map, eff_map, var_map = fmri_glm.transform(
    contrast_val, contrast_name=contrast_id, output_z=True,
    output_stat=True, output_effects=True, output_variance=True)

eff_map_image_path = path.join(target_dir, subjs[curr_sub] + '_' + contrast_id + '_eff_map_map.nii')
nib.save(eff_map, eff_map_image_path)  # and same for t_map, var_map, etc.

Looping this over my sample, I produce a lot of contrasts and their respective z-, t-, and eff-images, which all look pretty reasonable. But the constant does not. In particular, my assumption was that the eff_map would correspond to the 'beta' images, and that the beta image for the constant would represent basically the mean across time (although different packages use weird scaling that I don't fully understand). However, instead of finding values around 100, 400, or even 2000 (depending on the raw values from the scanner), the eff_map images for the constant have positive and negative values. It seems I am doing something very wrong here, but I could not find a solution by going through the code. So basically the question is: does the eff_map represent what I think of as beta (or the contrast, which in the case of the constant should be identical)? Is there anything else to consider (in setting up the fmri_glm I am setting standardize to False)?

Perhaps I should say that my ultimate goal is to extract data from ROIs (for which I'd want the interesting effect and the constant image), which I'd do via nilearn (and also second-level modeling using the mass-univariate example therein).
Thanks, Ralf.
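One way to sanity-check the expectation in the question: with standardize=False, the OLS coefficient for the constant equals the voxel's temporal mean only when the other regressors are mean-centered; otherwise part of the mean is absorbed elsewhere. A toy numpy sketch (synthetic numbers, not nistats code):

```python
import numpy as np

rng = np.random.RandomState(42)
n_scans = 100
task = rng.randn(n_scans)
task -= task.mean()                            # mean-centered task regressor
X = np.column_stack([task, np.ones(n_scans)])  # design: [task, constant]
y = 400.0 + 2.0 * task + rng.randn(n_scans)    # raw-scale voxel time course

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# Because the centered task column is orthogonal to the constant,
# beta[1] equals the temporal mean of y:
assert np.isclose(beta[1], y.mean())
```

If the convolved regressors in the actual design are not centered, or the data are rescaled before fitting, the constant's effect map will not sit near the raw mean, which may explain the signed values observed.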

Parametric modulation - differences between packages

Hi,
we are contemplating switching more and more to nistats and are thus checking for correspondence between nistats results and existing methods (Matlab, SPM, ...). So far the results have been very promising; for instance, I'm finding a correlation of >.96 between an SPM ROI result across a sample of 20-something participants and the nistats results.

However, I am running into trouble when looking at a parametric modulation. I am using a block regressor for modeling some visual response, and then a separate parametric modulator (in my test case it is just an ascending z-transformed vector that ranges from 1 to nstimuli; see the examples of the design matrix). Here the correlation between SPM and nistats drops to only .65-.7. Comparing the design matrices and regressors that get built shows that they are pretty much identical. It seems that the diverging results come in somewhere in how the model gets estimated (collinearity, orthogonalization, type of regression, etc.?). Before going down rabbit holes here, I wanted to ask if anyone had a quick idea or (ideally) knew some command that would bring the results into correspondence.
Thanks, Ralf.

Design matrix container

It would be useful to have a dictionary describing the parameters used to build the design matrix, to be able to reuse it in permutation tests and reports.

Mean scaling conventions and options

I am not sure about all the implications of mean scaling the signal by voxel or globally, but since nistats changes SPM's conventions here, maybe it would be a good idea to expose this as an option instead of forcing the user to follow a different convention? Also for the sake of replicability of SPM results.

In glm.py the documentation has:
Y : array of shape(n_time_points, n_voxels)
and then, at line 54:
mean = Y.mean(0)

What do you think?
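To make the two conventions concrete, here is a small numpy sketch of voxel-wise percent-signal scaling (the mean = Y.mean(0) line mirrors the glm.py snippet quoted above; the exact formulas are illustrative) next to a single grand-mean factor closer to SPM's default:

```python
import numpy as np

rng = np.random.RandomState(0)
Y = 100.0 + 100.0 * rng.rand(20, 5)   # (n_time_points, n_voxels), raw scale

# Voxel-wise scaling: each voxel expressed as % change around its own mean.
mean = Y.mean(0)
Y_voxelwise = 100.0 * (Y / mean - 1.0)

# Grand-mean scaling: one scalar factor for the whole run (SPM-like).
grand_mean = Y.mean()
Y_global = 100.0 * Y / grand_mean
```

The voxel-wise form removes each voxel's baseline entirely, while the global form preserves between-voxel baseline differences, which is exactly why results diverge between the two conventions.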

conda installer

Not a priority, but it would be nice; nibabel and nilearn are already accessible by adding the conda-forge channel to a Miniconda distribution. I would propose to imitate them.

handle first to second level masking cases

I realized we have not considered certain cases, like not masking at the first level, or certain assumptions about masking at the first and then the second level. Since the first level object is passed to the second level object, things like applying the second level mask to first level models could be implemented. I think we should discuss the details around this.

[BUG] slice_time_ref should be optional


ValueError                                Traceback (most recent call last)
/usr/lib/python2.7/dist-packages/IPython/utils/py3compat.pyc in execfile(fname, *where)
    202         else:
    203             filename = fname
--> 204         __builtin__.execfile(filename, *where)

/home/parietal/bthirion/spatial_definition.py in <module>()
     68 # GLM specification and fit
     69 fmri_glm = FirstLevelModel(mask, standardize=False, noise_model='ols')
---> 70 fmri_glm.fit(func_filename, df)
     71
     72 # contrast

/home/parietal/bthirion/.local/lib/python2.7/site-packages/nistats-0.1.0.dev0-py2.7.egg/nistats/first_level_model.pyc in fit(self, run_imgs, paradigms, confounds, design_matrices)
    333                              ' to compute design from paradigm')
    334         if self.slice_time_ref is None:
--> 335             raise ValueError('slice_time_ref not given to FirstLevelModel'
    336                              ' object to compute design from paradigm')
    337         else:

ValueError: slice_time_ref not given to FirstLevelModel object to compute design from paradigm

The docstring says it defaults to 0.

API: paradigm should be dataframes

Looking at the code in experimental_paradigm.py:

  • paradigms objects are really data containers (and fairly simple ones)
  • the code in the module is for I/O of these to CSV files, and for checking their consistency
  • there are 2 types of paradigms, event and block
  • paradigms are defined by a small number of named lists that have the same number of entries:
    • event: cond_id, onset, amplitude
    • block: cond_id, onset, duration and amplitude

All this makes me think that paradigm objects could be replaced by pandas DataFrames with the corresponding entries, plus a few functions, in particular something like a '_check_paradigm' for our own internal usage, which would take a pandas DataFrame, validate it as a block or event paradigm, and raise an understandable error if it is not valid. Pandas would provide the reading and writing of CSVs.

Getting at residuals after FIR-model-regression

Hi all,
I am not sure this is the right place to post (vs. neurostars?) since my question is somewhere between a help call and an enhancement/documentation request; probably more of the former. I am trying to regress out task-related effects from fMRI data via FIR, in order not to have to worry too much about wrong hemodynamics. I managed to set up the design matrix and estimate it:

hrf_model = 'FIR'  # 'Canonical With Derivative'
design_matrix = make_design_matrix(
    frame_times, paradigm, drift_model='polynomial', drift_order=3,
    hrf_model=hrf_model, add_regs=motion, add_reg_names=reg_names,
    fir_delays=range(1, 5))
plot_design_matrix(design_matrix)
fmri_glm = FirstLevelGLM().fit(fmri_img, design_matrix)

Now I am stuck trying to get at the residuals. Browsing through the examples I can get the contrasts (the FIR results would be interesting per se), but how would I proceed to do something along the lines of
resid = y_act - y_hat

then convert resid into a 4D NIfTI image and continue with e.g. nilearn to extract data from nodes?
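On the masked data, the computation asked about is just the OLS residuals; a numpy sketch (names illustrative, not the nistats API):

```python
import numpy as np

def glm_residuals(Y, X):
    """Residuals of an OLS fit: Y is (n_scans, n_voxels), X the design matrix."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    y_hat = X @ beta
    return Y - y_hat   # resid = y_act - y_hat

# The resulting (n_scans, n_voxels) array can then be turned back into a
# 4D image with a NiftiMasker's inverse_transform and fed to nilearn.
```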
Nistats looks really super promising! Thanks!
Ralf.

Failure fetching nipy data

Data absent, downloading...
Extracting data from /Users/alex/nilearn_data/nipy-data-0.2.tar.gz...
Error uncompressing file: CRC check failed 0x23fd1d03 != 0x5f3d7a64L
Archive corrupted, trying to download it again.

Extreme memory consumption

I got the impression on my own datasets that memory consumption was extreme. I was consuming around 22 GB with only 5 runs, corresponding to one hour of acquisition with 1.5 mm voxels, and started to get memory errors from other programs like ANTS, which is also memory hungry.

So I started profiling memory consumption of the fmri fit within the examples:

Here is the memory consumption for the simple localizer example, which loads a 30 MB NIfTI file: the fMRI fit explodes to 500 MB after having only 100 MB in memory.

[memory profile plot: nistats_memprofile_30-05-2016]

Here is the profile for the FIAC analysis example:

[memory profile plot: nistats_memprofile2_30-05-2016]

I think this has to be handled with high priority. My latest dataset, with 16 runs over 2 sessions, will likely crash. Will test it soon.

Redesign GLM API so that it behaves like a sklearn/nilearn estimator

The idea is the following:

class FmriLinearModel(sklearn.BaseEstimator, sklearn.TransformerMixin, nilearn.CacheMixin):
    def __init__(self, **masker_parameters):
        """Just handle and check masker parameters."""

    def fit(self, X, fmri_data):
        """Note: X is the design matrix!
            1. does a masker job: fmri_data -> Y
            2. fits an OLS regression to (Y, X)
            3. fits an AR(1) regression if required
        This results in internal (labels_, regression_results_) parameters.

        Note: handles lists of design matrices and fMRI datasets.
        """

    def transform(self, contrast, output_type='z_map'):
        """Generate the different outputs corresponding to the contrasts
        provided, e.g. z_map, t_map, effects and variance. In the
        multi-session case, output the fixed-effects map.
        """

    def fit_transform(self, X, fmri_images, contrast, output_type='z_map'):
        """Fit, then transform."""
Any comments on this design?
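To feel out the proposed fit/transform split, here is a numpy-only mock of the interface (arrays stand in for images and the masker step; nothing here is the real nistats or nilearn API):

```python
import numpy as np

class FmriLinearModel:
    """Toy stand-in for the proposed estimator: OLS fit, contrast transform."""

    def fit(self, X, data):
        # `data` plays the role of already-masked fMRI: (n_scans, n_voxels).
        self.X_ = np.asarray(X, dtype=float)
        self.beta_, *_ = np.linalg.lstsq(
            self.X_, np.asarray(data, dtype=float), rcond=None)
        return self

    def transform(self, contrast, output_type="effect"):
        if output_type != "effect":
            raise NotImplementedError(output_type)  # z/t maps omitted here
        return np.asarray(contrast, dtype=float) @ self.beta_

    def fit_transform(self, X, data, contrast, **kwargs):
        return self.fit(X, data).transform(contrast, **kwargs)

# Exact linear data: betas are [[10, 5], [2, -1]] (intercept row, slope row).
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
data = X @ np.array([[10.0, 5.0], [2.0, -1.0]])
effect = FmriLinearModel().fit(X, data).transform([0.0, 1.0])  # slope contrast
```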

Nipype interfaces

Do you plan to provide nipype interfaces for your functions/classes (e.g. FirstLevelGLM)?

I am trying to decide whether I should stick to nipype all the way (and force everything - e.g. nistats - into its mold) or move away from it after the preprocessing...

nosetests falls into an infinite try/except loop

Running nosetests nistats yields an infinite loop of:

Data absent, downloading...
Error uncompressing file: [Uncompress] unknown archive file format: /home/arthur/nilearn_data/spm_auditory/sub001/MoAEpilot.zip
Archive corrupted, trying to download it again.
fM00223/fM00223_004.img missing from filelist!

I reckon there is an invalid URL there, and I guess retrying infinitely is not a desired behaviour.

handle irregular frame times

Would it be useful to not assume frame times from slice_time_ref and t_r, but to allow them to be passed directly?
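For illustration, irregular frame times are easy to construct and could be handed straight to make_design_matrix (which already takes frame_times as its first argument), bypassing the t_r/slice_time_ref reconstruction; the gap values below are made up:

```python
import numpy as np

# Per-volume acquisition gaps in seconds (e.g. a sparse or jittered sequence).
gaps = np.array([2.0, 2.0, 2.5, 2.0, 3.0])
frame_times = np.concatenate([[0.0], np.cumsum(gaps)])
# frame_times is now [0.0, 2.0, 4.0, 6.5, 8.5, 11.5] -- one entry per volume,
# with no constant t_r assumed.
```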

masker and GLM memcache inefficient for first level models

Testing memcache on my datasets, I realized that it takes too much time due to the expensive checking of the images given by the masker, since the masker and the GLM are naively memcached separately. Moreover, this is inefficient in terms of disk space and I/O cost for our purposes.

I think we should just cache the final output (results_, labels_), considering only the necessary variables (from the first level model object's parameters and its fit method).

What do you think?

Add more examples of GLM fit

Based on public datasets. Try to vary the settings (block, event, phase-encoding, multi-session, possibly multi-subject).
