nilearn / nilearn
Machine learning for NeuroImaging in Python
Home Page: http://nilearn.github.io
License: Other
An error occurs in the downloader when trying to fetch a dataset: the join function used to build the URLs expects its parameters in a single array, whereas the code passes several separate parameters.
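For illustration, here is a minimal sketch of the likely failure mode, assuming the downloader builds URLs with str.join (the example URL and components are ours, not taken from the fetching code):

```python
# str.join expects a single iterable argument; passing the path components
# as separate arguments raises a TypeError instead of building the URL.
parts = ["http://example.com", "data", "archive.tar.gz"]

url = "/".join(parts)  # correct: one list argument
print(url)

try:
    "/".join("http://example.com", "data")  # wrong: several arguments
except TypeError as exc:
    print("TypeError:", exc)
```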
Nilearn contains a lot of concepts that need to be explained (e.g. niimg(s)). Adding a glossary to the documentation could be a good idea.
It would also be a means to improve the naming conventions in the code.
The website has not been rebuilt since the transition from nisl to nilearn, and many references to nisl are still there. I'll rebuild the website ASAP.
I have put a <link rel="canonical" href="..."> tag in the header of the nisl pages. Once we see in Google's query results that they point to the nilearn website, I suggest we delete the nisl website completely and replace it with a simple redirection.
The buttons on the top should be:
pointing to the respective parts of the tutorial. @jaquesgrobler, I think that you know this aspect of the doc generation reasonably well. Can you take care of this?
I don't know if a mailing list is needed for the moment but there is absolutely no information about how to contact us in case of problems.
Here are a few suggestions to reorganize a bit the code layout. I am opening this issue for discussion.
Just to confirm, do I read the code correctly that the radius parameter is expressed in voxels and not in mm, since it is based on mask_indices and sklearn.neighbors.NearestNeighbors? In that case, I don't understand why the radius parameter is expected to be a float.
https://github.com/nisl/tutorial/blob/master/nisl/searchlight.py#L183
I would like to file a PR to clarify the docstring, but first wanted to check with you guys.
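If the radius is indeed in voxel units, a docstring could also show how to convert a radius given in mm. Here is a hedged sketch of such a conversion; the helper name and the isotropic-voxel simplification are ours, not nilearn's:

```python
import numpy as np

def radius_mm_to_voxels(radius_mm, affine):
    # Voxel size along each axis is the column norm of the affine's
    # 3x3 rotation/zoom block; we average them, assuming near-isotropy.
    voxel_size = np.sqrt((affine[:3, :3] ** 2).sum(axis=0))
    return radius_mm / voxel_size.mean()

affine = np.diag([3.0, 3.0, 3.0, 1.0])  # 3 mm isotropic voxels
print(radius_mm_to_voxels(6.0, affine))  # a 6 mm radius is 2 voxels here
```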
In [4]: run plot_rest_clustering.py
Downloading data from http://www.nitrc.org/frs/download.php/1071/NYU_TRT_session1a.tar.gz ...
Downloaded 697388519 of 697388519 bytes (100.00%, 0.0s remaining)
...done. (423 seconds, 7 min)
extracting data from /Users/alex/work/src/nisl/tutorial/nisl_data/nyu_rest/NYU_TRT_session1a.tar.gz...
...done.
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/site-packages/IPython/utils/py3compat.pyc in execfile(fname, *where)
176 else:
177 filename = fname
--> 178 __builtin__.execfile(filename, *where)
/Users/alex/work/src/nisl/tutorial/plot_rest_clustering.py in <module>()
19 import numpy as np
20 from nisl import datasets, io
---> 21 dataset = datasets.fetch_nyu_rest(n_subjects=1)
22 nifti_masker = io.NiftiMasker()
23 fmri_masked = nifti_masker.fit_transform(dataset.func[0])
/Users/alex/work/src/nisl/tutorial/nisl/datasets.pyc in fetch_nyu_rest(n_subjects, sessions, data_dir, verbose)
741 _fetch_dataset('nyu_rest', [url], data_dir=data_dir,
742 folder=session_path, verbose=verbose)
--> 743 files = _get_dataset("nyu_rest", paths, data_dir=data_dir)
744 for i in range(len(subjects)):
745 # We are considering files 3 by 3
/Users/alex/work/src/nisl/tutorial/nisl/datasets.pyc in _get_dataset(dataset_name, file_names, data_dir, folder)
416 full_name = os.path.join(data_dir, file_name)
417 if not os.path.exists(full_name):
--> 418 raise IOError("No such file: '%s'" % full_name)
419 file_paths.append(full_name)
420 return file_paths
IOError: No such file: '/Users/alex/work/src/nisl/tutorial/nisl_data/nyu_rest/session1/sub05676/anat/mprage_anonymized.nii.gz'
If one wants to compute a mask over several subjects with different affines, the NiftiMultiMasker cannot do it.
There are two solutions to this problem:
I think the second solution is the best even if it implies adding options to compute_multi_epi_mask.
What do you think ?
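To make the second option concrete: once each subject's mask has been resampled onto a common reference grid, the per-subject masks can be combined by thresholding the fraction of subjects in which each voxel is inside the mask. This is a sketch under our own naming (the threshold idea mirrors what compute_multi_epi_mask would need):

```python
import numpy as np

def intersect_masks(masks, threshold=0.5):
    # masks: boolean arrays already resampled to the same grid.
    # Keep a voxel if more than `threshold` of the subjects include it.
    stacked = np.stack([np.asarray(m, dtype=bool) for m in masks])
    return stacked.mean(axis=0) > threshold

m1 = np.array([[1, 1], [0, 1]])
m2 = np.array([[1, 0], [0, 1]])
print(intersect_masks([m1, m2]))  # True only where both subjects agree
```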
In MultiPCA, if the user provides a masker, we clone it. Unfortunately, in joblib 0.6, a bug prevents cloning a Memory object. This issue can be addressed by testing the joblib version (patch proposed by Philippe):
if isinstance(self.mask, NiftiMultiMasker) \
        and sklearn.externals.joblib.__version__.startswith("0.6"):
    # Dirty workaround for a joblib bug:
    # Memory with cachedir=None cannot be cloned in version 0.6
    # of joblib.
    masker_memory = self.mask.memory
    if masker_memory.cachedir is None:
        self.mask.memory = None
        self.masker_ = clone(self.mask)
        self.mask.memory = masker_memory
        self.masker_.memory = Memory(cachedir=None)
    else:
        self.masker_ = clone(self.mask)
    del masker_memory
else:
    self.masker_ = clone(self.mask)
but, unfortunately, this breaks Travis build:
======================================================================
ERROR: nisl.decomposition.tests.test_multi_pca.test_multi_pca
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/travis/build/nilearn/nilearn/nisl/decomposition/tests/test_multi_pca.py", line 44, in test_multi_pca
multi_pca.fit(data[:2])
File "/home/travis/build/nilearn/nilearn/nisl/decomposition/multi_pca.py", line 211, in fit
and sklearn.externals.joblib.__version__.startswith("0.6"):
AttributeError: 'module' object has no attribute '__version__'
I have made a dirty fix for the moment but we should fix this ASAP.
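One more defensive option, sketched here under our own naming rather than as the committed fix: do not assume `__version__` exists on the bundled joblib module at all, and fall back to an empty string.

```python
def joblib_is_buggy(joblib_module):
    # Older bundled joblib modules may lack __version__ entirely,
    # so use getattr with a default instead of raising AttributeError.
    version = getattr(joblib_module, "__version__", "")
    return version.startswith("0.6")

class FakeOldJoblib(object):
    pass  # no __version__ attribute, like the module on Travis

print(joblib_is_buggy(FakeOldJoblib()))  # False instead of a crash
```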
I have noticed that 3D images are sometimes stored as 4D images with a single frame. We should handle this kind of data.
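A minimal sketch of the normalization, on plain arrays (the helper name is ours; in practice this would live wherever images are checked on input):

```python
import numpy as np

def squeeze_single_frame(data):
    # Treat a 4D array with a single frame as a 3D volume by dropping
    # the trailing singleton time axis; leave other shapes untouched.
    if data.ndim == 4 and data.shape[-1] == 1:
        return data[..., 0]
    return data

vol = np.zeros((40, 64, 64, 1))
print(squeeze_single_frame(vol).shape)  # (40, 64, 64)
```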
The transform of NiftiMultiMasker could really use parallelism, as it is CPU-bound. The object should take an 'n_jobs' init parameter.
This should be done only after the merge is over.
The same applies to the multi-session mask-computation function...
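The per-subject loop is embarrassingly parallel, so an n_jobs parameter maps directly onto joblib. A sketch with hypothetical helpers (mask_one stands in for the per-image masking step; this is not nilearn's implementation):

```python
import numpy as np
from joblib import Parallel, delayed

def mask_one(img, mask):
    # Hypothetical stand-in for masking a single subject's image.
    return img[mask]

def transform_multi(imgs, mask, n_jobs=1):
    # Each image is processed independently, so the loop parallelizes
    # trivially with joblib.
    return Parallel(n_jobs=n_jobs)(
        delayed(mask_one)(img, mask) for img in imgs)

mask = np.array([True, False, True])
imgs = [np.array([1, 2, 3]), np.array([4, 5, 6])]
print(transform_multi(imgs, mask, n_jobs=2))
```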
Looks like the test file nilearn/tests/test_searchlight.py got lost during/after the nisl -> nilearn transition
I just tried to use the NiftiMasker on a 3D file, and it is complaining. Is there a reason why we are not accepting 3D files?
When a Nifti1Image is passed to a function, its representation contains a pointer to data in memory and therefore breaks the joblib cache system.
This is not really a joblib issue, nor a nibabel one, this is why I open it here.
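One way out, sketched here under our own naming, is to derive the cache key from the image's data buffer and affine rather than from its repr(), which embeds a memory address and so changes on every run:

```python
import hashlib
import numpy as np

def image_fingerprint(data, affine):
    # Stable fingerprint for an in-memory image: hash the raw data bytes
    # and the affine, so two identical copies hash identically.
    digest = hashlib.md5()
    digest.update(np.ascontiguousarray(data).tobytes())
    digest.update(np.ascontiguousarray(affine).tobytes())
    return digest.hexdigest()

data = np.zeros((2, 2, 2))
affine = np.eye(4)
same = image_fingerprint(data, affine) == image_fingerprint(data.copy(),
                                                            affine.copy())
print(same)  # True: copies share the fingerprint, unlike repr()
```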
Make automated builds of the doc and push them to the webpage.
Guys, I don't know what happened, but most images on the website are broken, e.g. http://nisl.github.io/auto_examples/plot_haxby_simple.html#example-tutorial-plot-haxby-simple-py
and also http://nisl.github.io/getting_started.html#data-loading-and-preprocessing
NeuroDebian provides datasets that can be installed as packages (haxby2001-faceobject for example).
By default, Nisl should be able to browse the folder where these data are stored (/usr/share/data on my computer) and load them if they are present.
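A minimal sketch of the lookup, with a helper name of our own choosing (/usr/share/data is the NeuroDebian location mentioned above):

```python
import os

def find_dataset_dir(dataset_name, search_paths=("/usr/share/data",)):
    # Look for an installed copy of the dataset before downloading;
    # return None so the caller can fall back to fetching it.
    for base in search_paths:
        candidate = os.path.join(base, dataset_name)
        if os.path.isdir(candidate):
            return candidate
    return None

print(find_dataset_dir("haxby2001-faceobject",
                       search_paths=("/nonexistent",)))  # None
```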
I just tried to run the first cell of Sec 2.1, but I'm getting this error:
In [8]:
from nisl import datasets
haxby_data = datasets.fetch_haxby()
# The data is then already loaded as numpy arrays:
haxby_data.keys()
haxby_data.data.shape
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-8-6127e8737e4d> in <module>()
3 # The data is then already loaded as numpy arrays:
4 haxby_data.keys()
----> 5 haxby_data.data.shape
AttributeError: 'Bunch' object has no attribute 'data'
That's because, in fact, the haxby_data object has different fields from what the docs say:
In [9]:
haxby_data
Out[9]:
{'conditions_target': '/home/fperez/talks/slides/1210_pycon_canada/nisl_data/haxby2001/pymvpa-exampledata/attributes_literal.txt',
'func': '/home/fperez/talks/slides/1210_pycon_canada/nisl_data/haxby2001/pymvpa-exampledata/bold.nii.gz',
'mask': '/home/fperez/talks/slides/1210_pycon_canada/nisl_data/haxby2001/pymvpa-exampledata/mask.nii.gz',
'session_target': '/home/fperez/talks/slides/1210_pycon_canada/nisl_data/haxby2001/pymvpa-exampledata/attributes.txt'}
Unfortunately I need to run out to catch a flight and can't debug this one now, but I figured at least I'd report it... This is running on a fresh install of nisl from just a moment ago.
The idea being that, for objects that are not 'masker' objects, you should be able to pass a 'Masker' object as the mask argument, and it would be used to compute the mask via its 'fit' method.
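A sketch of the dispatch this implies, with hypothetical names (DummyMasker and resolve_mask are ours): if the mask argument quacks like a masker, fit it; otherwise treat it as a plain mask array.

```python
import numpy as np

class DummyMasker(object):
    # Hypothetical stand-in for a Masker: fit() computes a mask.
    def fit(self, imgs):
        self.mask_img_ = np.mean(imgs, axis=0) > 0
        return self

def resolve_mask(mask, imgs):
    # Duck-typing dispatch: anything with a fit method is used to
    # compute the mask; anything else is coerced to a boolean array.
    if hasattr(mask, "fit"):
        return mask.fit(imgs).mask_img_
    return np.asarray(mask, dtype=bool)

imgs = np.array([[0.0, 1.0], [0.0, 3.0]])
print(resolve_mask(DummyMasker(), imgs))  # [False  True]
```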
Dear all,
My attempts to install nilearn have failed so far; I cannot seem to find what the problem is.
My system has Ubuntu Linux with python 2.7.3
(python3 and python3.2 also present but not default)
and
numpy v1.6.1
scipy v0.9.0
sklearn v0.14.1
matplotlib v1.1.1rc
The steps I have taken are:
But the messages I get (see attached) suggest that many library imports fail.
Does this mean that there still are unmet dependencies?
Many thanks for your help!
With best wishes
Alle Meije Wink
And backport that to scikit-learn.
3822.42 seconds on my machine, mostly because of Searchlight.
In any case, this makes it really painful to build the doc, and to the user it looks as if the example were broken (there is no output whatsoever for an hour).
This is blocking to have the nightly documentation builds.
As Nisl has no graphical identity for the moment, we kept the scikit-learn layout for the documentation. Changing it is planned but, to avoid confusion, I suggest that we at least change the logo so that users can distinguish Nisl from scikit-learn.
Cc @schwarty that I believe already coded the XML reader.
I suggest the following new API for the recursive caching patterns. The object would expose:
- memory_level (integer) and mem (joblib.Memory) attributes;
- a make_child method that returns an object of the same class with the same mem and a smaller memory_level (where the decrement would be an optional argument of the method);
- a cache method with the same signature as Memory.cache, but with an additional optional level parameter. That parameter is compared to the memory_level of the object to decide whether to cache or not;
- a check_memory function that can take a string, a joblib.Memory, or a NiLearnMemory object (with an optional memory_level) and always returns a NiLearnMemory object.
The combination of check_memory and mem.make_child should cover all our needs in NiLearn and enable us to remove the 'memory_level' parameters and 'cache' function calls in our code.
I am not sure that I like the idea of having different 'memory' arguments (as in the maskers). I'd prefer having a 'memory_level' and always use the same memory object.
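A minimal sketch of the proposed API, to anchor the discussion. Class and method names follow the proposal; the internals are our assumptions, and we use modern joblib's `location` argument where the era of this issue would have used `cachedir`:

```python
from joblib import Memory

class NiLearnMemory(object):
    """Sketch of the proposed caching object (details are assumptions)."""

    def __init__(self, mem, memory_level=1):
        self.mem = mem                  # an underlying joblib.Memory
        self.memory_level = memory_level

    def make_child(self, decrement=1):
        # Same class, same mem, smaller memory_level.
        return NiLearnMemory(self.mem, self.memory_level - decrement)

    def cache(self, func, level=1):
        # Cache only if the requested level is within our memory_level.
        if level <= self.memory_level:
            return self.mem.cache(func)
        return func

def check_memory(memory, memory_level=1):
    # Accept a NiLearnMemory, a joblib.Memory, a cache-directory string,
    # or None, and always return a NiLearnMemory.
    if isinstance(memory, NiLearnMemory):
        return memory
    if memory is None or isinstance(memory, str):
        memory = Memory(location=memory, verbose=0)
    return NiLearnMemory(memory, memory_level)

mem = check_memory(None, memory_level=2)
print(mem.make_child().memory_level)  # 1
```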
Rename
It is not used so far. We can always add it back in.
because they do not appear in setup.py. A fix could be something like:
diff --git a/setup.py b/setup.py
index 621c95c..17bccf4 100644
--- a/setup.py
+++ b/setup.py
@@ -30,6 +30,8 @@ def configuration(parent_package='', top_path=None):
config = Configuration(None, parent_package, top_path)
config.add_subpackage('nisl')
+ config.add_subpackage('nisl/io')
+ config.add_subpackage('nisl/decomposition')
return config
The main project has been renamed nilearn, but the main package is still named nisl.
An example (with a screwed up proxy setting):
plotting plot_visualization.py
Downloading data from http://www.pymvpa.org/files/pymvpa_exampledata.tar.bz2 ...
HTTP Error: HTTP Error 407: Proxy Authentication Required http://www.pymvpa.org/files/pymvpa_exampledata.tar.bz2
An error occured, abort fetching
extracting data from None...
archive corrupted, trying to download it again
Downloading data from http://www.pymvpa.org/files/pymvpa_exampledata.tar.bz2 ...
HTTP Error: HTTP Error 407: Proxy Authentication Required http://www.pymvpa.org/files/pymvpa_exampledata.tar.bz2
extracting data from None...
________________________________________________________________________________
plot_visualization.py is not compiling:
Traceback (most recent call last):
  File "/home/varoquau/dev/nisl/tutorial/doc/sphinxext/gen_rst.py", line 308, in generate_file_rst
    execfile(os.path.basename(src_file), my_globals)
  File "plot_visualization.py", line 10, in <module>
    haxby_files = datasets.fetch_haxby_simple()
  File "/home/varoquau/dev/nisl/tutorial/nisl/datasets.py", line 480, in fetch_haxby_simple
    resume=resume)
  File "/home/varoquau/dev/nisl/tutorial/nisl/datasets.py", line 376, in _fetch_dataset
    _uncompress_file(full_name)
  File "/home/varoquau/dev/nisl/tutorial/nisl/datasets.py", line 201, in _uncompress_file
    data_dir = os.path.dirname(file)
  File "/usr/lib/python2.7/posixpath.py", line 120, in dirname
    i = p.rfind('/') + 1
AttributeError: 'NoneType' object has no attribute 'rfind'
________________________________________________________________________________
The traceback is not the right one
As the Kamitani dataset's data format has changed, the Kamitani example cannot run for the moment. We should bring it back, either by parsing the new data format or by uploading data formatted for nisl somewhere.
Copy from scipy-lectures the strategy to expose PDF (with double page layout) and ZIP file download on the front page.
Some work may be required to get proper rendering of the PDF.
io is too generic, and when people import it in code, it loses its meaning.
When we use the Google search box, the search is done on the scikit-learn website instead of the Nisl one.
In plot_haxby_decoding.py, numpy.in1d is called. This function was added in numpy 1.4.0, but scikit-learn only requires numpy >= 1.3.0. Should we require numpy 1.4.0?
Relevant numpy documentation: http://docs.scipy.org/doc/numpy/reference/generated/numpy.in1d.html
Traceback (most recent call last):
File "plot_haxby_decoding.py", line 44, in <module>
condition_mask = np.in1d(conditions, ('face', 'house'))
AttributeError: 'module' object has no attribute 'in1d'
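If we keep the numpy 1.3 requirement, a small compatibility shim would sidestep the AttributeError above. This is a sketch, not committed code (the fallback via a set is our own):

```python
import numpy as np

def in1d_compat(values, test_values):
    # Use np.in1d when available (numpy >= 1.4); otherwise build the
    # boolean membership mask by hand.
    if hasattr(np, "in1d"):
        return np.in1d(values, test_values)
    test_set = set(test_values)
    return np.array([v in test_set for v in values])

conditions = np.array(["face", "house", "cat", "face"])
print(in1d_compat(conditions, ("face", "house")))
```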
We need to add a '' to the 'head' in the NISL website.
Formerly, if a MultiNiftiMasker was given to MultiPCA along with masker arguments (smoothing_fwhm...), the parameters of the MultiNiftiMasker were overridden by the ones given to MultiPCA.
After discussion with Philippe, we find it much more consistent to ignore masker parameters given to MultiPCA in this case (if the user gives us a masker, we assume that they know what they're doing).
This is a thread to discuss and validate this new behavior.
In any case, the code must be cleaned up: for the moment, the user is warned that memory and memory_level are ignored, which should not be the case.
(p26)fabian@:nisl-tutorial(master)$ make html
sphinx-build -b html -d ../webpage/doctrees . ../webpage/html
Error: Source directory doesn't contain conf.py file.
make: *** [html] Error 1
I dump a lot of niimages in my nilearn pipeline. Unfortunately, this takes a lot of disk space in NIfTI format because the data is not masked, and I don't want to gzip it because that is time-consuming. The best way to save it is in .npy form, along with the corresponding mask (ideally, a path to the mask). This way, check_niimg could unmask my images on the fly.
I suggest adding a tuple (npy filepath / numpy array, mask niimage) as a niimage for compression purposes.
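The mask/unmask round-trip behind this proposal can be sketched on plain arrays (helper names are ours; a real implementation would go through check_niimg):

```python
import numpy as np

def mask_data(volume, mask):
    # Keep only in-mask voxels: this 1D array is what would be saved as .npy.
    return volume[mask]

def unmask_data(masked, mask):
    # Rebuild the full volume on the fly, zero-filling out-of-mask voxels.
    volume = np.zeros(mask.shape, dtype=masked.dtype)
    volume[mask] = masked
    return volume

volume = np.arange(8.0).reshape(2, 2, 2)
mask = volume > 3
restored = unmask_data(mask_data(volume, mask), mask)
print(np.array_equal(restored[mask], volume[mask]))  # True
```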
It would be very useful for me if the function create_simulation_data could be moved from the example plot_simulated_data.py into nisl/datasets so that it can be imported from external projects.
I am willing to submit a patch if people are OK with this. Comments?
As our template is based on the scikit-learn one (we even kept the logo), a user asked a question about nilearn on the scikit-learn mailing list... I answered him in private and apologized, but this should not happen again.
I have quickly changed the nilearn logo and switched some colors in the CSS to break the likeness with the scikit-learn color scheme (as an emergency action). I think a brand new website is under construction. What should we do in the meantime?
In plot_haxby_decoding, some assertions are given about Haxby dataset shape in comments :
# fmri_data.shape is (40, 64, 64, 1452)
# and mask.shape is (40, 64, 64)
These assertions are now untrue because the Haxby dataset has been cropped.
In [16]: fmri_data.shape
Out[16]: (40, 49, 41, 1452)
We should propagate these changes and turn the comments into a doctest or a real assertion to avoid further problems.
I took a look at the test: the ICA estimate of the ICs is actually really poor. It fails to recover the components and instead gets linear combinations of them (meaning that the failure is related to the ICA part). The initial test is quite robust to this effect, but not enough, at least on my box.
Using
rng = np.random.RandomState(1)
in the test solves the issue on my box, but I don't find it satisfactory.
Possible solutions are:
In BaseMasker, transform_single_niimgs sets the 'affine_' attribute. This shouldn't happen.
The current heuristic for computing a mask works well only if the data is raw EPI. If it is an activation map, we want a different heuristic, such as 'threshold', where the mask is given by the values above mask_lower_cutoff.
I think we should introduce yet another option, 'mask_strategy', that could be 'epi' for the current behavior or 'threshold', with maybe other options later.
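A minimal sketch of the proposed switch (the parameter names follow the proposal; only the trivial 'threshold' branch is shown, the EPI heuristic is not reproduced):

```python
import numpy as np

def compute_mask(data, mask_strategy="epi", mask_lower_cutoff=0.2):
    # 'threshold': keep voxels whose value exceeds mask_lower_cutoff,
    # which is what we want for activation maps rather than raw EPI.
    if mask_strategy == "threshold":
        return data > mask_lower_cutoff
    raise NotImplementedError("only the 'threshold' strategy is sketched")

activation_map = np.array([0.0, 0.1, 0.5, 0.9])
print(compute_mask(activation_map, mask_strategy="threshold"))
```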