nipype's Introduction

NIPY

Neuroimaging tools for Python.

The aim of NIPY is to produce a platform-independent Python environment for the analysis of functional brain imaging data using an open development model.

In NIPY we aim to:

  1. Provide an open source, mixed language scientific programming environment suitable for rapid development.
  2. Create software components in this environment to make it easy to develop tools for MRI, EEG, PET and other modalities.
  3. Create and maintain a wide base of developers to contribute to this platform.
  4. Maintain and develop this framework as a single, easily installable bundle.

NIPY is the work of many people. We list the main authors in the file AUTHOR in the NIPY distribution; other contributions are listed in THANKS.

Website

Current information can always be found at the NIPY project website:

https://nipy.org/nipy

Mailing Lists

For questions on how to use nipy or on making code contributions, please see the neuroimaging mailing list:

https://mail.python.org/mailman/listinfo/neuroimaging

Please report bugs via GitHub issues:

https://github.com/nipy/nipy/issues

You can see the list of currently proposed changes at:

https://github.com/nipy/nipy/pulls

Code

You can find our sources and single-click downloads at the nipy repository on GitHub:

https://github.com/nipy/nipy

Tests

To run nipy's tests, you will need to install the pytest Python testing package:

pip install pytest

Then:

pytest nipy

You can run the doctests along with the other tests with:

pip install pytest-doctestplus

Then:

pytest --doctest-plus nipy

Installation

See the latest installation instructions.

License

We use the 3-clause BSD license; the full license is in the file LICENSE in the nipy distribution.

nipype's People

Contributors

alexsavio, blakedewey, bpinsard, byvernault, carlohamalainen, chrisgorgo, cindeem, davclark, djarecka, effigies, ellisdg, hjmjohnson, josephmje, kesshijordan, mgxd, mick-d, miykael, mnoergaard, mvdoc, mwaskom, oesteban, pintohutch, pndni-builder, rciric, salma1601, satra, swederik, thechymera, tsalo, yarikoptic

nipype's Issues

fs.SegStats example is incorrect

As written, the docstring example for the SegStats interface class in freesurfer.model won't work. To use the avgwf_txt_file input, you need an in_file (which should be a functional or similar image in the same space as your seg or annot file).
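
A minimal sketch of a call that should work, per the report above (filenames are hypothetical, and the input names assume the current SegStats spec):

    from nipype.interfaces.freesurfer import SegStats

    segstats = SegStats()
    segstats.inputs.segmentation_file = 'aseg.mgz'
    # avgwf_txt_file requires an in_file in the same space as the segmentation
    segstats.inputs.in_file = 'func_in_anat_space.nii'
    segstats.inputs.avgwf_txt_file = 'avgwf.txt'
    segstats.run()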

implement reporting system

After pipeline execution, or even while it is still running, the user would like to be able to take a quick glance at the outputs of the intermediate stages. An online reporting system could provide this information. A simple solution would be to generate an auto-updating HTML page (similar to FSL's approach).
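
A minimal sketch of the auto-updating HTML idea, assuming a dict mapping node names to output summaries is available (all names here are hypothetical):

    def write_report(results, path='report.html'):
        # refresh every 30 seconds so the page tracks a running workflow
        rows = ''.join('<tr><td>%s</td><td>%s</td></tr>' % (name, outputs)
                       for name, outputs in results.items())
        with open(path, 'w') as f:
            f.write('<html><head><meta http-equiv="refresh" content="30">'
                    '</head><body><table>%s</table></body></html>' % rows)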

Please let interface calls fail, when the underlying command fails

If I call e.g. BET with an improper option, the actual call to BET fails, but the nipype line in Python returns without an error. I find this confusing, since it seems to indicate that everything is ok -- which it isn't.
If I call a Python function improperly, it fails. If I try to create a file in Python in a place where that is not allowed/possible, it fails. But if I make some tool (via a nipype interface) crash or fail (maybe because it cannot create a file for the same reason as before), it doesn't fail -- and I cannot see how this last use case differs from the former ones.

Thanks for your consideration,

Michael
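
Until interfaces raise on failure, a hedged workaround sketch: inspect the runtime record returned by run() and raise manually (this assumes the runtime object carries returncode and stderr; the frac value is deliberately invalid for illustration):

    from nipype.interfaces import fsl

    bet = fsl.BET(in_file='structural.nii', frac=5.0)  # invalid frac, BET should fail
    result = bet.run()
    if result.runtime.returncode != 0:
        raise RuntimeError('BET failed:\n%s' % result.runtime.stderr)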

datagrabber should always run

Since the underlying source files in the filesystem can change independently of the workflow, DataGrabber should always run, even if its inputs haven't changed.
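
In the meantime, a per-node workaround sketch, assuming the Node overwrite attribute behaves as documented (it forces re-execution regardless of input hashing):

    import nipype.pipeline.engine as pe
    import nipype.interfaces.io as nio

    grabber = pe.Node(nio.DataGrabber(outfields=['func']), name='datagrabber')
    grabber.overwrite = True  # rerun even when inputs are unchanged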

fsl_tutorial2 coregister node is wrong

In fsl_tutorial2, the inputs to the coregister node are reversed: the functional is used as the reference file, and the structural is used as the in_file. I'm pretty sure that's wrong.
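
The corrected wiring would look roughly like this (a sketch; the node and field names follow the tutorial's spirit, not its exact code):

    # structural as the target space, functional as the moving image
    workflow.connect(skullstrip, 'out_file', coregister, 'reference')
    workflow.connect(motion_correct, 'mean_img', coregister, 'in_file')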

traits, handle mandatory x_or ??

My interface has two mutually exclusive but mandatory inputs; how can traits handle this?

Do we have mandatory_xor logic for our traits?
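
Later nipype trait specs address this with xor metadata combined with mandatory; a sketch, assuming the current spec conventions:

    from nipype.interfaces.base import BaseInterfaceInputSpec, File, traits

    class MyInputSpec(BaseInterfaceInputSpec):
        # exactly one of in_file / in_value must be set
        in_file = File(exists=True, mandatory=True, xor=['in_value'])
        in_value = traits.Float(mandatory=True, xor=['in_file'])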

Add Interface for Freesurfer's mri_surfcluster

There really should be an interface for the mri_surfcluster program, which is used to do multiple comparisons correction on the output of mri_glmfit (if using surfaces; mri_volcluster is the volume equivalent) as well as generating cluster summary tables. Of course, this will require mri_glmfit to have outputs.
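
Until a dedicated interface lands, a stopgap sketch using nipype's generic CommandLine wrapper (the flags are illustrative, not a validated input spec):

    from nipype.interfaces.base import CommandLine

    surfcluster = CommandLine(
        command='mri_surfcluster',
        args='--in lh.glmdir/osgm/sig.mgh --subject fsaverage --hemi lh '
             '--thmin 2 --sum cluster.summary')
    surfcluster.run()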

doesn't fail if external is not actually available -- simply stalls

So, trying to run example1.py, I didn't set up my MATLABPATH to point to SPM, so it is not available... running the example simply stalls after puking a little error which is barely detectable among the other output lines:

$> python example1.py
PE: checking connections:

PE: finished checking connections

No clients found. running serially
Executing: Realign.spm H: c49a2f44ee83424c23f90790042e55b0
directory /home/yoh/proj/nipy/nipype.gitsvn/doc/examples/test3/Realign.spm exists

continuing to execute

copying /home/yoh/proj/nipy/nipype.gitsvn/doc/examples/data/s1/f3.nii to /home/yoh/proj/nipy/nipype.gitsvn/doc/examples/test3/Realign.spm/f3.nii
option cwd not supported

Doing a quick look:
$> strace -p 27588
Process 27588 attached - interrupt to quit
select(6, [3 5], [], [], NULL^C <unfinished ...>
Process 27588 detached

$> ls -l /proc/27588/fd/6
lr-x------ 1 yoh yoh 64 2009-10-07 09:43 /proc/27588/fd/6 -> pipe:[2800813]

So it waits for input which would never come? ;)

Here is a traceback I got after hitting Ctrl-C:

Traceback (most recent call last):
  File "example1.py", line 37, in <module>
    pipeline.run()
  File "/home/yoh/proj/nipy/nipype.gitsvn/install/lib/python2.5/site-packages/nipype/pipeline/engine.py", line 221, in run
    self.run_in_series()
  File "/home/yoh/proj/nipy/nipype.gitsvn/install/lib/python2.5/site-packages/nipype/pipeline/engine.py", line 252, in run_in_series
    node.run()
  File "/home/yoh/proj/nipy/nipype.gitsvn/install/lib/python2.5/site-packages/nipype/pipeline/node_wrapper.py", line 135, in run
    self._run_interface(execute=True)
  File "/home/yoh/proj/nipy/nipype.gitsvn/install/lib/python2.5/site-packages/nipype/pipeline/node_wrapper.py", line 214, in _run_interface
    self._result = self._interface.run()
  File "/home/yoh/proj/nipy/nipype.gitsvn/install/lib/python2.5/site-packages/nipype/interfaces/spm.py", line 125, in run
    results = super(SpmMatlabCommandLine, self).run()
  File "/home/yoh/proj/nipy/nipype.gitsvn/install/lib/python2.5/site-packages/nipype/interfaces/matlab.py", line 66, in run
    results = self._runner()
  File "/home/yoh/proj/nipy/nipype.gitsvn/install/lib/python2.5/site-packages/nipype/interfaces/base.py", line 273, in _runner
    runtime.stdout, runtime.stderr = proc.communicate()
  File "/usr/lib/python2.5/subprocess.py", line 670, in communicate
    return self._communicate(input)
  File "/usr/lib/python2.5/subprocess.py", line 1213, in _communicate
    rlist, wlist, xlist = select.select(read_set, write_set, [])

http://sourceforge.net/apps/trac/nipy/ticket/38
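
One hedged way to avoid the hang described above: close the child's stdin so a tool that prompts for input reads EOF instead of blocking, and raise if the command failed (Python 2.5's communicate() has no timeout, so this only addresses the prompting case):

    import subprocess

    proc = subprocess.Popen(['matlab', '-nodesktop', '-nosplash'],
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = proc.communicate(input='')  # send EOF immediately
    if proc.returncode != 0:
        raise RuntimeError(stderr)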

Add support for non-file outputs caching

When a non-file output (e.g. an integer, string, or array) is passed from one node to another, it is not saved anywhere. Therefore, upon rerunning a workflow, things break.
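
A minimal sketch of the fix, assuming the outputs are picklable (the helper names and file layout are hypothetical):

    import os
    import pickle

    def save_outputs(outputs, node_dir):
        # persist non-file outputs alongside the node's working directory
        with open(os.path.join(node_dir, '_outputs.pkl'), 'wb') as f:
            pickle.dump(outputs, f)

    def load_outputs(node_dir):
        with open(os.path.join(node_dir, '_outputs.pkl'), 'rb') as f:
            return pickle.load(f)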

pickling prevents changing working directory

Prior to pickling outputs, one could change the working directory without having to rerun already-processed nodes. Because the pickled file stores an absolute path, doing so will now return erroneous output.

One solution would be to move towards relative paths within the workflow.
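
A sketch of that direction: store paths relative to the workflow base directory at pickle time and resolve them on load (the helper names are hypothetical):

    import os

    def store_path(path, base_dir):
        # pickle this instead of the absolute path
        return os.path.relpath(path, base_dir)

    def resolve_path(stored, base_dir):
        return os.path.abspath(os.path.join(base_dir, stored))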

SUSAN always reruns

SUSAN (FSL's smoothing interface) seems to always rerun, even when none of its inputs have changed. This cascades and forces rerunning of the model.

Optimize data transfer between pure python nodes

Currently, when two pure-Python nodes (like nipy GLM fit and contrast estimate) are connected (A -> B), node A saves the data to a file and node B reads it back. The reading phase could be skipped by sending the data directly to node B. This, however, creates some problems:

  1. It will not work with "timestamp" hashing.
  2. Big chunks of data will have to be kept in memory until the receiving nodes are executed.

Improve memory consumption of nipype for big pipelines

nipype (in both parallel and serial modes) can be a real memory hog for big jobs (thousands of nodes). It tends to accumulate more and more memory during execution. I believe that storing too much debugging data (such as keeping all of the node objects) might be the problem. Ideally, after successful execution, everything that will not be used again should be removed from memory.

PDF documentation fails to build

Running make -C doc pdf results in:

...
pdfTeX warning: pdflatex (file ./inheritance-a6ea610817660597b4a97d1eef82a61a61
49018b.pdf): PDF inclusion: Page Group detected which pdfTeX can't handle. Igno
ring it.

] [71] [72] [73]
Underfull \hbox (badness 10000) in paragraph at lines 5250--5252
[]\T1/ptm/m/n/10 key, value pairs that will up-date the Es-ti-mate-Model.inputs
at-tributes see
[74] [75]

! LaTeX Error: Too deeply nested.

See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
...

l.5520 \begin{description}

Attaching that .tex file (if I targeted it correctly).

I have python-sphinx 0.6.3-1, nipype rev 647.

http://sourceforge.net/apps/trac/nipy/ticket/56

Please allow cleaning the workflow working directory

Some nodes create output files that are not used as input files by any other node of a workflow. To be able to efficiently recompute workflows when parts of the data change, it is necessary to keep the workflow working directory. But the directory content might be a lot heavier than what is needed to avoid unnecessary recomputation. Please add an option/command that strips from the working directory all pieces that are irrelevant for recomputing parts of (or the full) workflow.

Thanks for your consideration,

Michael

os.path.relpath appeared in 2.6

os.path.relpath appeared in Python 2.6 and is used, e.g., in nipype/pipeline/utils.py. Pipelines therefore fail on my 2.5 install.
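
A hedged sketch of a 2.5-compatible fallback that nipype could ship until 2.6 becomes the minimum (this mirrors the stdlib semantics for simple cases only):

    import os

    def relpath(path, start=os.curdir):
        """Return path relative to start (os.path.relpath exists only in 2.6+)."""
        if hasattr(os.path, 'relpath'):
            return os.path.relpath(path, start)
        start_list = os.path.abspath(start).split(os.sep)
        path_list = os.path.abspath(path).split(os.sep)
        # number of common leading path components
        common = len(os.path.commonprefix([start_list, path_list]))
        rel_list = [os.pardir] * (len(start_list) - common) + path_list[common:]
        if not rel_list:
            return os.curdir
        return os.path.join(*rel_list)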

Default suffix for fs.ApplyVolTransform is confusing

This is pretty small-bore, but I just spent a while getting confused by it. ApplyVolTransform writes a *_warped.nii image, but the underlying program only deals with linear/affine transforms, not warps, so the suffix is pretty misleading.

Add .mincost file to freesurfer.BBRegister outputs

Freesurfer's bbregister program generates a file called register.dat.mincost (where register.dat is whatever name is used for the transform matrix that gets created) that includes an estimate of registration performance, intended to guide the user to registrations that might need fixing. It would be good to have this included in the BBRegister class outputs, as the parsing of that file can then be easily automated to save time and encourage good quality control.
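
Once exposed as an output, parsing could be as simple as the sketch below (assuming the first number in the file is the final cost, which is my reading of bbregister's output; the helper name is hypothetical):

    def read_mincost(path):
        # register.dat.mincost: first field is the registration cost
        with open(path) as f:
            return float(f.read().split()[0])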

unify naming of in*file* arguments in AFNI interfaces to the rest of the suite

While the other interfaces use in_file, AFNI still uses infile.

Since fixing this would break the API, I guess it would be logical to aim for 0.4.0, so I have added a new label (hope that is ok).

$> git grep '\Winfile.*File'
doc/devel/interface_specs.rst: infile = File(exists=True,
nipype/interfaces/afni/preprocess.py: infile = File(desc = 'input file to 3drefit',
nipype/interfaces/afni/preprocess.py: infile = File(desc = 'input file to 3dresample',
nipype/interfaces/afni/preprocess.py: infile = File(desc = 'input file to 3dTstat',
nipype/interfaces/afni/preprocess.py: infile = File(desc = 'input file to 3dAutomask',
nipype/interfaces/afni/preprocess.py: infile = File(desc = 'input file to 3dvolreg',
nipype/interfaces/afni/preprocess.py: infile = File(desc = 'input file to 3dvolreg',
nipype/interfaces/afni/preprocess.py: infile = File(desc = 'input file to 3dZcutup',
nipype/interfaces/afni/preprocess.py: infile = File(desc = 'input file to 3dAllineate',

Query packages for default values

If possible, can we query the underlying packages (SPM, FSL, Freesurfer...) for default values on our inputs? This may not be possible.

FAST outputs refer to non-existing files.

When I want to access the bias_field and restored_image outputs of a FAST node, I get nothing.
It looks like the interface searches for restore%i and bias%i files. However, there are no numbered outputs; instead, FAST produces _restore and _bias files. I did not test this with multi-channel input (e.g. T1 and T2), but for a single T1 input this is the case.

DataSink docstring incorrect

I'm pretty sure, anyway. The DataSink docstring says to use e.g. contrasts@con to put your contrast images in a contrasts directory. But when I use that, I get a contrasts@con directory with my contrast images in it. Doing contrasts.@con, on the other hand, gives me the behavior I expect, and is also what's done in the tutorials. So either I'm totally confused, or that docstring is wrong.
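
For reference, the form that works as described above is the dotted one (a sketch; the node and field names are hypothetical):

    # files land in <base_directory>/contrasts/ rather than in a literal
    # 'contrasts@con' folder
    workflow.connect(estimate_contrast, 'con_images', datasink, 'contrasts.@con')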

DataSink crashes when it gets a list

If you try to connect the output of a MapNode to a DataSink, it crashes when it tries to do os.path manipulation on the input and gets a list. DataSink itself can't be a MapNode (I don't think so, because it assigns the iterfield before the dynamic input traits are created), so it's impossible to save the output of certain (many) nodes.

FNIRT output cannot be set directly

I'm using interfaces, but not in the context of a workflow, and I'd like to set the fieldcoeff_file trait to the name of the file I would like written. After fnirt runs, though, it raises a FileNotFound exception looking for the default filename (i.e., if I'm passing in T1.nii.gz, it looks for T1_fieldwarp.nii.gz). The fieldwarp file has in fact been written, but with the name I asked for, not the default name. I think this is a bug, or it could just be a result of the traits transition I haven't seen before.

Documentation refers to non-existing workflow.export_graph()

It looks like this could be a simple renaming bugfix (export_graph -> write_graph). However, export_graph from pipeline.utils has, e.g., a show= argument that is not available in write_graph, so it is not a drop-in replacement. If somebody could clarify this issue, I can provide a tentative fix.

Thanks
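
For what it's worth, the method that does exist on workflows is write_graph; a minimal sketch (the dot filename is hypothetical):

    workflow.write_graph(dotfilename='graph.dot')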

state information for command

  • environment variables
  • spm/fsl/matlab version
  • command version if different from parent package version
  • cpu usage
  • memory usage
  • time taken to run
  • disk space used
  • IP address(es) of machine(s) it was run on
  • user id
  • process id
  • parent pid
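
A sketch gathering the stdlib-accessible subset of the items above (package versions, CPU/memory/disk accounting, and IP addresses would need tool-specific queries, omitted here; the function name is hypothetical):

    import getpass
    import os
    import socket
    import time

    def runtime_state():
        return {
            'environ': dict(os.environ),
            'hostname': socket.gethostname(),
            'user': getpass.getuser(),
            'pid': os.getpid(),
            'parent_pid': os.getppid(),
            'started': time.time(),
        }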
