amico's Issues

SPAMS Multi-Threading Issue stopping AMICO analysis (cluster)

Hi,

I have asked about this issue before (#56 (comment)), but thought I would open a new thread since it is stopping all AMICO analysis on our new cluster. There used to be a fix to limit the number of cores (as reported in #56), but it no longer works. I need to limit the number of cores SPAMS uses, because I have limited capacity available (our cluster kills all AMICO jobs because they use too many cores).

Is there a workaround for this at the moment?

[I use AMICO python, my spams version is python-spams from conda-forge channel]

Many thanks,

Maria.

Problem importing AMICO - No module named 'core'

Hi,

I got this error after trying to import AMICO:

dilara@dilara-500-250et:~$ python
Python 3.6.3 |Anaconda custom (64-bit)| (default, Oct 13 2017, 12:02:49) 
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import spams
>>> import amico
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/dilara/anaconda3/lib/python3.6/site-packages/amico/__init__.py", line 1, in <module>
    from core import Evaluation
ModuleNotFoundError: No module named 'core'

I do not get any errors during installation.

dilara@dilara-500-250et:~/AMICO-master$ pip install .
Processing /home/dilara/AMICO-master
Requirement already satisfied: dipy in /home/dilara/anaconda3/lib/python3.6/site-packages (from amico==1.0) (0.14.0)
Requirement already satisfied: nibabel>=2.1.0 in /home/dilara/anaconda3/lib/python3.6/site-packages (from dipy->amico==1.0) (2.3.0)
Requirement already satisfied: h5py>=2.4.0 in /home/dilara/anaconda3/lib/python3.6/site-packages (from dipy->amico==1.0) (2.7.0)
Requirement already satisfied: numpy>=1.7 in /home/dilara/anaconda3/lib/python3.6/site-packages (from h5py>=2.4.0->dipy->amico==1.0) (1.13.3)
Requirement already satisfied: six in /home/dilara/anaconda3/lib/python3.6/site-packages (from h5py>=2.4.0->dipy->amico==1.0) (1.11.0)
Building wheels for collected packages: amico
  Running setup.py bdist_wheel for amico ... done
  Stored in directory: /home/dilara/.cache/pip/wheels/d1/94/31/61888e4e578b43385618bb376053b2adb983ba5816227b51e9
Successfully built amico
Installing collected packages: amico
Successfully installed amico-1.0

SPAMS and the Camino toolkit seem to be working fine as far as I can tell. I am using Ubuntu 16.04 and Anaconda3-5.0.1. Can you tell what is wrong with my installation?

Best,
Dilara
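For context, the traceback above is the classic Python 3 implicit relative import failure: `from core import Evaluation` resolves under Python 2 but not Python 3, where the import inside a package must be explicit (`from .core import Evaluation`). A minimal sketch reproducing the working explicit form (the `demo_pkg` package name is made up; it only mirrors amico's `__init__.py`/`core.py` layout):

```python
import os
import sys
import tempfile

# Build a throwaway package that mirrors amico's layout:
#   demo_pkg/core.py      defines Evaluation
#   demo_pkg/__init__.py  does  "from .core import Evaluation"
# Under Python 3 this explicit relative form works; the implicit
# "from core import Evaluation" would raise ModuleNotFoundError.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "demo_pkg")
os.mkdir(pkg)
with open(os.path.join(pkg, "core.py"), "w") as f:
    f.write("class Evaluation:\n    pass\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .core import Evaluation\n")  # explicit relative import

sys.path.insert(0, root)
import demo_pkg

print(demo_pkg.Evaluation.__name__)  # Evaluation
```

If amico's `__init__.py` on your install still uses the implicit form, upgrading to a version with the explicit relative import (or editing the file accordingly) is the usual remedy.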

Small variation in b-values interpreted as unique shells

Pretty much the same issue as the one referred to here (#10), but in the Python version of AMICO.
I recently started using the Python version (currently trying to write a nipype wrapper for it, to incorporate AMICO into a NODDI analysis pipeline), and I came across the problem of minimal variations in the b-values being considered different shells.
I don't have much experience in Python (much less than in Matlab), so I don't really know how to fix the issue in a similar manner (i.e., make fsl2scheme round the values). Any tips?
Thanks!
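The Python fsl2scheme does accept a bStep argument for exactly this (it appears elsewhere on this page as bStep = 1000). The underlying operation is just snapping b-values to a common step before building the scheme; a minimal NumPy sketch with made-up jittered values:

```python
import numpy as np

# Jittered b-values (made-up numbers) around nominal shells 0, 1500, 3000
bvals = np.array([5., 1495., 3010., 1500., 2995.])

b_step = 100.0                      # snap to the nearest multiple of 100
rounded = np.round(bvals / b_step) * b_step

shells = np.unique(rounded)         # collapses to the three nominal shells
```

With a step of 100, the b=5 volumes collapse onto the b0 shell and the jittered measurements land on their nominal shells.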

Spams installation

Hi,

Sorry to write here because I know this is not supposed to be a help line. I have fought with this for a while and can't use AMICO if spams isn't installing correctly.

I have tried on both macOS and a fresh Ubuntu 16.04 install and run into gcc errors on both OSs. I'll focus on Ubuntu because it is the easiest to re-create.

I am working in Python 2.7 and have installed SPAMS both using Anaconda and python-pip. I have also installed numpy using pip for the python-pip install. I have tried to address dependencies by installing build-essential, libatlas-dev, and libatlas3-base. Following the listed installation instructions, I have ensured that libblas.so (actually libblas.so and libblas.so.3) and liblapack.so (actually liblapack.so.3 and liblapack_atlas.so.3 [I added 'lapack_atlas' to the libs in setup.py]) are in /usr/lib. I have also run apt-get update to ensure I am on a stable build.

For both prefix and build installation I run into some interesting errors.

Main exit error: 'x86_64-linux-gnu-gcc' failed with exit status 1

Additional warnings and errors thrown:

  1. cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++

  2. spams/linalg/linalg.h:1680:10: error: expected '(' before '__builtin_isnan'

  3. #warning: using deprecated NumPy API

  4. warning: comparison between signed and unsigned integer expressions

Do you have any idea what could be going wrong?

Thank you ,

Gary

NODDI returns NaN values

Hello Daducci,

We ran AMICO-NODDI on our data on two different systems - one locally and one on a supercomputer. However, the results on the supercomputer return NaN values for the same data. Is there a way to troubleshoot this? We also tried comparing a few matrices and did not find any differences per se. Any help would be appreciated!

Thanks!

AMICOX

Hello,
I would like to use AMICO in crossings (AMICOX). In my particular case, it would be useful to have the maps FIT_a, FIT_v for 2 directions. Hence, I would like to have FIT_a_dir1, FIT_v_dir1, FIT_a_dir2, FIT_v_dir2. The question generalizes to three peaks.

Cheers,

Muhamed

Bug for ae.fit()

Hello, thanks for your work. I found a problem when fitting the NODDI model with ae.fit().
The error is:

Traceback (most recent call last):
  File "amico_noddi_tutorial.py", line 27, in <module>
    ae.fit()
  File "/aramis/home/wen/Applications/Anaconda2/lib/python2.7/site-packages/amico/core.py", line 327, in fit
    MAPs[ix,iy,iz,:], DIRs[ix,iy,iz,:], x, A = self.model.fit( y, dirs.reshape(-1,3), self.KERNELS, self.get_config('solver_params') )
  File "/aramis/home/wen/Applications/Anaconda2/lib/python2.7/site-packages/amico/models.py", line 581, in fit
    A[:,:nWM] = KERNELS['wm'][:,i1,i2,:].T
IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices

So I dug into the source code, and I found the problem is in the fit function in amico/models.py:

i1, i2 = amico.lut.dir_TO_lut_idx( dirs[0] )

Going inside the dir_TO_lut_idx function:

i1 = np.round( i1/np.pi*180.0 )
i2 = np.round( i2/np.pi*180.0 )

I just changed it to

i1 = np.round( i1/np.pi*180.0 ).astype(int)
i2 = np.round( i2/np.pi*180.0 ).astype(int)

and all goes well...

I don't know if this is the right fix; could you have a look at it? I saw another issue reporting the same problem.

If so, I would like to make a pull request. :)

Thanks in advance

Hao
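For context, the reported IndexError is NumPy (1.12 and later) rejecting float indices; a minimal reproduction (array sizes made up) of why the .astype(int) cast fixes it:

```python
import numpy as np

K = np.zeros((4, 181, 181, 10))     # stand-in for KERNELS['wm']

i1 = np.round(57.3)                 # np.round returns a float (57.0)
try:
    _ = K[:, i1, 0, :]              # float index raises IndexError on NumPy >= 1.12
    raised = False
except IndexError:
    raised = True

i1 = int(np.round(57.3))            # the cast proposed in the fix above
sub = K[:, i1, 0, :]                # works: shape (4, 10)
```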

change output directory (python)

Hello there,
First, thanks very much for coding this in python! I am currently testing it with the HCP dataset (1.25mm, 3 shells, 90 dirs in each).

To fit with more generic input output toolchains, would it be possible to add a parameter to ae.save_results to allow the user to explicitly set an output directory to override the RESULTS_path?

e.g., ae.save_results(RESULTS_path="/home/bob/HCP/processing/dwi/NODDI/P001/")

If you like, I would be happy to add and test this.

Cheers,
Chris

volume fractions in the CylinderZeppelinBall model

In the CylinderZeppelinBall model the volume fraction of the Ball is not considered in the normalization of the maps.

 # return estimates
 f1 = x[ :(nD*n1) ].sum()
 f2 = x[ (nD*n1):(nD*(n1+n2)) ].sum()
 v = f1 / ( f1 + f2 + 1e-16 )

Is there a reason for this? It would be useful to have maps consistent with COMMIT.
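A hedged sketch of what a ball-inclusive normalization would look like next to the current one (the atom counts and weights are made up, and f3 for the ball fraction is an assumed name):

```python
import numpy as np

# Hypothetical coefficient vector: nD*n1 cylinder atoms,
# nD*n2 zeppelin atoms, then n3 ball atoms at the end.
nD, n1, n2, n3 = 2, 4, 3, 1
rng = np.random.default_rng(0)
x = rng.random(nD * (n1 + n2) + n3)

f1 = x[:nD * n1].sum()                       # cylinder (restricted) fraction
f2 = x[nD * n1:nD * (n1 + n2)].sum()         # zeppelin (hindered) fraction
f3 = x[nD * (n1 + n2):].sum()                # ball (isotropic) fraction

v_current   = f1 / (f1 + f2 + 1e-16)         # what the code does now
v_with_ball = f1 / (f1 + f2 + f3 + 1e-16)    # normalization including the ball
```

Including f3 in the denominator can only lower the map values, which is why the two conventions produce visibly different (and COMMIT-inconsistent) maps.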

Error : Problems generating the signal with datasynth

Hello,

First, thank you for your excellent software!

I am trying to run ActiveAx on some ex-vivo monkey data and I got the following error with AMICO_GenerateKernels :
* Creating high-resolution scheme:
[ DONE ]
* Simulating kernels with "ActiveAx" model:
- A_001... Error using AMICO_ACTIVEAX/GenerateKernels (line 68)
[AMICO_ACTIVEAX.GenerateKernels] Problems generating the signal with datasynth
Error in AMICO_GenerateKernels (line 72)
CONFIG.model.GenerateKernels( ATOMS_path, schemeHR, AUX, idx_IN,
idx_OUT );

I ran the NODDI model on these data without a problem. I also ran the ActiveAx tutorial without the above issue. Do you have any idea what the problem might be?

Best,
Sophie

Question: double rescaling ?

Hi,

I was looking at the code and was confused about one thing:
A rescaling is done at line 100 in core.py:

if ( np.isfinite(hdr['scl_slope']) and np.isfinite(hdr['scl_inter']) and hdr['scl_slope'] != 0 and
( hdr['scl_slope'] != 1 or hdr['scl_inter'] != 0 ) ):
print '\t\t- rescaling data',
self.niiDWI_img = self.niiDWI_img * hdr['scl_slope'] + hdr['scl_inter']
print "[OK]"

I thought nibabel was already rescaling the data automatically?

They say here (http://nipy.org/nibabel/nifti_images.html) that "By default, nibabel will take care of this scaling for you, but there may be times that you want to control the data scaling yourself", but also that the scaling values are set back to NaN after the read: "When you load an image, the header scaling values automatically get set to NaN (undefined) to mark the fact that the scaling values have been consumed by the read. The scaling values read from the header on load only appear in the array proxy object." If that's the case, doesn't it mean the rescaling is applied twice?

Thanks for your help :)
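A toy numeric illustration (values made up) of the concern: if nibabel has already consumed scl_slope/scl_inter when producing the array, applying them again changes the data.

```python
import numpy as np

raw = np.array([100.0, 200.0])    # hypothetical on-disk voxel values
slope, inter = 0.5, 10.0          # hypothetical scl_slope / scl_inter

scaled_once = raw * slope + inter            # a single, correct rescale
scaled_twice = scaled_once * slope + inter   # a second rescale corrupts values
```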

ICVF maxing at .990000000

Hello Dr. Daducci,

I'm attempting to use this toolbox to fit the NODDI model to a few different datasets. The speed of the processing as compared to the original matlab toolbox is incredible! I'm having a few issues with the outputs.

In every dataset I've used (including HCP), I'm getting max values for ICVF of 0.99000000537 (see attached image).

[attached screenshot: 2018-11-06 16-05-22]

I'm using default settings for the NODDI model. I looked through the open and closed issues for anyone else who may have had this issue, and the only potential solution I saw was making sure the bvalues are in the proper scale (i.e. s/mm^2), which my bvalues are.

Any help with this issue would be greatly appreciated!

Thank you,
Brad Caron

Python3 compatibility

Hi,
I am trying to run AMICO with Python 3 since we want to move all our software to this version soon.
Two problems that I found:
"amico/preproc.py", line 37, debiasRician: print without parentheses
and

"amico/lut.py", line 73, in load_precomputed_rotation_matrices
    return pickle.load( open(filename,'rb') )

UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 1: ordinal not in range(128)

I have a working but ugly workaround for the second one:

if sys.version_info[0] == 3:
  aux = pickle.load( open(filename,'rb'), fix_imports=True, encoding='bytes' )
  result_aux = {k.decode(): v for k, v in aux.items()}
  return result_aux
else:
  return pickle.load( open(filename,'rb') )

I have to use sys.version_info because the python2 version of pickle does not know the argument fix_imports.

It would be nice if you could make the changes; if not, I can make a pull request.
Cheers

Error in ae.fit

Hello,
I have started using the Python version of AMICO after having only used the Matlab version before. Everything runs smoothly up to ae.fit, where I get the error below:

The bvec, bval and scheme files are the same that worked fine in the Matlab version (I attach an example). Also, this is the fsl2scheme line:

amico.util.fsl2scheme("bval.bval", "bvec.bvec", schemeFilename = "NODDI.scheme", bStep = 1000, delimiter = None)

Bvec_example.zip

Python version: 2.7
AMICO version: amico-1.0-py2.7

Would anyone know how to fix it?

Thanks,

Maria.

In [24]: ae.fit()

-> Fitting "NODDI" model to 101774 voxels:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-24-fad37d61652c> in <module>()
----> 1 ae.fit()

/apps/software/anaconda/4.3.1/lib/python2.7/site-packages/amico/core.py in fit(self)
    282                 gtab = gradient_table( np.hstack((0,self.scheme.b[self.scheme.dwi_idx])), np.vstack((np.zeros((1,3)),self.scheme.raw[self.scheme.dwi_idx,:3])) )
    283             else:
--> 284                 gtab = gradient_table( self.scheme.b, self.scheme.raw[:,:3] )
    285             DTI = dti.TensorModel( gtab )
    286         else :

/apps/software/anaconda/4.3.1/lib/python2.7/site-packages/dipy/core/gradients.py in gradient_table(bvals, bvecs, big_delta, small_delta, b0_threshold, atol)
    250                                            small_delta=small_delta,
    251                                            b0_threshold=b0_threshold,
--> 252                                            atol=atol)
    253 
    254 

/apps/software/anaconda/4.3.1/lib/python2.7/site-packages/dipy/core/gradients.py in gradient_table_from_bvals_bvecs(bvals, bvecs, b0_threshold, atol, **kwargs)
    138     bvecs_close_to_1 = abs(vector_norm(bvecs) - 1) <= atol
    139     if bvecs.shape[1] != 3 or not np.all(bvecs_close_to_1[dwi_mask]):
--> 140         raise ValueError("bvecs should be (N, 3), a set of N unit vectors")
    141 
    142     bvecs = np.where(bvecs_close_to_1[:, None], bvecs, 0)

ValueError: bvecs should be (N, 3), a set of N unit vectors
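The check that raises here usually trips on bvecs stored as 3xN (the FSL convention) or on rows that are slightly off unit length; a defensive preprocessing sketch (sanitize_bvecs is a hypothetical helper, not part of AMICO or dipy):

```python
import numpy as np

def sanitize_bvecs(bvecs, atol=1e-2):
    """Transpose 3xN input to Nx3 and renormalize nonzero rows to unit length."""
    bvecs = np.array(bvecs, dtype=float)
    if bvecs.ndim == 2 and bvecs.shape[0] == 3 and bvecs.shape[1] != 3:
        bvecs = bvecs.T                    # FSL convention stores bvecs as 3xN
    norms = np.linalg.norm(bvecs, axis=1)
    nonzero = norms > atol                 # leave b0 rows as all-zero
    bvecs[nonzero] = bvecs[nonzero] / norms[nonzero, None]
    return bvecs

# One b0 row plus two slightly non-unit directions
fixed = sanitize_bvecs([[0.0, 0.0, 0.0], [0.99, 0.0, 0.0], [0.0, 1.01, 0.0]])
```

Running something like this on the bvecs before fsl2scheme should satisfy dipy's "(N, 3) set of N unit vectors" requirement.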

Scheme file loop through unique b-values

In AMICO, if we use the Stejskal-Tanner scheme and have different TEs for a given b-value, it is not possible to generate the kernels, because the 'idx' for each shell is defined by unique b-values.

core.py line 144 problem with the print

Just cloned AMICO and couldn't make it work because of a print issue at core.py line 144. Changed the line to print('\t* Debiasing signal...\n'),
and everything worked afterwards.

MacOSX error with numpy.newaxis

Hey guys,

I'm getting this error only on my MacOSX version of AMICO. Same code, same data runs fine on Linux.

Traceback (most recent call last):
  File "NODDI_kaitlyn.py", line 48, in <module>
    ae.fit()
  File "/Users/desm2239/Research/Source/AMICO/python/amico/core.py", line 304, in fit
    MAPs[ix,iy,iz,:], DIRs[ix,iy,iz,:], x, A = self.model.fit( y, dirs.reshape(-1,3), self.KERNELS, self.get_config('solver_params') )
  File "/Users/desm2239/Research/Source/AMICO/python/amico/models.py", line 555, in fit
    A[:,:-1] = KERNELS['wm'][:,i1,i2,:].T
IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices

Sounds familiar? I get this with numpy 1.13.1 and 1.14.

Thanks

amico.core.setup() error

I was having trouble installing spams, but resolved it via the solution provided for issue #68.

However, when I do:

amico.core.setup()

I get this error:

AttributeError: 'module' object has no attribute 'core'

How can I resolve this error?

Thank you!

Updates to __version__ in setup.py

Could you please use versioning with your bug fixes and updates? AMICO is still listed as 1.0 despite many commits since I last used the software. This is important for citing AMICO in manuscripts, as well as for upgrading/maintaining with pip.

Thank you!

load kernels issue

The code worked when I ran it on a single-shell image. I got this error when I merged multiple shell images into a single merged image. Any ideas where things might have gone wrong?

      1 ae.set_model("NODDI")
      2 ae.generate_kernels()
----> 3 ae.load_kernels()
      4 ae.fit()
      5 ae.save_results()

~/programs/python-packages/amico3/amico/core.py in load_kernels(self)
    252 
    253         # Dispatch to the right handler for each model
--> 254         self.KERNELS = self.model.resample( self.get_config('ATOMS_path'), idx_OUT, Ylm_OUT, self.get_config('doMergeB0') )
    255 
    256         print('   [ %.1f seconds ]' % ( time.time() - tic ))

~/programs/python-packages/amico3/amico/models.py in resample(self, in_path, idx_out, Ylm_out, doMergeB0)
    554                 lm = np.load( pjoin( in_path, 'A_%03d.npy'%progress.i ) )
    555                 idx = progress.i - 1
--> 556                 KERNELS['wm'][idx,:,:,:] = amico.lut.resample_kernel( lm, self.scheme.nS, idx_out, Ylm_out, False )[:,:,merge_idx]
    557                 KERNELS['kappa'][idx] = 1.0 / np.tan( self.IC_ODs[i]*np.pi/2.0 )
    558                 KERNELS['icvf'][idx]  = self.IC_VFs[j]

~/programs/python-packages/amico3/amico/lut.py in resample_kernel(KRlm, nS, idx_out, Ylm_out, is_isotropic)
    199         for ox in range(181) :
    200             for oy in range(181) :
--> 201                 KR[ox,oy,idx_out] = np.dot( Ylm_out, KRlm[ox,oy,:] ).astype(np.float32)
    202     else :
    203         KR = np.ones( nS, dtype=np.float32 )

ValueError: shapes (99,273) and (91,) not aligned: 273 (dim 1) != 91 (dim 0)

This is the information of the merged file:


-> Loading data:
	* DWI signal...
		- dim    = 96 x 96 x 60 x 109
		- pixdim = 2.500 x 2.500 x 2.500
	* Acquisition scheme...
		- 109 samples, 3 shells
		- 10 @ b=0 , 60 @ b=2400.0 , 9 @ b=300.0 , 30 @ b=800.0 
	* Binary mask...
		- dim    = 96 x 96 x 60
		- pixdim = 2.500 x 2.500 x 2.500
		- voxels = 109392

-> Preprocessing:
	* Normalizing to b0... [ min=0.00,  mean=0.65, max=40.00 ]
	* Keeping all b0 volume(s)...
   [ 1.5 seconds ]

Temporary Solution to SPAMS Multi-Threading Issue Affecting AMICO Computing Time.

As discussed in both issues #17 and #55, the SPAMS multi-threading issue will use more cores while increasing the total processing time. For example, my experience on a 1.75 mm isotropic human brain gave the following numbers to compute NODDI with AMICO:

  • 1 thread: 6m 33s;
  • 2 threads: 6m 47s;
  • 3 threads: 07m 09s;
  • default (all of my i7 8 threads): about 15m.

Here's how I managed to control the number of cores by adding a couple of code lines to my script:

import os
import amico
...
nb_threads = 1
os.environ["OMP_NUM_THREADS"] = str(nb_threads)  # Has impact on a supercomputer
os.environ["MKL_NUM_THREADS"] = str(nb_threads)  # Has impact both on supercomputer and personal computer if you are using MKL

amico_eval = amico.Evaluation(data_dir, subject, output_path)
...  # Set models and parameters
solver_params = amico_eval.get_config('solver_params')
solver_params['numThreads'] = nb_threads
amico_eval.set_config('solver_params', solver_params)

amico_eval.generate_kernels()
amico_eval.load_kernels()
amico_eval.fit()

I think this is going to be necessary until SPAMS solves the issue, and then maybe we'll have even better computing times!

Thanks to @daducci and Félix Morency who helped me with this issue!

Reduce number of cores

Hi!

How can I reduce the number of CPUs used for AMICO (python version) ?

Thank you
Arnaud

[HELP WANTED] ActiveAx with 2D qspace sampling

Hi,

I'm from NeuroPoly (Polytechnique Montréal).
I'm doing spinal cord microstructure parameter estimation.

I would like to use AMICO with ActiveAx. However, my q-space is 2D, with all diffusion perpendicular to the fibers. Is it possible to:
(1) fix the orientations theta/phi
OR
(2) specify my own modelfit function (that handles 2D qspace) written in MATLAB

Best regards
Tom Mingasson

SPAMS is now available on PyPI

Hello, this is not really an issue but information that could facilitate the installation of your library: SPAMS is now available on the official PyPI repository (and not just on the test server).
Best

Error : Problems generating the signal with datasynth

Hi, Alessandro,

Thank you for this tool! It's wonderful.
I ran into the error "Problems generating the signal with datasynth" when I processed the tutorial data. The error was exactly the same as SophieSebille described in her post. The scheme file and the binary mask were downloaded from the website (https://github.com/daducci/AMICO/tree/master/matlab/doc/demos/ActiveAx). The path to Java was added successfully. We are running Matlab 2016a. I tried to run "datasynth -synthmodel compartment 1 CYLINDERGPD ..." on the command line; there was no error, but also no output.
Could you help us to resolve this issue?
FYI, I performed NODDI with the software without any problem before.
Thank you!

Hu

Problems generating the signal with datasynth

Hi,
I am trying to run the ActiveAx tutorial and getting the following error:

Error using AMICO_ACTIVEAX/GenerateKernels (line 69)
[AMICO_ACTIVEAX.GenerateKernels] Problems generating the signal with datasynth

Error in AMICO_GenerateKernels (line 72) CONFIG.model.GenerateKernels( ATOMS_path, schemeHR, AUX, idx_IN, idx_OUT );

Otherwise, I am using CAMINO, and generating synthetic data with 'datasynth' directly works fine. I am using MATLAB on Red Hat Linux. It is probably related to calling CAMINO commands from MATLAB.
Regards

fsl2scheme delimiter bug

Hi,
I am using fsl2scheme to create the scheme file, but I found that when using np.loadtxt, it is better not to define the delimiter the way the code does (delimiter is optional via kwargs):

if kwargs:
    delimiter = kwargs.get('delimiter')
# load files and check size
bvecs = np.loadtxt( bvecsFilename, delimiter=delimiter )
bvals = np.loadtxt( bvalsFilename, delimiter=delimiter )

If we do not pass delimiter in the kwargs, this gives an error, because delimiter is then never assigned before it is used in the np.loadtxt calls. So just do not use this parameter:

# if kwargs:
#     delimiter = kwargs.get('delimiter')  ### changed by Hao: this should not be used for reading bvecs and bvals

# load files and check size
bvecs = np.loadtxt( bvecsFilename )
bvals = np.loadtxt( bvalsFilename )

I do not know why this was added; I think it was not there before, unless it was done on purpose.

Thanks in advance

Hao
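An alternative to deleting the parameter is to give delimiter a default of None (np.loadtxt's own default, meaning "split on any whitespace"), so the variable always exists. A sketch assuming the same kwargs interface (load_fsl_files is a hypothetical stand-in for the fsl2scheme loading step):

```python
import io

import numpy as np

def load_fsl_files(bvecsFilename, bvalsFilename, **kwargs):
    # None means "split on any whitespace", and the variable is
    # always defined even when no kwargs are passed at all.
    delimiter = kwargs.get('delimiter', None)
    bvecs = np.loadtxt(bvecsFilename, delimiter=delimiter)
    bvals = np.loadtxt(bvalsFilename, delimiter=delimiter)
    return bvecs, bvals

# Whitespace-separated data, no delimiter kwarg: no NameError, loads fine
bvecs, bvals = load_fsl_files(io.StringIO(u"0 0 1\n0 1 0"),
                              io.StringIO(u"0 1000"))
```

This keeps the comma-separated use case working while fixing the no-kwargs path.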

Ex-vivo imaging

Dear Daducci et al.,

Firstly, thank you for your excellent contribution.

Is it possible to implement the NODDI model 'WatsonSHStickTortIsoVIsoDot_B0' which includes an isotropically restricted compartment for ex-vivo imaging?

Regards,

David

Feature request: Progress bar

Congratulations on the fantastic toolbox. I am amazed by the speed of model fitting for NODDI and, since the results can be ready in a short time, I find myself staring at the monitor until I see the [DONE] flag. I have put a text-only progress indicator in AMICO_NODDI.m, but of course subsequent git pulls overwrite my changes.

May I suggest dispstat for this purpose:
(http://www.mathworks.com/matlabcentral/fileexchange/44673-overwritable-message-outputs-to-commandline-window)

but really, any type of text-based progress indicator would do the job.

Thanks again for the great tool.

Docker / singularity containers

Just to let people know, I have created scripts for creating Docker and Singularity containers with AMICO and dependencies.

Scripts are hosted here.

If the docker engine is installed:
docker pull maladmin/amico:latest will download the docker container.

Obviously I'm open to any suggestions for improvements.

Small variation in b-values interpreted as unique shells

Hi,

I'm trying to run NODDI on some data acquired using a Human Connectome Project Lifespan protocol. It has three shells at b=(5, 1500, 3000) s / mm^2. The bvals file output from the HCP pre-processing pipelines contain the estimated actual b-values that were applied. These vary by +/- 15 s / mm^2 from the nominal value.

So, where the scanner was told to produce measurements at b=

5
1500
3000
1500
3000

You might get b=

5
1495
3010
1500
2995

I've set all the b=5 values to 0, otherwise NODDI fails. My question is about the other values. Should I be concerned about this when running NODDI within AMICO? When I run, it says I have 14 shells.

Also, the fluctuation varies between scans, so a b-value of 3000 in one might be 2995 or 3005 in another scan. Would it be advisable to set the b-values to their nominal values?

Thanks

error with ae.fit

Hello,

I'm having issues running ae.fit(). Here's the output:

"-> Fitting "NODDI" model to 207694 voxels:
[ ] 0.0%---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
in ()
----> 1 NODDI_fxn("1_002")

/N/dc2/projects/lifebid/Concussion/concussion_test/bin/NODDI_fxn.py in NODDI_fxn(subject)
9 ae.generate_kernels( regenerate = True )
10 ae.load_kernels()
---> 11 ae.fit()
12 ae.save_results()
13 print("NODDI model generation complete")

/N/dc2/projects/lifebid/code/bacaron/AMICO/amico/core.pyc in fit(self)
325
326 # dispatch to the right handler for each model
--> 327 MAPs[ix,iy,iz,:], DIRs[ix,iy,iz,:], x, A = self.model.fit( y, dirs.reshape(-1,3), self.KERNELS, self.get_config('solver_params') )
328
329 # compute fitting error

/N/dc2/projects/lifebid/code/bacaron/AMICO/amico/models.pyc in fit(self, y, dirs, KERNELS, params)
579 i1, i2 = amico.lut.dir_TO_lut_idx( dirs[0] )
580 A = np.ones( (len(y), nATOMS), dtype=np.float64, order='F' )
--> 581 A[:,:nWM] = KERNELS['wm'][:,i1,i2,:].T
582 A[:,-1] = KERNELS['iso']
583

IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices"

Not exactly sure what the issue is. Any help would be greatly appreciated!

Thank you,
Brad Caron

error when import amico

Hello! I ran into this problem when I tried to import amico, and I don't know the reason. Could you tell me where this module named 'core' should be? I found several folders, but none of them contained 'Evaluation'.

$ python
Python 3.6.1 |Anaconda 4.4.0 (x86_64)| (default, May 11 2017, 13:04:09)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.

>>> import amico
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/.../anaconda/lib/python3.6/site-packages/amico/__init__.py", line 1, in <module>
    from core import Evaluation
ModuleNotFoundError: No module named 'core'

thank you!

ImportError: No module named _spams_wrap

Hi,
When I import spams, I get this error; can you tell me where it's going wrong?
ImportError Traceback (most recent call last)
in ()
----> 1 import spams

/home/liuna/src/spams-2.6.1/spams.py in <module>()
7 import six.moves
8
----> 9 import spams_wrap
10 import numpy as np
11 import scipy.sparse as ssp

/home/liuna/src/spams-2.6.1/spams_wrap.py in ()
22 except ImportError:
23 return importlib.import_module('_spams_wrap')
---> 24 _spams_wrap = swig_import_helper()
25 del swig_import_helper
26 elif _swig_python_version_info >= (2, 6, 0):

/home/liuna/src/spams-2.6.1/spams_wrap.py in swig_import_helper()
21 return importlib.import_module(mname)
22 except ImportError:
---> 23 return importlib.import_module('_spams_wrap')
24 _spams_wrap = swig_import_helper()
25 del swig_import_helper

/usr/lib/python2.7/importlib/__init__.pyc in import_module(name, package)
35 level += 1
36 name = _resolve_name(name[level:], package, level)
---> 37 __import__(name)
38 return sys.modules[name]

ImportError: No module named _spams_wrap

thx

ImportError: No module named amico

Hi,

I used conda and created a virtual environment (Python 2.7), and installed dipy, spams, camino and amico. I did not get any errors during installation.
When trying to import amico, I got this error:

(dipy-amico) /amico$ python
Python 2.7.15 |Anaconda, Inc.| (default, Oct 23 2018, 18:31:10)
[GCC 7.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> import amico
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named amico

I also tried adding the amico path to PYTHONPATH, but it didn't work.
I examined the site-packages directory and found a directory named "amico-1.0.dist-info", but there were no .py files in it. Does this mean the installation failed?
What should I do?

Thanks!

Improper NODDI and ActiveAx fitting

When I run NODDI and ActiveAx on one computer, I get images that are completely wrong. For example, with the original NODDI file, ODI comes out around 0.3 for most voxels except the CSF. I traced the problem to generate_kernels(). When I run the program with kernels generated with Python 2.7.15 and anaconda2-5.1.0 on a different computer, I get good results.

I use anaconda2-5.1.0. I have reinstalled python-spams, amico, and camino, but nothing has worked. I don't think the problem is the Python version, because it used to work, but I am wondering if there is a setting in one of these programs that affects the output of ae.generate_kernels().

Thank you for any help you can offer

Computing bundle specific axonal diameter

Axonal diameter is computed as a = 1E6 * 2.0 * np.dot(self.Rs,xIC) / ( f1 + 1e-16 ), which normalises by the intracellular volume fraction.

Bundle-specific diameter should instead be normalised by each peak's own sum of IC weights, which would be obtained by setting:
xIC = x[:nD*n1].reshape(-1,n1)
a = np.dot(Rs,xIC.T)/(xIC.sum(axis=1) + 1e-16)
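A hedged sketch of the proposed per-bundle computation (shapes and radii are made up; it assumes x stores the nD blocks of n1 intra-cellular weights first, as in the snippet above):

```python
import numpy as np

nD, n1 = 2, 4                          # assumed: 2 peaks, 4 cylinder radii
rng = np.random.default_rng(0)
Rs = np.linspace(0.5e-6, 8.0e-6, n1)   # cylinder radii in meters (made up)
x = rng.random(nD * n1 + 5)            # IC weights first, then other atoms

xIC = x[:nD * n1].reshape(-1, n1)      # one row of IC weights per direction
# diameter per bundle, each normalized by that peak's own sum of IC weights
a = 1e6 * 2.0 * np.dot(Rs, xIC.T) / (xIC.sum(axis=1) + 1e-16)
```

np.dot(Rs, xIC.T) has shape (nD,), so a holds one diameter per peak rather than a single global value.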

AMICO fitting time

Dear Dr Daducci:

I'm a Japanese radiologist starting NODDI research using the AMICO (Matlab version) approach. I'm now working on the sample data set provided in the NODDI fitting tutorial. Following the instructions, I was able to generate the final maps, but the time for fitting the NODDI model (the last step) is much longer than expected (2 min 15 sec vs. the expected 15 sec). Could you please give me any suggestions? I'm using a 12-core workstation.
Thank you in advance.

AMICO Maps histograms general question- Not an issue

Hello Dr. Daducci,

This is not an issue but a general question regarding the histograms for the generated AMICO maps. There are spikes showing up at certain intervals in the FICVF and OD histograms, as attached. When we threshold the image at these values, the data seem to be spread across the brain.

Can you please advise whether this is expected?

Thanks,
Prasanna

[attached image: amico_histograms]

Computation of Kernels

Hello,

We have a question about computing kernels. The documentation indicates that, given the data structure set-up, kernels will be computed only once per study to save time. However, if different scans in the same project are run with different protocols, does the software automatically detect the change in protocol and recompute the kernels if needed?

Appreciate your assistance.

Thanks,
Prasanna

value discretization

Has anyone had any luck tracking this discretization issue down and/or determined whether there is a way to solve it with post-processing? I included my original post from the @goals2008 thread below. There was mention that @ejcanalesr might have a fix. I am happy to help out and/or test.

Cheers,
Chris

--- original post with linked images --
I have run into the same issue with discretization in the ICVF and ODI (running the Python version of AMICO). Has there been any progress on optimal parameters or the method by @ejcanalesr?
Chris
Screenshot of AMICO-calculated OD versus that calculated by the original (non-AMICO) NODDI MATLAB toolbox. The discretizations appear at values very close to 0.19, 0.29, 0.39...
screenshot from 2016-04-06 20 23 42
https://cloud.githubusercontent.com/assets/2734273/14360361/fe956950-fcc3-11e5-8e52-012a451fa7eb.png

Same comparison, with mean Gaussian smoothing of the AMICO OD output; kernel size from left to right = 0.75, 1, 2 mm (data resolution = 2x2x2 mm)

screenshot from 2016-04-07 13 40 22
https://cloud.githubusercontent.com/assets/2734273/14360997/db57131e-fcc6-11e5-8475-1c6277c708b0.png
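
The smoothing experiment described above can be reproduced with a standard Gaussian filter. A hedged sketch, assuming the OD map is loaded as a NumPy volume; sigma is specified in voxels, so roughly 1 mm on 2 mm isotropic data corresponds to sigma = 0.5. This only masks the discretization visually; it is post-processing, not a fix for the underlying fitting behavior:

```python
# Hedged sketch: Gaussian smoothing of an OD volume as post-processing.
# Illustrative sigma; this does not address the root cause of the
# value discretization, it only blurs it.
import numpy as np
from scipy.ndimage import gaussian_filter

od = np.random.rand(32, 32, 16)        # stand-in for the loaded OD volume
od_smooth = gaussian_filter(od, sigma=0.5)
```

In practice the volume would come from `nibabel` (load, smooth, save back with the original affine), but the filtering step itself is just the one call above.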

development ceased?

Dear Developers,

On behalf of one of our users I would like to ask:

  • has development ceased? (There are no recent commits, and issues as well as PRs seem not to be addressed.)
  • in order to support a given software package on a cluster, there needs to be a release one can refer to (compare https://easybuild.readthedocs.io/en/latest/ ); would you be willing to host amico at the Python Package Index and/or create a GitHub release?

Best regards,
Christian Meesters

Dimension mismatch error when isExvivo set to True

Hello,

I am using the Python version of the AMICO toolbox to run NODDI on ex vivo images. When I set the isExvivo flag to True and try to perform the fitting step, I encounter the following error (in core.py):

line 336, in fit
MAPs[ix,iy,iz,:], DIRs[ix,iy,iz,:], x, A = self.model.fit( y, dirs.reshape(-1,3), self.KERNELS, self.get_config('solver_params') )
ValueError: cannot copy sequence with size 4 to array axis with dimension 3

From my interpretation, 'MAPs' is being assigned a 4-element array although its last dimension is only 3. Looking a bit closer, in my case at least, I think this is because for ex vivo data the extra 'dot' map is also output in models.py, giving 4 total values returned to the fitting function:

if self.isExvivo:
            return [v, od, fISO, xx[-2]], dirs, x, A

Am I doing something wrong, or perhaps I don't have the correct version installed?
If I am not interested in the 'dot' map for the time being and I delete the 4th element in the array in the above line to circumvent this error, will that disrupt how the ex vivo model is implemented in the rest of the code?

Thanks very much.
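
The mismatch described above can be illustrated with toy shapes: the ex vivo variant returns 4 maps (v, od, fISO plus the extra 'dot' fraction), so the pre-allocated output array needs a 4th map slot. A hedged sketch with illustrative names and shapes, not the actual core.py fix:

```python
# Hedged sketch of the shape mismatch: sizing the output array by the
# number of maps the model returns. Names loosely mirror core.py.
import numpy as np

dims = (2, 2, 2)                  # toy volume dimensions
is_exvivo = True
n_maps = 4 if is_exvivo else 3    # ex vivo returns the extra 'dot' map
MAPs = np.zeros(dims + (n_maps,))

# With 4 slots, assigning a 4-element result no longer raises
# "cannot copy sequence with size 4 to array axis with dimension 3":
MAPs[0, 0, 0, :] = [0.5, 0.3, 0.1, 0.05]
```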

Black output MATLAB

Hi, I am a new user of AMICO for analysis of NODDI data that I am acquiring. As far as I can see, I have successfully installed all dependencies, and the whole analysis pipeline runs smoothly in MATLAB with no errors. However, when I look at the output, only the *_DIR image has any content; all the others are completely black, with 0s everywhere. I suspect there is some dependency issue going on here, but since I don't get any errors during the fitting, I am unsure of where to start troubleshooting.

Do you have any suggestions for how to troubleshoot this? Can I perhaps verify the scripts from the SPAMS installation in some way? I have tried both my own data and the sample data set provided here, but with no success so far.

All help is greatly appreciated. Thanks a lot for making this tool available open source and putting up such a great tutorial, it is very appreciated!

Best
Emil Ljungberg
Medical Physics, University of British Columbia.

Strange output

Dear Dr Daducci:

I am a radiologist in China, and I am now using NODDI toolbox for my research.

I tried your AMICO on the demo data, and everything was OK. When I used it on my own data, the output files *_dir and *_OD are correct, but *_ICVF is very strange: the value of every voxel is 0.99.
fit_icvf

I then processed the data again with the original NODDI toolbox, and the result was correct.

So would you help me find out what is wrong?

Thanks

Weijun Tang

error in fit() with numpy 1.10

After upgrading numpy from 1.9.3 to 1.10, I got an error as follows:

In [8]: ae.fit()

-> Fitting "NODDI" model to 290353 voxels:
   [                                                  ]   0.0%---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-8-fad37d61652c> in <module>()
----> 1 ae.fit()

/Users/takahashi/anaconda/envs/amico/lib/python2.7/site-packages/amico/core.pyc in fit(self)
    298
    299                     # dispatch to the right handler for each model
--> 300                     MAPs[ix,iy,iz,:], DIRs[ix,iy,iz,:], x, A = self.model.fit( y, dirs.reshape(-1,3), self.KERNELS, self.get_config('solver_params') )
    301
    302                     # compute fitting error

/Users/takahashi/anaconda/envs/amico/lib/python2.7/site-packages/amico/models.pyc in fit(self, y, dirs, KERNELS, params)
    557         An = A[ self.scheme.dwi_idx, :-1 ] * KERNELS['norms']
    558         yy = yy[ self.scheme.dwi_idx ].reshape(-1,1)
--> 559         x = spams.lasso( np.asfortranarray(yy), D=np.asfortranarray(An), **params ).todense().A1
    560
    561         # debias coefficients

/Users/takahashi/anaconda/envs/amico/lib/python2.7/site-packages/spams.pyc in lasso(X, D, Q, q, return_reg_path, L, lambda1, lambda2, mode, pos, ols, numThreads, max_length_path, verbose, cholesky)
    437             ((indptr,indices,data,shape),path) = spams_wrap.lassoD(X,D,return_reg_path,L,lambda1,lambda2,mode,pos,ols,numThreads,max_length_path,verbose,cholesky)
    438         else:
--> 439             (indptr,indices,data,shape) = spams_wrap.lassoD(X,D,return_reg_path,L,lambda1,lambda2,mode,pos,ols,numThreads,max_length_path,verbose,cholesky)
    440     alpha = ssp.csc_matrix((data,indices,indptr),shape)
    441     if return_reg_path:

/Users/takahashi/anaconda/envs/amico/lib/python2.7/site-packages/spams_wrap.pyc in lassoD(*args)
    199         bool cholevsky) -> SpMatrix<(float)>
    200     """
--> 201   return _spams_wrap.lassoD(*args)
    202
    203 def lassoQq(*args):

RuntimeError: matrix arg 1 must be a 2d double Fortran Array
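
The traceback above says `spams.lasso` received something other than a 2-D, double-precision, Fortran-ordered array. A hedged sketch of the kind of explicit cast that avoids this, assuming the intermediate ends up float32 under the newer NumPy; this is illustrative, not the upstream fix:

```python
# Hedged sketch: force the exact array layout spams.lasso expects
# (2-D, float64, Fortran order) before the call.
import numpy as np

yy = np.random.rand(64).astype(np.float32)            # what the pipeline may produce
yy = np.asfortranarray(yy.reshape(-1, 1), dtype=np.float64)
```

`np.asfortranarray` with an explicit `dtype` handles both the ordering and the precision in one step.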

Fit() error when NODDI's isExvivo flag set to True

When processing NODDI's demo data with the isExVivo flag set to True, NODDI's fit() method crashes at line 553 with the message:
ValueError: could not broadcast input array from shape (81,144) into shape (81,145)

help on ae.fit()

Hello,
On Ubuntu 16.04, I wanted to install AMICO.
After a complete installation,
I get this error in use:
"
ae.fit()
File "/usr/local/lib/python2.7/dist-packages/amico/core.py", line 300, in fit
MAPs[ix,iy,iz,:], DIRs[ix,iy,iz,:], x, A = self.model.fit( y, dirs.reshape(-1,3), self.KERNELS, self.get_config('solver_params') )
File "/usr/local/lib/python2.7/dist-packages/amico/models.py", line 553, in fit
A[:,:-1] = KERNELS['wm'][:,i1,i2,:].T
IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices
"
Can you help me?
Thank you

Gérard
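
The IndexError in the traceback above is the behavior of recent NumPy versions, which reject float array indices. A hedged sketch of the pattern involved, with toy shapes; casting the indices to `int` (illustrative, not necessarily where the upstream fix belongs) restores the old behavior:

```python
# Hedged sketch: NumPy >= 1.11 raises IndexError for float indices,
# which is what the traceback above shows for i1/i2.
import numpy as np

wm = np.zeros((10, 4, 4, 3))           # toy stand-in for KERNELS['wm']
i1, i2 = 2.0, 3.0                      # floats coming out of a computation

# wm[:, i1, i2, :] would raise IndexError on recent NumPy;
# an explicit cast avoids it:
sub = wm[:, int(i1), int(i2), :]
```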
