intensity-normalization

This package contains various methods to normalize the intensity of magnetic resonance (MR) images across modalities, e.g., T1-weighted (T1-w), T2-weighted (T2-w), FLuid-Attenuated Inversion Recovery (FLAIR), and Proton Density-weighted (PD-w).

The basic functionality of this package can be summarized in the following image:

[figure: intensity histograms for a set of images before (left) and after (right) FCM normalization]

where the left-hand side shows the histograms of the intensities for a set of unnormalized images (from the same scanner with the same protocol!) and the right-hand side shows the histograms after (FCM) normalization.

We used this package to explore the impact of intensity normalization on a synthesis task (pre-print available here).

Note that while this release was carefully inspected, there may be bugs. Please submit an issue if you encounter a problem.

Methods

We implement the following normalization methods (the names of the corresponding command-line interfaces are to the right in parentheses):

Individual time-point normalization methods

  • Z-score normalization (zscore-normalize)
  • Fuzzy C-means (FCM)-based tissue mean normalization (fcm-normalize)
  • Kernel Density Estimate (KDE) WM mode normalization (kde-normalize)
  • WhiteStripe [1] (ws-normalize)

Sample-based normalization methods

  • Least squares (LSQ) tissue mean normalization (lsq-normalize)
  • Piecewise Linear Histogram Matching (Nyúl & Udupa) [2,3] (nyul-normalize)
  • RAVEL [4] (ravel-normalize)

Individual image-based methods normalize images based on one time-point of one subject.

Sample-based methods normalize images based on a set of images of (usually) multiple subjects of the same modality.

Recommendation on where to start: if you are unsure which method to choose for your application, try FCM-based WM mean normalization (assuming you have access to a T1-w image for all the time-points). If you are getting odd results in non-WM tissues, try least squares tissue mean normalization (which minimizes the least squares distance between the CSF, GM, and WM tissue means within a set).

Read about the methods and how they work. If you have a non-standard modality, e.g., a contrast-enhanced image, read about how the methods work and determine which method would work for your use case. Make sure you plot the foreground intensities (with the -p option in the CLI or the HistogramPlotter in the Python API) to validate the normalization results.

All algorithms except Z-score (zscore-normalize) and the Piecewise Linear Histogram Matching (nyul-normalize) are specific to images of the brain.
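
For a quick sanity check without the CLI, a few lines of matplotlib can stand in for the -p option mentioned above: plot the foreground (masked) intensity histograms of each image and confirm that the peaks line up after normalization. This is only a plain numpy/matplotlib sketch with placeholder file names, not the package's HistogramPlotter.

# Stand-in for the -p / HistogramPlotter check: overlay foreground intensity
# histograms so you can see whether the tissue peaks align after normalization.
# File names are placeholders.
import matplotlib.pyplot as plt
import nibabel as nib

image_paths = ["subj1_t1.nii.gz", "subj2_t1.nii.gz"]      # placeholder paths
mask_paths = ["subj1_mask.nii.gz", "subj2_mask.nii.gz"]

fig, ax = plt.subplots()
for img_path, mask_path in zip(image_paths, mask_paths):
    data = nib.load(img_path).get_fdata()
    mask = nib.load(mask_path).get_fdata() > 0
    foreground = data[mask]                               # intensities inside the brain mask
    ax.hist(foreground, bins=200, histtype="step", density=True, label=img_path)
ax.set_xlabel("intensity")
ax.set_ylabel("density")
ax.legend()
fig.savefig("foreground_histograms.png")                  # inspect after each normalization step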

Motivation

Intensity normalization is an important pre-processing step in many image processing applications for MR images, since MR images have an inconsistent intensity scale across (and within) sites and scanners due to, e.g.:

  1. the use of different equipment,
  2. different pulse sequences and scan parameters,
  3. and a different environment in which the machine is located.

Importantly, the inconsistency in intensities isn't a feature of the data (unless you want to classify the scanner/site from which an image came)—it's an artifact of the acquisition process. The inconsistency causes a problem with machine learning-based image processing methods, which usually assume the data was gathered iid from some distribution.

Install

The easiest way to install the package is through the following command:

pip install intensity-normalization

To install from the source directory, clone the repo and run:

python setup.py install

Note that the package antspy is required for the RAVEL normalization routine, the preprocessing tool, and the co-registration tool; all other normalization and processing tools work without it. To install the antspy package along with the RAVEL, preprocessing, and co-registration CLIs, install with:

pip install "intensity-normalization[ants]"

Basic Usage

See the 5 minute overview for a more detailed tutorial.

In addition to the above small tutorial, here is consolidated documentation.

Call any executable script with the -h flag to see more detailed instructions about the proper call.

Note that brain masks (or already skull-stripped images) are required for most of the normalization methods. The brain masks do not need to be perfect, but each mask needs to remove most of the tissue outside the brain. Assuming you have T1-w images for each subject, an easy and robust method for skull-stripping is ROBEX [5].

If the images are already skull-stripped, you don't need to provide a brain mask. The foreground will be automatically estimated and used.

You can install ROBEX (and get Python bindings for it at the same time) with the package pyrobex (installable via pip install pyrobex).
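
As an aside on the automatic foreground estimation mentioned above: the README does not spell out how the foreground is found when no mask is given. A common heuristic, shown below only as an illustration (it is an assumption, not necessarily the package's exact rule), is to threshold the skull-stripped image at the mean of its non-zero voxels.

# Illustrative foreground estimation for an already skull-stripped image:
# keep voxels above the mean of the non-zero intensities. This is a common
# heuristic, not necessarily the exact rule intensity-normalization uses.
import nibabel as nib
import numpy as np

image = nib.load("skull_stripped_t1.nii.gz")     # placeholder path
data = image.get_fdata()
nonzero = data[data > 0]
threshold = nonzero.mean() if nonzero.size else 0.0
foreground = data > threshold                    # boolean mask used in place of a brain mask
print(f"foreground voxels: {int(foreground.sum())}")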

Individual time-point normalization methods

Example call to an individual time-point normalization CLI:

fcm-normalize t1w_image.nii -m brain_mask.nii
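
Conceptually, FCM-based normalization estimates a white-matter (WM) membership from the T1-w image, computes the (weighted) mean WM intensity, and scales the image so that mean lands at a chosen constant. The sketch below assumes a WM membership (or binary WM mask) image already exists; it is a conceptual illustration with placeholder paths and norm value, not the package's implementation.

# Conceptual WM-mean normalization: scale the image so the mean white-matter
# intensity equals norm_value. Assumes a WM membership (or binary WM mask)
# image is already available; paths and norm_value are placeholders.
import nibabel as nib
import numpy as np

norm_value = 1.0
image = nib.load("t1w_image.nii")
wm = nib.load("wm_membership.nii").get_fdata()   # WM probability or binary mask
data = image.get_fdata()

wm_mean = np.average(data, weights=wm)           # weighted mean WM intensity
normalized = data * (norm_value / wm_mean)

nib.save(nib.Nifti1Image(normalized, image.affine, image.header),
         "t1w_image_fcm_norm.nii")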

Sample-based normalization methods

Example call to a sample-based normalization CLI:

nyul-normalize images/ -m masks/ -o nyul_normalized/ -v

where images/ is a directory full of N MR images and masks/ is a directory full of N corresponding brain masks, nyul_normalized/ is the output directory for the normalized images, and -v controls the verbosity of the output.

The command line interface is standard across all sample-based normalization routines (i.e., you should be able to run all sample-based normalization routines with the same call as in the above example); however, each has unique method-specific options.
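
For intuition about what a sample-based method like nyul-normalize learns, the piecewise linear histogram matching idea can be sketched in plain numpy: compute intensity percentiles (landmarks) over the foreground of every training image, average them into a standard scale, and then map each image onto that scale with linear interpolation. The percentile choices and paths below are illustrative assumptions, not the package's exact defaults.

# Simplified Nyul-style piecewise linear histogram matching (illustrative only).
import nibabel as nib
import numpy as np

PERCENTILES = np.array([1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 99])

def foreground_landmarks(image_path, mask_path):
    data = nib.load(image_path).get_fdata()
    mask = nib.load(mask_path).get_fdata() > 0
    return np.percentile(data[mask], PERCENTILES)

image_paths = ["images/img1.nii.gz", "images/img2.nii.gz"]   # placeholders
mask_paths = ["masks/msk1.nii.gz", "masks/msk2.nii.gz"]

# "Fit": average the landmarks over the sample to get the standard scale.
standard_scale = np.mean(
    [foreground_landmarks(i, m) for i, m in zip(image_paths, mask_paths)], axis=0
)

# "Transform": map one image's landmarks onto the standard scale.
data = nib.load(image_paths[0]).get_fdata()
mask = nib.load(mask_paths[0]).get_fdata() > 0
landmarks = np.percentile(data[mask], PERCENTILES)
normalized = np.interp(data, landmarks, standard_scale)      # piecewise linear mapping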

Potential Pitfalls

  1. This package was developed to process adult human MR images; neonatal, pediatric, and animal MR images should also work but—if the data has different proportions of tissues or differences in relative intensity among tissue types compared with adults—the normalization may fail. The nyul-normalize method, in particular, will fail hard if you train it on adult data and test it on non-adult data (or vice versa). Please open an issue if you encounter a problem with the package when normalizing non-adult human data.
  2. When we refer to any specific modality, we are referring to a non-contrast version unless otherwise stated. Using a contrast image as input to a method that assumes non-contrast will produce suboptimal results. One potential way to normalize contrast images with this package is to 1) find a tissue that is not affected by the contrast (e.g., grey matter) and normalize based on some summary statistic of that (where the tissue mask was found on a non-contrast image), or 2) use a simplistic (but non-robust) method like Z-score normalization (sketched after this list).

    Read about the methods and how they work to decide which method would work best for your contrast-enhanced images.
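
Option (2) in the list above is simple enough to spell out. A minimal sketch of Z-score normalization over a brain mask (the same idea zscore-normalize implements, shown here only conceptually with placeholder paths):

# Conceptual Z-score normalization of a (possibly contrast-enhanced) image:
# subtract the mean and divide by the standard deviation of the intensities
# inside the brain mask. Paths are placeholders.
import nibabel as nib
import numpy as np

image = nib.load("t1c_image.nii.gz")             # e.g., a contrast-enhanced T1-w image
mask = nib.load("brain_mask.nii.gz").get_fdata() > 0
data = image.get_fdata()

masked = data[mask]
normalized = (data - masked.mean()) / masked.std()

nib.save(nib.Nifti1Image(normalized, image.affine, image.header),
         "t1c_zscore_norm.nii.gz")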

Contributing

Help wanted! See CONTRIBUTING.rst for details and/or reach out to me if you'd like to contribute. Credit will be given! If you want to add a method, I'll be happy to add your reference to the citation section below.

Test Package

Unit tests can be run from the main directory as follows:

pytest tests

Citation

If you use the intensity-normalization package in an academic paper, please cite the corresponding paper:

@inproceedings{reinhold2019evaluating,
  title={Evaluating the impact of intensity normalization on {MR} image synthesis},
  author={Reinhold, Jacob C and Dewey, Blake E and Carass, Aaron and Prince, Jerry L},
  booktitle={Medical Imaging 2019: Image Processing},
  volume={10949},
  pages={109493H},
  year={2019},
  organization={International Society for Optics and Photonics}}

References


  1. R. T. Shinohara, E. M. Sweeney, J. Goldsmith, N. Shiee, F. J. Mateen, P. A. Calabresi, S. Jarso, D. L. Pham, D. S. Reich, and C. M. Crainiceanu, “Statistical normalization techniques for magnetic resonance imaging,” NeuroImage Clin., vol. 6, pp. 9–19, 2014.

  2. L. G. Nyúl and J. K. Udupa, “On Standardizing the MR Image Intensity Scale,” Magn. Reson. Med., vol. 42, pp. 1072–1081, 1999.

  3. M. Shah, Y. Xiao, N. Subbanna, S. Francis, D. L. Arnold, D. L. Collins, and T. Arbel, “Evaluating intensity normalization on MRIs of human brain with multiple sclerosis,” Med. Image Anal., vol. 15, no. 2, pp. 267–282, 2011.

  4. J. P. Fortin, E. M. Sweeney, J. Muschelli, C. M. Crainiceanu, and R. T. Shinohara, “Removing inter-subject technical variability in magnetic resonance imaging studies,” NeuroImage, vol. 132, pp. 198–212, 2016.

  5. J. E. Iglesias, C.-Y. Liu, P. M. Thompson, and Z. Tu, “Robust brain extraction across datasets and comparison with publicly available methods,” IEEE Trans. Med. Imag., vol. 30, no. 9, pp. 1617–1634, 2011.


intensity-normalization's Issues

Confusing import errors for some CLI tools when ANTsPy isn't installed

🚀 Feature

Motivation

E.g.,

$ coregister
Traceback (most recent call last):
  File "/nix/store/nc01dsawmxfxj4bdyd2l10ib4mm5n6ds-python3.9-intensity-normalization-2.0.2/bin/.coregister-wrapped", line 6, in <module>
    from intensity_normalization.cli import register_main
ImportError: cannot import name 'register_main' from 'intensity_normalization.cli' (/nix/store/nc01dsawmxfxj4bdyd2l10ib4mm5n6ds-python3.9-intensity-normalization-2.0.2/lib/python3.9/site-packages/intensity_normalization/cli.py)

Pitch

Better to detect this situation and print a more helpful message.


Report bug

Hello,

First of all, I congratulate you for this nice tool.

I have a few comments to make :
In the docs you indicate the command line with an "_", but it's a "-":
e.g., zscore_normalize -> zscore-normalize

For the hm method, if I use my image not skull-stripped with a mask versus my image skull-stripped without a mask, I get some differences.

Also, it would be nice if we could disable the log for the count =)

For me the Ravel method doesn't work; it runs for 2 of my images and then I get this error:

2018-10-09 11:07:06,506 - intensity_normalization.exec.ravel_normalize - ERROR - expected str, bytes or os.PathLike object, not NoneType

Traceback (most recent call last):
File "/home/h/Applications/anaconda3/envs/intensity_normalization/lib/python3.6/site-packages/intensity_normalization-1.0.0-py3.6.egg/intensity_normalization/exec/ravel_normalize.py", line 77, in main
mask_fns = io.glob_nii(args.mask_dir)
File "/home/h/Applications/anaconda3/envs/intensity_normalization/lib/python3.6/site-packages/intensity_normalization-1.0.0-py3.6.egg/intensity_normalization/utilities/io.py", line 48, in glob_nii
fns = sorted(glob(os.path.join(dir, '*.nii*')))
File "/home/h/Applications/anaconda3/envs/intensity_normalization/lib/python3.6/posixpath.py", line 80, in join
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
(intensity_normalization) h@h-AERO-15XV8:/data/PythonProjects/standardization$ ravel-normalize -i /media/h/work_data/Patient_data_ST_ANNE/NIFTI_normalization/1_5T_FLAIR/img -o /media/h/work_data/Patient_data_ST_ANNE/NIFTI_normalization/1_5T_FLAIR/ravel -m /media/h/work_data/Patient_data_ST_ANNE/NIFTI_normalization/1_5T_FLAIR/mask -c flair -v -p
2018-10-09 11:07:54,286 - intensity_normalization.exec.ravel_normalize - INFO - Normalizing the images according to RAVEL
2018-10-09 11:07:54,648 - intensity_normalization.normalize.ravel - INFO - Applying WhiteStripe to image XX_1_5T_FLAIR_1mm_register (1/22)
2018-10-09 11:07:55,599 - intensity_normalization.normalize.ravel - INFO - Creating control mask for image XX_1_5T_FLAIR_1mm_register (1/22)
2018-10-09 11:09:06,540 - intensity_normalization.normalize.ravel - INFO - Applying WhiteStripe to image YY_1_5T_FLAIR_1mm_register (2/22)
2018-10-09 11:09:09,474 - intensity_normalization.normalize.ravel - INFO - Starting registration for image YY_1_5T_FLAIR_1mm_register (2/22)
2018-10-09 11:10:00,504 - intensity_normalization.normalize.ravel - INFO - Creating control mask for image YY_1_5T_FLAIR_1mm_register (2/22)
/home/h/Applications/anaconda3/envs/intensity_normalization/lib/python3.6/site-packages/ants/segmentation/atropos.py:141: UserWarning:

ERROR: Non-zero exit status!

2018-10-09 11:10:04,841 - intensity_normalization.exec.ravel_normalize - ERROR - list index out of range
Traceback (most recent call last):
File "/home/h/Applications/anaconda3/envs/intensity_normalization/lib/python3.6/site-packages/intensity_normalization-1.0.0-py3.6.egg/intensity_normalization/exec/ravel_normalize.py", line 86, in main
use_fcm=args.use_fcm)
File "/home/h/Applications/anaconda3/envs/intensity_normalization/lib/python3.6/site-packages/intensity_normalization-1.0.0-py3.6.egg/intensity_normalization/normalize/ravel.py", line 90, in ravel_normalize
use_fcm=use_fcm)
File "/home/h/Applications/anaconda3/envs/intensity_normalization/lib/python3.6/site-packages/intensity_normalization-1.0.0-py3.6.egg/intensity_normalization/normalize/ravel.py", line 204, in image_matrix
mrf=smoothness, use_fcm=use_fcm))
File "/home/h/Applications/anaconda3/envs/intensity_normalization/lib/python3.6/site-packages/intensity_normalization-1.0.0-py3.6.egg/intensity_normalization/utilities/csf.py", line 52, in csf_mask
res = img.kmeans_segmentation(3, kmask=brain_mask, mrf=mrf)
File "/home/h/Applications/anaconda3/envs/intensity_normalization/lib/python3.6/site-packages/ants/segmentation/kmeans.py", line 47, in kmeans_segmentation
kmimage = atropos(a = kmimage, m = mrf, c = '[5,0]', i = 'kmeans[%s]'%(str(k)), x = kmask)
File "/home/h/Applications/anaconda3/envs/intensity_normalization/lib/python3.6/site-packages/ants/segmentation/atropos.py", line 144, in atropos
probimgs = [iio2.image_read(probsout[0])]
IndexError: list index out of range

Best regards,

"TypeError: Axis must be specified when shapes of a and weights differ." in the LSQ method

🐛 Bug

My data are 3D brain MRI with large slice thickness (~5mm) and slice gap (~5mm) in .nii.gz format. First, I used the HD-BET algorithm to get brain masks, then normalised the images using the LSQ method. But the code ran into the following error.

Code:

t1_image_paths = glob(root + r'/*_{}.nii.gz'.format(t1_modality))
t1_image_paths = sorted(t1_image_paths)
t1_images = [nib.load(t1_image_path) for t1_image_path in t1_image_paths]
t1_images_array = [t1_image.get_fdata() for t1_image in t1_images]
mask_paths = glob(mask_root + r'/*_{}_stripped_mask.nii.gz'.format(mask_modality))
mask_paths = sorted(mask_paths)
masks = [nib.load(mask_path) for mask_path in mask_paths]
masks_array = [mask.get_fdata() for mask in masks]
lsq_norm = LeastSquaresNormalize(norm_value=100)
lsq_norm.fit(t1_images, masks, modality='t1')
Traceback (most recent call last):
  File "/home/chenhaolin/Documents/normalized/normalize.py", line 286, in <module>
    lsq_norm.fit(t1_images, masks, modality='t1')
  File "/home/chenhaolin/Documents/py_environments/chl3.9/lib/python3.9/site-packages/intensity_normalization/normalize/base.py", line 389, in fit
    self._fit(images, masks, modality=modality, **kwargs)
  File "/home/chenhaolin/Documents/py_environments/chl3.9/lib/python3.9/site-packages/intensity_normalization/normalize/lsq.py", line 97, in _fit
    csf_mean = np.average(image, weights=tissue_membership[..., 0])
  File "<__array_function__ internals>", line 180, in average
  File "/home/chenhaolin/Documents/py_environments/chl3.9/lib/python3.9/site-packages/numpy/lib/function_base.py", line 508, in average
    raise TypeError(
TypeError: Axis must be specified when shapes of a and weights differ.

Environment

  • intensity-normalization version (2.2.3):
  • numpy version (1.22.3):
  • OS (Linux 18.04.6 LTS (GNU/Linux 5.4.0-107-generic x86_64)):
  • How you installed intensity-normalization (via pip):
  • Python version: 3.9.7
  • Any other relevant information: None

Refactor

🚀 Feature

Current code is a jumbled mess—it is hard to debug and hard to use as an importable module. Refactor for clarity.

Motivation

Current importable functions don't make sense and are inconsistent; some take a directory of images instead of an individual image—this is bad. Some of the CLIs are a jumble of hacked-together code (e.g., the ones that have a single-image option).

Pitch

Refactor code to improve importable functionality and clean up CLIs with single-img option.

Limit dependency on antspy

The antspy package is awesome, but the installation process is not smooth and it seems to take a long time to load (>15 seconds).

antspy is required for RAVEL and potentially the preprocessing step, but can probably be removed from all other modules.

Clean up interfaces

The command line interfaces are not extremely clear. Need to simplify/refactor.

fcm-normalize command line problem

🐛 Bug

Any arguments beyond T1 in the command line option -c are not accepted by fcm-normalize

See below

Environment

The latest version on Linux.
I did both: installed the binaries and built it.

Additional context

T1 images are going through with fcm-normalize,
T2 and any other contrasts do not.
I use the following command line:
fcm-normalize -v -i testing/pat/pat_T2.nii.gz -o testing/full_fcm_test -m testing/pat/pat_BrainCerebellumMask.nii.gz -s -c t2 -tt csf
I get the following error: - intensity_normalization.exec.fcm_normalize - ERROR - stat: path should be string, bytes, os.PathLike or integer, not NoneType
It does not matter if I change the tissue type etc. All works fine with T1 (which is the default).
Any help would be appreciated. Furthermore, fcm-normalize produces a CSF mask without the ventricles when using csf as reference?

Any help is appreciated

Add option to specify output file type

🚀 Feature

Add an option in the CLI to specify the output file type (e.g., NIfTI).

Motivation

The current implementation of the CLI saves the output image as the same type as the input; however, a user who inputs DICOM might want NIfTI to be returned, especially because the DICOM won't be saved as single-slice frames (it'll be saved as a multi-slice frame).

make `brain_mask` argument optional

🚀 Feature

make brain_mask argument optional in some commands, e.g. fcm-normalize

Motivation

I've got some partially processed images which are skull-stripped, and I'd like to conduct intensity normalization on them now. Because they're already skull-stripped, I think it's reasonable to input solely this image to the program without specifying any brain mask. However, fcm-normalize does not let me do so, complaining that I must specify one mask (BTW, if the message were "One and only one mask should be specified", it would be more precise, because the operation here is xor).

There should be a simple workaround: I could create a mask covering the whole brain (with "1" everywhere) somewhere on my filesystem, but I think we can make it more elegant by making this argument optional.
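
The workaround mentioned above (a mask of all ones with the same geometry as the image) is only a few lines with nibabel; a minimal sketch with placeholder paths (for a skull-stripped image, a data > 0 mask may be preferable to all ones):

# Create an all-ones "mask" with the same geometry as a skull-stripped image,
# as a workaround until brain_mask becomes optional. For skull-stripped data,
# `data > 0` is arguably a better mask than all ones.
import nibabel as nib
import numpy as np

image = nib.load("skull_stripped_t1.nii.gz")     # placeholder path
ones_mask = nib.Nifti1Image(np.ones(image.shape, dtype=np.uint8), image.affine)
nib.save(ones_mask, "whole_image_mask.nii.gz")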

Pitch

Make brain_mask optional with the default value of the whole brain's mask.

Alternatives

Add a new argument for explicitly informing that the image was already skull-stripped.

Improve docs

📚 Documentation

While the refactor improved code quality and reduced the need for some documentation, additional documentation needs to be added, perhaps with examples (that always run), via docstrings to the user-facing functions.

See doctest and the pytest extension.
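
A minimal illustration of the docstring-example style suggested here (the function is hypothetical and only demonstrates the doctest format; pytest can collect such examples with --doctest-modules):

# Hypothetical example of a docstring with a doctest that always runs.
import statistics

def zscore(values):
    """Z-score a sequence using the sample standard deviation.

    >>> zscore([1.0, 2.0, 3.0])
    [-1.0, 0.0, 1.0]
    """
    mean = statistics.mean(values)
    std = statistics.stdev(values)
    return [(v - mean) / std for v in values]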

Error in imported nyul_normalize

🐛 Bug

When running the nyul normalization the program crashes

To Reproduce

  1. import intensity_normalization
intensity_normalization.normalize.nyul.nyul_normalize('mydirectory')

line 109, in get_landmarks
    landmarks = np.percentile(img, percs)
  File "<__array_function__ internals>", line 5, in percentile
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/numpy/lib/function_base.py", line 3705, in percentile
    return _quantile_unchecked(
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/numpy/lib/function_base.py", line 3824, in _quantile_unchecked
    r, k = _ureduce(a, func=_quantile_ureduce_func, q=q, axis=axis, out=out,
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/numpy/lib/function_base.py", line 3403, in _ureduce
    r = func(a, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/numpy/lib/function_base.py", line 3941, in _quantile_ureduce_func
    x1 = take(ap, indices_below, axis=axis) * weights_below
  File "<__array_function__ internals>", line 5, in take
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 194, in take
    return _wrapfunc(a, 'take', indices, axis=axis, out=out, mode=mode)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 61, in _wrapfunc
    return bound(*args, **kwds)
IndexError: cannot do a non-empty take from an empty axes.

Expected behavior

No error, normalized mri data

Environment

python 3.8

Additional context

I am inputting skull-stripped (no-skull) brains; they are being read correctly (I've checked that).

Path issue in command line execution

I am attempting to execute fcm-normalize in the command window on a Kubuntu 18.04 LTS laptop.

Here is an example of the issue -

fcm-normalize -i t1n2mni.nii.gz -w fast_mni/ft1__pve_2.nii.gz -c t1 -o t1n2mni_fcm.nii.gz --single-img
2019-02-13 11:54:33,707 - intensity_normalization.exec.fcm_normalize - ERROR - stat: path should be string, bytes, os.PathLike or integer, not NoneType
Traceback (most recent call last):
  File "/home/jamie/miniconda3/lib/python3.7/site-packages/intensity_normalization-1.2.1-py3.7.egg/intensity_normalization/exec/fcm_normalize.py", line 124, in main
    if not os.path.isfile(args.image) or not os.path.isfile(args.brain_mask):
  File "/home/jamie/miniconda3/lib/python3.7/genericpath.py", line 30, in isfile
    st = os.stat(path)
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType

I attempted two more path formats for all the inputs with the same error. The other path formats were (1) ./t1n2mni.nii.gz and (2) /full/path/t1n2mni.nii.gz

Add option to output tissue maps in LSQ normalization

🚀 Feature

Have LSQ normalization routine output and, optionally, use tissue maps for future runs.

Motivation

LSQ normalization relies on an FCM-defined tissue probability map to calculate the means of each of the three major tissue classes (CSF, GM, and WM). While this is reliable in T1-w images, it is not reliable in T2-w images.

Pitch

Have option to output tissue prob. maps and, optionally, take that same directory of tissue maps as input so that more reliable methods can be used to find the tissue class means.

Alternatives

Improve tissue probability map calculation to be more robust.
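
For reference, the core LSQ idea sketched in numpy: compute weighted CSF/GM/WM means from a tissue-membership image, then solve for the single scale factor that best matches a standard set of means in the least-squares sense (cf. the description in the README). The membership file and the standard means below are placeholders; this is a conceptual sketch, not the package's implementation.

# Conceptual LSQ normalization for one image given a 4D tissue membership
# array with CSF, GM, and WM in the last axis. standard_means would come from
# the fitted sample; here it is a placeholder.
import nibabel as nib
import numpy as np

data = nib.load("t1w_image.nii.gz").get_fdata()                    # placeholder
membership = nib.load("t1w_tissue_membership.nii.gz").get_fdata()  # shape (..., 3)

tissue_means = np.array(
    [np.average(data, weights=membership[..., i]) for i in range(3)]
)
standard_means = np.array([200.0, 600.0, 1000.0])  # placeholder CSF/GM/WM targets

# Least-squares scale factor minimizing ||alpha * tissue_means - standard_means||^2
alpha = tissue_means.dot(standard_means) / tissue_means.dot(tissue_means)
normalized = alpha * data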

Output and input distribution file in Sample-based normalization methods

🚀 Feature

Output density distribution from a set of images into a file, then read this file back in for different sets of images

Motivation

It is very common that we work with different sets of images. For example, in deep learning, we have the training, validation, and testing sets. We can use a sample-based normalization method to find the average density distribution and normalize the images to this distribution. But how can we normalize the validation and testing sets to this same distribution? An easy way is to output the distribution of the training set and save it to a file. Then we read this file back in to do the normalization on the validation and testing sets, without having to mix the images in the validation and testing sets with the images in the training set.

Pitch

We output the distribution of the training sets and save it into a file. Then we read this file back in to do the normalization on a different set.
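
Until such an option exists, roughly the same effect can be had outside the package by persisting whatever summarizes the training distribution (e.g., Nyul-style landmark percentiles) with numpy and reading it back for the validation/test sets. The landmark choice and helper below are illustrative assumptions, not the package's file format:

# Save a "standard scale" (averaged landmark percentiles) learned on the
# training set, then reuse it to normalize validation/test images without
# mixing the sets. Illustrative only; paths are placeholders.
import nibabel as nib
import numpy as np

PERCENTILES = np.array([1, 10, 25, 50, 75, 90, 99])

def landmarks(image_path, mask_path):
    data = nib.load(image_path).get_fdata()
    mask = nib.load(mask_path).get_fdata() > 0
    return np.percentile(data[mask], PERCENTILES)

# Training time: fit on the training set only and save to disk.
train_images = ["train/img1.nii.gz", "train/img2.nii.gz"]
train_masks = ["train/msk1.nii.gz", "train/msk2.nii.gz"]
standard_scale = np.mean(
    [landmarks(i, m) for i, m in zip(train_images, train_masks)], axis=0
)
np.save("standard_scale.npy", standard_scale)

# Later: load the saved distribution and normalize a validation/test image.
standard_scale = np.load("standard_scale.npy")
data = nib.load("val/img1.nii.gz").get_fdata()
mask = nib.load("val/msk1.nii.gz").get_fdata() > 0
normalized = np.interp(data, np.percentile(data[mask], PERCENTILES), standard_scale)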


Out-of-memory error in Nyul for large amounts of data

Hi,

Thank you for your nice package. I have a large set of training data (900 volumes) and I was trying to use Nyul normalization (sample-based) to standardize the intensity variations. However, I am getting a memory error. I was wondering if you have any insight on how to use this normalization method in a batch-wise manner, or any other solution to tackle the memory error problem?

Best,

antspy needs to be installed separately; use --user in pip3 install if you don't have sudo authority

antspy needs to be installed separately; use --user in pip3 install if you don't have sudo authority.

I have tried installing several times using --antspy and --1.4, but was unsuccessful. I finally used --user to solve this.

Another problem is that an error occurs:
intensity_normalization.exec.preprocess - ERROR - unsupported pixeltype short

I installed a new version of antspy and the problem was solved :)

I really struggled to install this today.

Supply control CSF mask to ravel-normalize

I'd like to normalize gad-enhanced T1ws. I can create the control CSF mask elsewhere. Is it feasible to add another input field to supply the control mask to ravel-normalize?

Support more image types in CLI (e.g., DICOM)

🚀 Feature

The current setup, even in the refactor, requires images to be NIfTI when using the CLIs. This is unnecessary. torchio, which supports reading (and, in a more limited way, saving) DICOM images, could be used to open and save the images.

Motivation

Having to convert images to NIfTI to normalize the images is unnecessary with the creation of packages like torchio.

Pitch

Support DICOM and other medical imaging types in the CLIs.

Alternatives

Force users to use the python functions with numpy arrays to normalize non-NIfTI images.
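
A possible shape of the torchio-based I/O described above, with the caveat that the exact ScalarImage usage is my assumption about torchio's API and the Z-score step is only a stand-in for whichever normalizer is used:

# Read an image (NIfTI file or DICOM folder) with torchio, normalize the
# tensor, and save the result as NIfTI. ScalarImage/save usage is an assumed
# sketch of torchio's API; the path is a placeholder.
import torchio as tio

img = tio.ScalarImage("input_dicom_dir_or_image.nii.gz")
data = img.data.float()                          # torch tensor, shape (C, W, H, D)
normalized = (data - data.mean()) / data.std()   # stand-in Z-score normalization

out = tio.ScalarImage(tensor=normalized, affine=img.affine)
out.save("normalized.nii.gz")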

Improve contributing information

📚 Documentation

The information about contributing to this package is lacking and out-of-date. Update it and make it easier to contribute to this package.

How to create the WM mask

Hi Jacob
I have read the tutorial you wrote, but I still cannot figure out how to create the WM mask. Suppose I have the T1-w images in my folder (t1_imgs/). I create an empty folder for the WM masks (brain_masks/) and an empty folder (out/) for the normalized output.
When I run:
fcm-normalize -i t1_imgs/ -m brain_masks/ -o out/ -v -c t1
I always run into an error:

Traceback (most recent call last):
.....
FileNotFoundError: [Errno 2] No such file or directory: 'brain_masks/'

Could you tell me how to deal with this? I have the T1-w files and want to use this tool to create the WM masks and normalize my T1-w images.

I follow the tutorial as follow:

We use the T1-w image to create a mask of the WM over which we calculate the mean and normalize as previously stated (see here for more detail). This mask can then be used to normalize the remaining contrasts in the set of images for a specific patient assuming that the remaining contrast images are registered to the T1-w image.
Since all the command line interfaces (CLIs) are installed along with the package, we can run fcm-normalize in the terminal to normalize a T1-w image and create a WM mask by running the following command (replacing paths as necessary):

fcm-normalize -i t1_w_image_path.nii.gz -m mask_path.nii.gz -o t1_norm_path.nii.gz -v -c t1 -s

This will output the normalized T1-w image to t1_norm_path.nii.gz and will create a directory called wm_masks in which the WM mask will be saved. You can then input the WM mask back in to the program to normalize an image of a different contrast, e.g. for T2,

`intensity-normalization` not working in Google Colab

Hi,
Thank you first for your effort in this module.

I tried to install intensity_normalization using a Google Colab notebook, but it seems the whole package was not installed.
When I try to import from intensity_normalization.typing, this error appears: "No module named 'intensity_normalization.typing'".

I tried manually to add the typing.py file to the package folder on my laptop, but unfortunately I could not find the directory
(/usr/local/lib/python3.7/dist-packages/intensity_normalization/__init__.py) on my laptop, although the package is already installed.

It is a weird issue! Anyway, can you help me with that?
I hope you can update the package with the typing.py file so I can install it again.
Thank you for your help.

[screenshot: colab-error]

Incorrect coregister for label mri

🐛 Bug

I am trying to coregister my dataset, using one of the BRATS19 images as the template. All images registered correctly, but when I tried to register the label with it, it came out wrong. I think the label after coregistration is not right, or is higher (shifted along the z axis) than the MRI image.

To Reproduce

Steps to reproduce the behavior:

  1. I cloned your project from github
  2. I updated the coregister function due to a different directory structure. See the code in the attachment coregister.txt.
  3. Input data: for the template, BraTS19_2013_1_1 https://drive.google.com/drive/folders/15wsRop7Ha_WCU579oWQ3mYkKpvvFh96o?usp=sharing; for my dataset, the mts3 image https://drive.google.com/file/d/1jk8C57mOeyrgmkJwgQ6kWKhvIin3ugqT/view?usp=sharing and label https://drive.google.com/file/d/16NhkMolL4ai0V2wOJvKfsCh6SMS3XNg7/view?usp=sharing

Expected behavior

Before coregistration: the label matches the tumor on the image [screenshot 3].

Images after coregistration in MITK Workbench: red is the label; in [screenshot 1] it is too small, and in [screenshot 2] there should be no red.

Environment

  • I don't see a version number for intensity-normalization, but it is the latest version from GitHub; the coregister function was created on Jun 19, 2018.
  • numpy version 1.18.2.
  • Google Colaboratory.
  • Installed intensity-normalization by git clone.
  • Copied and called the function coregister.
  • Python 3.6.9.
  • ANTs from pip install antspyx '0.2.5'

Add PyTorch/Tensorflow interface

🚀 Feature

It might be desirable to have some of the normalization schemes available as data augmentation schemes. Add this interface for the relevant schemes.

python functions

📚 Documentation

Hi, where can I find additional information on how to use the package using python functions? Thanks

Is that possible to record the transformation in the co-registration function ?

Hi Jacob
Here, I want to apply the co-registration operation to the corresponding label masks of my head-image data. I have used the co-registration operation to transform all my sample images to the MNI template; my goal is to apply the same transformation to all the corresponding label masks of my data. I don't know if it is possible to do that with this intensity-normalization tool.
Thanks
Lance
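
This should be possible with ANTsPy directly (which the package's coregister CLI wraps): ants.registration returns the forward transforms, and the same transforms can then be applied to the label masks with a label-preserving interpolator. A sketch under that assumption, with placeholder paths:

# Register an image to a template with ANTsPy, keep the computed transforms,
# and apply the same transforms to the corresponding label mask with a
# label-preserving interpolator. Paths are placeholders.
import ants

template = ants.image_read("mni_template.nii.gz")
moving = ants.image_read("subject_image.nii.gz")
labels = ants.image_read("subject_label_mask.nii.gz")

reg = ants.registration(fixed=template, moving=moving, type_of_transform="SyN")
moved_image = reg["warpedmovout"]                  # the registered image
moved_labels = ants.apply_transforms(
    fixed=template,
    moving=labels,
    transformlist=reg["fwdtransforms"],
    interpolator="nearestNeighbor",                # or "genericLabel"
)
ants.image_write(moved_labels, "subject_label_mask_in_template_space.nii.gz")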

Allow CSV input to CLIs

🚀 Feature

Allow CSV files as input to the normalization CLIs, where there is an image (or several modalities) column to be normalized, a mask column, an output column, etc. This is mostly relevant for sample-based methods (RAVEL, Nyul, and LSQ).

Motivation

The current workflow for sample-based methods requires images to be set up in a directory structure, which is often unnatural for how neuroimaging/medical imaging projects are organized. A CSV file specifying all the image paths would be an alternative that also makes sure the masks are in proper correspondence with the images.

The CSV file format could also be used to normalize multiple modalities sequentially, e.g., with FCM you could provide the CSV with the T1-w, T2-w, etc. image paths under corresponding columns and the method can automatically find the tissue mask on the T1-w image and use the tissue mask to normalize the other modalities.

Alternatives

Force users to symlink/copy their images into a directory to normalize them.
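
In the meantime, a thin wrapper script can already provide this workflow: read the CSV and loop over its rows, normalizing each image/mask pair. The column names and the Z-score step below are assumptions for illustration only:

# Drive normalization from a CSV with "image", "mask", and "output" columns
# (assumed column names). The Z-score step is a stand-in for whichever
# normalizer you actually use.
import nibabel as nib
import pandas as pd

df = pd.read_csv("normalization_manifest.csv")   # placeholder path
for row in df.itertuples():
    image = nib.load(row.image)
    mask = nib.load(row.mask).get_fdata() > 0
    data = image.get_fdata()
    masked = data[mask]
    normalized = (data - masked.mean()) / masked.std()
    nib.save(nib.Nifti1Image(normalized, image.affine, image.header), row.output)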

T2 normalisation

📚 Documentation -- options

I have a slight problem:

T1 images are going through with fcm-normalize,

T2 and any other contrasts do not.

I use the following command line:

fcm-normalize -v -i testing/pat/pat_T2.nii.gz -o testing/full_fcm_test -m testing/pat/pat_BrainCerebellumMask.nii.gz -s -c t2 -tt csf

I get the following error: - intensity_normalization.exec.fcm_normalize - ERROR - stat: path should be string, bytes, os.PathLike or integer, not NoneType

It does not matter if I change the tissue type etc. All works fine with T1 (which is the default).

I guess I am doing something wrong. Any help would be appreciated. By the way, fcm-normalize does the CSF segmentation, but it produces a CSF mask without the ventricles?

Any help on the command line is appreciated

Segmentation fault (core dumped)

🐛 Bug

Hi Jacob
I followed your 5min_tutorial (https://github.com/jcreinhold/intensity-normalization/blob/master/tutorials/5min_tutorial.md) and ran . ./create_env.sh to create a virtual environment for intensity-normalization. When I want to simply process some samples, I run into a Segmentation fault error. I don't know how this happens; I could run the tool a while ago and it was successful. I have tried on both of my servers:

in one server, it shows like this:
intensity_normalization.utilities.preprocess - INFO - Preprocessing image: Sample_T1 (1/1)
Segmentation fault (core dumped)

in another server, it shows like this:
intensity_normalization.utilities.preprocess - INFO - Preprocessing image: Sample_T1 (1/1)
Segmentation fault

The strange thing is that I tried with my server 2, where I previously installed the package and ran it successfully. This time it fails with the same error.

Please help. Thanks

To Reproduce

Steps to reproduce the behavior:

  1. install the package using . ./create_env.sh
  2. error: Segmentation fault (core dumped)

Environment

  • intensity-normalization version (1.4.3):
  • numpy version (1.16.5):
  • OS (Linux):
  • How you installed intensity-normalization (conda, . ./create_env.sh):
  • Python version: 3.7.7

Plot error

The following error was observed in the histogram plotting routine when the package is installed via:

pip install git+git://github.com/jcreinhold/intensity-normalization.git
2018-12-04 15:24:58.826 python[21573:1500660] -[NSApplication _setup:]: unrecognized selector sent to instance 0x7f7fa3e930e0
2018-12-04 15:24:58.828 python[21573:1500660] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSApplication _setup:]: unrecognized selector sent to instance 0x7f7fa3e930e0'
*** First throw call stack:
(
	0   CoreFoundation                      0x00007fff32802e65 __exceptionPreprocess + 256
	1   libobjc.A.dylib                     0x00007fff5e859720 objc_exception_throw + 48
	2   CoreFoundation                      0x00007fff3288022d -[NSObject(NSObject) __retain_OA] + 0
	3   CoreFoundation                      0x00007fff327a4820 ___forwarding___ + 1486
	4   CoreFoundation                      0x00007fff327a41c8 _CF_forwarding_prep_0 + 120
	5   libtk8.6.dylib                      0x0000001c20f6231d TkpInit + 413
	6   libtk8.6.dylib                      0x0000001c20eba17e Initialize + 2622
	7   _tkinter.cpython-37m-darwin.so      0x0000001c20ce2a0f _tkinter_create + 1183
	8   python                              0x00000001081a18d6 _PyMethodDef_RawFastCallKeywords + 230
	9   python                              0x00000001082e1b87 call_function + 247
	10  python                              0x00000001082df6ba _PyEval_EvalFrameDefault + 45274
	11  python                              0x00000001082d3332 _PyEval_EvalCodeWithName + 418
	12  python                              0x00000001081a05a7 _PyFunction_FastCallDict + 231
	13  python                              0x0000000108224091 slot_tp_init + 193
	14  python                              0x000000010822e051 type_call + 241
	15  python                              0x00000001081a12a3 _PyObject_FastCallKeywords + 179
	16  python                              0x00000001082e1c24 call_function + 404
	17  python                              0x00000001082df7ae _PyEval_EvalFrameDefault + 45518
	18  python                              0x00000001081a1098 function_code_fastcall + 120
	19  python                              0x00000001082e1b3e call_function + 174
	20  python                              0x00000001082df6ba _PyEval_EvalFrameDefault + 45274
	21  python                              0x00000001082d3332 _PyEval_EvalCodeWithName + 418
	22  python                              0x00000001081a05a7 _PyFunction_FastCallDict + 231
	23  python                              0x00000001081a44c2 method_call + 130
	24  python                              0x00000001081a1ef4 PyObject_Call + 100
	25  python                              0x00000001082df905 _PyEval_EvalFrameDefault + 45861
	26  python                              0x00000001082d3332 _PyEval_EvalCodeWithName + 418
	27  python                              0x00000001081a05a7 _PyFunction_FastCallDict + 231
	28  python                              0x00000001082df905 _PyEval_EvalFrameDefault + 45861
	29  python                              0x00000001082d3332 _PyEval_EvalCodeWithName + 418
	30  python                              0x00000001081a17a3 _PyFunction_FastCallKeywords + 195
	31  python                              0x00000001082e1b3e call_function + 174
	32  python                              0x00000001082df7ae _PyEval_EvalFrameDefault + 45518
	33  python                              0x00000001082d3332 _PyEval_EvalCodeWithName + 418
	34  python                              0x00000001081a17a3 _PyFunction_FastCallKeywords + 195
	35  python                              0x00000001082e1b3e call_function + 174
	36  python                              0x00000001082df6f5 _PyEval_EvalFrameDefault + 45333
	37  python                              0x00000001081a1098 function_code_fastcall + 120
	38  python                              0x00000001082e1b3e call_function + 174
	39  python                              0x00000001082df6f5 _PyEval_EvalFrameDefault + 45333
	40  python                              0x00000001082d3332 _PyEval_EvalCodeWithName + 418
	41  python                              0x0000000108337c50 PyRun_FileExFlags + 256
	42  python                              0x00000001083370c7 PyRun_SimpleFileExFlags + 391
	43  python                              0x00000001083637dc pymain_main + 9564
	44  python                              0x0000000108173a6d main + 125
	45  libdyld.dylib                       0x00007fff5f92808d start + 1
	46  ???                                 0x0000000000000008 0x0 + 8
)
libc++abi.dylib: terminating with uncaught exception of type NSException
Abort trap: 6

I will need to reproduce this error on another system to see if this is a common problem or just an issue with my specific install.

If a user encounters this error, avoid using any plotting routines or plotting options (i.e., no -p flag).
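
One common workaround for this class of macOS/tkinter backend crash (offered here as a suggestion, not a confirmed fix for this package) is to force a non-interactive matplotlib backend before any plotting code runs and save figures to disk instead of showing them; setting the MPLBACKEND=Agg environment variable achieves the same thing without code changes.

# Force the non-interactive Agg backend before pyplot is imported so that no
# GUI (tkinter/Cocoa) code is touched, then save plots instead of showing them.
import matplotlib

matplotlib.use("Agg")            # must run before `import matplotlib.pyplot`
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.hist([0.1, 0.2, 0.2, 0.3], bins=4)
fig.savefig("histogram.png")     # no interactive window is opened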

Add a conda recipe

🚀 Feature

Thanks for the great work! It would be really nice if this were available via conda-forge, as well.

Motivation

Integrating with other packages that need conda-specific libraries would be easier.

Pitch

I have a reasonable amount of experience dealing with conda recipes, so I can take care of it.

Alternatives

N.A.

Additional context

Already working on the recipe: conda-forge/staged-recipes#15441

Nyul normalize not working with custom output range

🐛 Bug

Hi Jacob.

My post is more a question about the functionality of your code and not a bug!

I am trying to use your implementation of Nyul's intensity normalization algorithm to match histograms between computed tomography (CT) images and synthetic CT images (even though I am aware that it is written for MRI images).
ITK has an implementation of Nyul's algorithm where it is possible to match the histogram between a source image and a template image: the CDF of the source image is matched against the CDF of the template image.
The matched image is returned having the same range as the template image (-1000, 3096 for a 12-bit CT image).

The ITK version works fine, though I want to use several template images to compute the standard histogram, which does not seem to be possible.

When I run my images through your code, the output image has a totally different range, which I guess depends on the defaults you set (default values between 0 and 100).
The images used to compute the standard histogram all have the same range (-1000, 3096) in my case.
Nevertheless, when I set the range to -1000, 3096, the max values of the output image are only around 300.

So my question is: how can I maintain the range when fitting my input images, and how can I normalize a new image so that it is returned in the same range as the fitted ones?

Thanks in advance,

kind regards,

M

Expected behavior

I expected the matched images to be in the range specified; for example, if fitted with -1000 to 3096, all normalized images should be in that range.

Environment

  • intensity-normalization version (e.g., 0.3.8): latest
  • numpy version (e.g., 1.0): 1.19.3
  • OS (e.g., Linux): Windows
  • How you installed intensity-normalization (conda, pip, source): source
  • Build command you used (if compiling from source): ---
  • Python version: 3.9.0
  • Any other relevant information: ---
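
As a stopgap, the normalized output can be linearly rescaled into the desired CT range after the fact; this only remaps the output range and does not change the matching itself. A minimal sketch with stand-in data:

# Linearly remap a normalized image into a target intensity range
# (e.g., the 12-bit CT range). Purely a post-hoc rescaling.
import numpy as np

def rescale(image, new_min, new_max):
    old_min, old_max = float(image.min()), float(image.max())
    scaled = (image - old_min) / (old_max - old_min)   # map to [0, 1]
    return scaled * (new_max - new_min) + new_min

normalized = np.random.default_rng(0).uniform(0.0, 100.0, size=(4, 4, 4))  # stand-in data
ct_range = rescale(normalized, -1000.0, 3096.0)
print(ct_range.min(), ct_range.max())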

Memory error using `ravel-normalize`

Hello!

In attempting to run ravel-normalize, using the command:

ravel-normalize -i ./T1 -m ./mask -o ./norm -c t1

(all files MNI-registered in Nifti format). The data is a patient set of 10. I have run into this error:

ravel-normalize -i ./T1 -m ./mask -o ./norm -c t1 
2019-04-16 11:19:53,762 - intensity_normalization.exec.ravel_normalize - ERROR - 
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/exec/ravel_normalize.py", line 86, in main
    use_fcm=not args.use_atropos)
  File "/usr/local/lib/python3.6/dist-packages/intensity_normalization-1.3.1-py3.6.egg/intensity_normalization/normalize/ravel.py", line 93, in ravel_normalize
    _, _, vh = np.linalg.svd(Vc)
  File "/home/jatlab-remote/.local/lib/python3.6/site-packages/numpy/linalg/linalg.py", line 1612, in svd
    u, s, vh = gufunc(a, signature=signature, extobj=extobj)
MemoryError

I executed ravel-normalize on two computers, both Ubuntu 18.04 LTS, one with 8GB and the other with 64GB of RAM. The T1s are ~7.0MiB/nifti and the masks are ~120KiB/nifti.

I searched Google for similar issues but without success. Any thoughts?
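
Not a fix in the package itself, but two common mitigations when np.linalg.svd runs out of memory on a tall control-voxel matrix: the default full_matrices=True allocates a voxels-by-voxels U, so requesting the thin SVD avoids that, and a truncated randomized SVD computes only the handful of leading components RAVEL-style methods typically need. A sketch with stand-in data:

# Two memory-friendlier alternatives to a full SVD of a tall
# (num_control_voxels x num_images) matrix. Vc below is stand-in data.
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.default_rng(0)
Vc = rng.standard_normal((50_000, 10))            # stand-in control-voxel matrix

# Thin SVD: U is (n_voxels, n_images) instead of (n_voxels, n_voxels).
U, s, Vh = np.linalg.svd(Vc, full_matrices=False)

# Truncated randomized SVD: compute only the leading components.
U_k, s_k, Vh_k = randomized_svd(Vc, n_components=1, random_state=0)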

Add support for BIDS-structured projects

🚀 Feature

Add support for BIDS-structured projects

Motivation

BIDS is a standard way to structure neuroimaging projects. The intensity-normalization package should support this workflow.

Nyul histogram matching

📚 Documentation

Hi and thanks for sharing your code!
Maybe the documentation section is not the right place for my post, but I'll give it a try :-)

There is an implementation of Nyul's normalization algorithm in ITK / SimpleITK that lets one perform histogram matching between a source image and a template image (HistogramMatchingFilter).

Would this be possible using your implementation as well?

Thanks in advance,

kind regards,

M

Coregister of mri and label (label register is not good)

Hello,
I am trying to coregister my dataset, using one of the BRATS19 images as the template. All images registered correctly, but when I tried to register the label with it, it came out wrong. I think the label after coregistration is not right, or is higher than the MRI image.
I also tried to use genericLabel for the label images. Maybe I did it wrong?
if 'mask' not in base:
    moved = ants.apply_transforms(template, input_img, mytx['fwdtransforms'], interpolator='nearestNeighbor')
if 'mask' in base:
    moved = ants.apply_transforms(template, input_img, mytx['fwdtransforms'], interpolator='genericLabel')

Images after coregistration in MITK Workbench: red is the label; in [screenshot 1] it is too small, and in [screenshot 2] there should be no red.

What could be the problem? Or how can I register the labels?
