
deepwmh's Introduction

Paper

Chenghao Liu, Zhizheng Zhuo, Liying Qu, Ying Jin, Tiantian Hua, Jun Xu, Guirong Tan, Yuna Li, Yunyun Duan, Tingting Wang, Zaiqiang Zhang, Yanling Zhang, Rui Chen, Pinnan Yu, Peixin Zhang, Yulu Shi, Jianguo Zhang, Decai Tian, Runzhi Li, Xinghu Zhang, Fudong Shi, Yanli Wang, Jiwei Jiang, Aaron Carass, Yaou Liu, Chuyang Ye. "DeepWMH: a deep learning tool for accurate white matter hyperintensity segmentation without requiring manual annotations for training". Science Bulletin, Jan 2024.

Supplementary materials: https://drive.google.com/file/d/13Kyk0v19kRnJ40JC22-p_YK2urnzZcvh/view?usp=sharing

Table of contents

About DeepWMH

  • If you have any questions or find any bugs in the code, please open an issue or create a pull request!

DeepWMH is an annotation-free white matter hyperintensities (WMH) segmentation tool based on deep learning, designed for accurately segmenting WMH lesions using T2FLAIR images without labeled data. An overview of the whole processing pipeline is shown below.

DeepWMH lesion segmentation pipeline overview.

The figure below shows a more detailed version of the processing pipeline. Please refer to the supplementary materials for more information.

Method details.

Python Requirement

Before installation, you may need to update your Python version if it is lower than 3.7.1. This tool requires Python 3; Python 2 is deprecated and no longer supported.

Quick start: how to use our pretrained model (only for Linux-based systems)

The fastest way to apply our tool in your research is to use our pre-trained model directly. To run inference with the pre-trained model, please follow the steps below:

  1. Update your Python environment. Then, create a new virtual environment using the following commands:

    pip install -U pip                         # update pip
    pip install -U setuptools                  # update setuptools
    pip install wheel                          # install wheel
    python -m pip install --user virtualenv    # install virtualenv
    python -m venv <path_to_your_virtual_env>  # create a virtual environment under <path_to_your_virtual_env>

    For more detailed information about how to create a Python 3 virtual environment, please refer to this link.

  2. Activate the virtual environment you just created:

    source <path_to_your_virtual_env>/bin/activate

    NOTE: the virtual environment should ALWAYS be activated during the following steps.
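As an extra sanity check (not part of the original instructions), you can confirm from inside Python that the active interpreter belongs to a virtual environment:

```python
import sys

def in_virtualenv() -> bool:
    """True if the running interpreter belongs to a venv/virtualenv."""
    # venv sets sys.prefix to the environment directory, while
    # sys.base_prefix still points at the base Python installation.
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```

If this prints False after you ran `source <path_to_your_virtual_env>/bin/activate`, the environment is not active in your current shell.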

  3. Install PyTorch in your virtual environment. See https://pytorch.org/ for more info.

  4. Download nnU-Net source code from "https://github.com/lchdl/nnUNet_for_DeepWMH".

    PLEASE download the forked version above; DO NOT download from https://github.com/MIC-DKFZ/nnUNet, as the forked version contains some necessary changes.

    Unzip the code, "cd" into the directory where setup.py is located, then execute the following command:

    pip install -e .

    This will install customized nnU-Net and all its dependencies into your environment.

    Please install PyTorch BEFORE nnU-Net as suggested in https://github.com/MIC-DKFZ/nnUNet, and make sure your CUDA/cuDNN version is compatible with your GPU driver version.

  5. Download DeepWMH from "https://github.com/lchdl/DeepWMH". Unzip the code, "cd" into the directory where setup.py is located, then execute the following command:

    pip install -e .
  6. Download and unzip ROBEX (Robust Brain Extraction) from "https://www.nitrc.org/projects/robex", then add:

    export ROBEX_DIR="/path/to/your/unzipped/ROBEX/dir/"

    to your ~/.bashrc (make sure "runROBEX.sh" is in this directory), then run

    source ~/.bashrc

    to apply the change.
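The steps above can be double-checked from Python. This small sketch (check_robex is a hypothetical helper, not part of DeepWMH) verifies that ROBEX_DIR is set and that runROBEX.sh is present and executable:

```python
import os

def check_robex() -> bool:
    """Check that ROBEX_DIR is set and contains an executable runROBEX.sh."""
    robex_dir = os.environ.get("ROBEX_DIR")
    if robex_dir is None:
        print("ROBEX_DIR is not set; did you edit ~/.bashrc and re-source it?")
        return False
    script = os.path.join(robex_dir, "runROBEX.sh")
    if not os.path.isfile(script):
        print("runROBEX.sh not found in %s" % robex_dir)
        return False
    if not os.access(script, os.X_OK):
        print("runROBEX.sh is not executable; try: chmod +x %s" % script)
        return False
    return True
```

A non-executable runROBEX.sh (exit code 126 when invoked) is a common pitfall after unzipping.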

  7. (Optional) compile & install ANTs toolkit from "https://github.com/ANTsX/ANTs", or download the pre-compiled binaries here. This is mainly for intensity correction. You need to add ANTs binaries to "PATH" in your ~/.bashrc before using them.

    NOTE: You can skip this step if you don't want to install ANTs. However, segmentation performance can be seriously affected if the image is corrupted by strong intensity bias due to magnetic field inhomogeneity. For optimal performance, we strongly recommend installing ANTs.

    Verify your install: to check whether ANTs is installed correctly on your system, after the installation run

    antsRegistration --version
    

    and

    N4BiasFieldCorrection --version
    

    in your console. It should produce output such as:

    ANTs Version: 3.0.0.0.dev13-ga16cc
    Compiled: Jan 22 2019 00:23:29
    

    Then test whether antsApplyTransforms works:

    antsApplyTransforms
    

    If no error is shown, ANTs is installed successfully.
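If you prefer to script this check, here is a hedged sketch (ants_version is a hypothetical helper, not part of ANTs or DeepWMH) that queries the version banner without failing on machines where ANTs is absent:

```python
import shutil
import subprocess
from typing import Optional

def ants_version() -> Optional[str]:
    """Return the antsRegistration version banner, or None if ANTs
    is not on PATH. Safe to call on machines without ANTs installed."""
    exe = shutil.which("antsRegistration")
    if exe is None:
        return None
    result = subprocess.run([exe, "--version"],
                            capture_output=True, text=True)
    return result.stdout.strip() or None
```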

  8. (Optional) verify your install:

    1. Activate your virtual environment.
    2. Enter Python by running:

    python

    3. Then, enter and run the following script line by line:

    from deepwmh.main.integrity_check import check_system_integrity
    check_system_integrity(verbose=True, ignore_ANTs=True, ignore_FreeSurfer=True, ignore_FSL=True)

    4. If something is missing, you will see error messages, along with tips on how to fix them. Follow the tips to fix those problems and repeat Step 8 until no error occurs.
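check_system_integrity is DeepWMH's own routine. As an illustration only, a minimal analog that merely checks whether some external tools are discoverable on PATH could look like this (the tool lists below are assumptions for illustration, not DeepWMH's exact requirements):

```python
import shutil

# External tools the pipeline may call; which ones are actually required
# depends on the flags you pass (e.g. --skip-bfc skips the ANTs-based step).
TOOLS = {
    "ANTs": ["antsRegistration", "N4BiasFieldCorrection", "antsApplyTransforms"],
    "FSL": ["bet"],
}

def missing_tools(tools=TOOLS):
    """Return {suite: [missing executables]} for everything not on PATH."""
    report = {}
    for suite, exes in tools.items():
        absent = [e for e in exes if shutil.which(e) is None]
        if absent:
            report[suite] = absent
    return report
```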
  9. After installation, run

    DeepWMH_predict -h

    If no error occurs, the installation is complete! Now you are ready to use our pretrained model for segmentation.

  10. Download our pre-trained model (~200 MB) from

    1. "https://drive.google.com/drive/folders/1CDJkY5F95sW638UGjohWDqXvPtBTI1w3?usp=share_link" or
    2. "https://pan.baidu.com/s/1j7aESa4NEcu95gsHLR9BqQ?pwd=yr3o"

    Then use

    DeepWMH_install -m <tar_gz_file> -o <model_install_dir>

    to install the model (<tar_gz_file>) to a specific location (<model_install_dir>).

  11. Use the pre-trained model to segment WMH lesions from FLAIR images with the following command:

    DeepWMH_predict -i <input_images> -n <subject_names> -m <model_install_dir> -o <output_folder> -g <gpu_id>

    If you don't have the ANTs toolkit installed on your machine (see Step 7 for how to install ANTs), you need to add "--skip-bfc" to the end of the command, such as:

    DeepWMH_predict -i <...> -n <...> -m <...> -o <...> -g <...> --skip-bfc

    Note that segmentation performance can be seriously affected if the image is corrupted by strong intensity bias due to magnetic field inhomogeneity.

    NOTE: you can specify multiple input images. The following command gives a complete example of this:

    DeepWMH_predict \
        -i /path/to/FLAIR_1.nii.gz /path/to/FLAIR_2.nii.gz /path/to/FLAIR_3.nii.gz \
        -n subject_1 subject_2 subject_3 \
        -m /path/to/your/model/dir/ \
        -o /path/to/your/output/dir/ \
        -g 0

    Here "-i" lists the three FLAIR images, "-n" gives the three corresponding subject names (in the same order), "-m" is the model installation directory, "-o" is the output directory, and "-g" is the GPU index. (A shell comment cannot follow a trailing backslash, so the options are explained here instead of inline.)
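When scripting batch runs, each image passed to -i must line up positionally with its name in -n. A minimal sketch of a hypothetical helper (build_predict_args is not part of DeepWMH) that assembles the argument list and enforces the pairing:

```python
def build_predict_args(images, names, model_dir, out_dir, gpu=0, skip_bfc=False):
    """Build the DeepWMH_predict argument list, keeping each input
    image paired with its subject name by position."""
    if len(images) != len(names):
        raise ValueError("need exactly one subject name per input image")
    args = ["DeepWMH_predict",
            "-i", *images,
            "-n", *names,
            "-m", model_dir,
            "-o", out_dir,
            "-g", str(gpu)]
    if skip_bfc:
        args.append("--skip-bfc")  # only when ANTs is unavailable
    return args
```

The resulting list can be passed to subprocess.run directly, avoiding shell quoting issues.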

Advanced: how to train a model using your own data (only for Linux-based systems)

  1. Follow Steps 1--7 in the Quick start section. Note that if you want to train a custom model, you must download and compile the ANTs toolkit; Step 7 of the Quick start section is no longer optional.

  2. Download and install FreeSurfer. Note that you may also need to install the "csh" and "tcsh" shells by running

    sudo apt-get install csh tcsh

    after the installation.

    A license key (link) is also needed before using FreeSurfer.

  3. Download and install FSL (FMRIB Software Library).

    How to install: FSL is installed using the fsl_installer.py script downloaded from here. You need to register your personal information on the FSL site before downloading. After downloading, run the installer script and wait for the installation to finish.

    Verify your install: when the installation has finished, type

    bet -h
    

    in your console. If no error occurs, everything is OK! :)

  4. (Optional) verify your install:

    1. Activate your virtual environment.
    2. Enter Python by running:

    python

    3. Then, enter and run the following script line by line:

    from deepwmh.main.integrity_check import check_system_integrity
    check_system_integrity()

    4. If something is missing, you will see error messages, along with tips on how to fix them. Follow the tips to fix those problems and repeat Step 4 until no error occurs (as shown below).

      integrity check

  5. Here we provide two examples of using a public dataset (OASIS-3) to train a model from scratch; see

    experiments/010_OASIS3/run_Siemens_Biograph_mMR.py
    experiments/010_OASIS3/run_Siemens_TrioTim.py
    

    You can also run these two examples, provided you have organized the dataset structure correctly:

    cd .../experiments/010_OASIS3/
    python run_Siemens_Biograph_mMR.py
    cd .../experiments/010_OASIS3/
    python run_Siemens_TrioTim.py

deepwmh's People

Contributors

lchdl

deepwmh's Issues

packages conflict

in the 5th step ("Download DeepWMH"), after cd-ing into the DeepWMH folder and executing pip install -e ., I got an error message, shown in the following picture:
(screenshot)
When I tried executing pip install scipy==1.10, I got the following error:
(screenshot)
So, would you mind helping me solve this package conflict problem?

integrity check readme minor correction

Greetings, the file used by the integrity check is not ROBEX but runROBEX.bat (like runROBEX.sh), so it always throws an error reporting that it cannot be located. Changing the Python code from ROBEX to the actual executable name solves the problem. Please correct me if I'm wrong; this is speculation based on the newest ROBEX distribution.

Recommendation on inference (tuning !?)

Greetings!

I just upgraded my pc successfully and now I am able to run consistently experiments with your tool and that's really awesome!

I would like to make some questions:

  1. What's the difference between raw, 3mm, and fov? Which one is the actual output image among the folders located at /DeepWMH/output/002_Segmentations/?
  2. Is there any hyperparameter that I could adjust during inference? I think I read in the paper that, for example, a threshold value is adjusted automatically; is there anything I could configure and test?
  3. Can you further train one of your pretrained models on a new dataset? And if so, would you recommend it?
  4. From what I understand from the paper, I can input a FLAIR image with any number of slices?
  5. What tool do you use to open the GIF generated in the preview folder (output)? It will not open for me, as it supposedly contains errors.
  6. Is the segmentation mask produced binary (black and white)? Or is there a configuration that allows different colours (as presented: dark yellow, light yellow, purple, red, dark red, etc.) representing a low/high chance of anomaly?

I hope I am not bothering you too much! Please take your time and answer whenever and on whatever that it is possible!

Theodore Tranos

Having an issue with running inference.

I would really appreciate it if you could help me find the problem that causes this error.

I will include only the last few meaningful lines of the error; my prompt was:

DeepWMH_predict -i example_flair.nii -n 0 -m models -o output

Error:

using model stored in  /Users/theodoretranos/Desktop/DeepWMH/models/nnUNet/3d_fullres/Task002_FinalModel/nnUNetTrainerV2__nnUNetPlansv2.1
This model expects 1 input modalities for each image
Found 1 unique case ids, here are some examples: ['0']
If they don't look right, make sure to double check your filenames. They must end with _0000.nii.gz etc
selected cases to predict: ['0']
number of cases: 1
number of cases that still need to be predicted: 1
emptying cuda cache
loading parameters for folds, ['all']
set max_num_epochs=300
set num_batches_per_epoch=250
no validation set? False
using the following model files:  ['/Users/theodoretranos/Desktop/DeepWMH/models/nnUNet/3d_fullres/Task002_FinalModel/nnUNetTrainerV2__nnUNetPlansv2.1/all/model_best.model']
starting preprocessing generator
starting prediction...
Traceback (most recent call last):
  File "/Users/theodoretranos/anaconda3/envs/brain/bin/nnUNet_predict", line 33, in <module>
    sys.exit(load_entry_point('nnunet', 'console_scripts', 'nnUNet_predict')())
  File "/Users/theodoretranos/Desktop/DeepWMH/nnUNet_for_DeepWMH-develop/nnunet/inference/predict_simple.py", line 214, in main
    predict_from_folder(model_folder_name, input_folder, output_folder, folds, save_npz, num_threads_preprocessing,
  File "/Users/theodoretranos/Desktop/DeepWMH/nnUNet_for_DeepWMH-develop/nnunet/inference/predict.py", line 412, in predict_from_folder
    return predict_cases(model, list_of_lists[part_id::num_parts], output_files[part_id::num_parts], folds,
  File "/Users/theodoretranos/Desktop/DeepWMH/nnUNet_for_DeepWMH-develop/nnunet/inference/predict.py", line 211, in predict_cases
    for preprocessed in preprocessing:
  File "/Users/theodoretranos/Desktop/DeepWMH/nnUNet_for_DeepWMH-develop/nnunet/inference/predict.py", line 112, in preprocess_multithreaded
    pr.start()
  File "/Users/theodoretranos/anaconda3/envs/brain/lib/python3.10/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/Users/theodoretranos/anaconda3/envs/brain/lib/python3.10/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/Users/theodoretranos/anaconda3/envs/brain/lib/python3.10/multiprocessing/context.py", line 288, in _Popen
    return Popen(process_obj)
  File "/Users/theodoretranos/anaconda3/envs/brain/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/Users/theodoretranos/anaconda3/envs/brain/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/Users/theodoretranos/anaconda3/envs/brain/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/Users/theodoretranos/anaconda3/envs/brain/lib/python3.10/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function <lambda> at 0x14f590430>: attribute lookup <lambda> on nnunet.utilities.nd_softmax failed

>>> ** Unexpected return value "1" from command:

>>> nnUNet_predict -i /Users/theodoretranos/Desktop/DeepWMH/output/001_Preprocessed_Images -o /Users/theodoretranos/Desktop/DeepWMH/output/002_Segmentations/001_raw -tr nnUNetTrainerV2 -m 3d_fullres -p nnUNetPlansv2.1 -t Task002_FinalModel -f all -chk model_best --disable_post_processing --selected_cases 0 

>>> ** Process will exit now.

Error occurred (code 1) when executing command:
"nnUNet_predict -i /Users/theodoretranos/Desktop/DeepWMH/output/001_Preprocessed_Images -o /Users/theodoretranos/Desktop/DeepWMH/output/002_Segmentations/001_raw -tr nnUNetTrainerV2 -m 3d_fullres -p nnUNetPlansv2.1 -t Task002_FinalModel -f all -chk model_best --disable_post_processing --selected_cases 0 "

Theodore Tranos

_parallel_ROBEX_masking crash

Hi,

Having a bit of an issue with the script that I'm not sure how to fix. I'll paste the end of the terminal output below:

This worker has ended successfully, no errors to report
predicting /home/lab2/WMH/testdata/output/002_Segmentations/001_raw/P3V1.nii.gz
debug: mirroring True mirror_axes (0, 1, 2)
step_size: 0.5
do mirror: True
data shape: (1, 255, 255, 223)
patch size: [112 160 128]
steps (x, y, and z): [[0, 48, 95, 143], [0, 48, 95], [0, 48, 95]]
number of tiles: 36
computing Gaussian
prediction done
inference done. Now waiting for the segmentation export to finish...
force_separate_z: None interpolation order: 1
separate z: False lowres axis None
3mm postproc : |==================| 100% << 1s|0s
masking : |                  | 0% << 0s -

==========
One of the worker process crashed due to unhandled exception.
Worker function name is: "_parallel_ROBEX_masking".

** Here is the traceback message:

Traceback (most recent call last):
  File "/home/lab2/WMH/DeepWMH/deepwmh/utilities/parallelization.py", line 30, in __call__
    result = self.__callable_object(*args, **kwargs)
  File "/home/lab2/WMH/DeepWMH/deepwmh/main/predict.py", line 43, in _parallel_ROBEX_masking
    run_shell('%s %s %s %s' % (ROBEX_sh, in_flair, brain_out, brain_mask), print_output=False)
  File "/home/lab2/WMH/DeepWMH/deepwmh/utilities/external_call.py", line 72, in run_shell
    exit('Error occurred (code %s) when executing command:\n"%s"' % (s, command))
  File "/usr/lib/python3.10/_sitebuiltins.py", line 26, in __call__
    raise SystemExit(code)
SystemExit: Error occurred (code 126) when executing command:
"/home/lab2/WMH/ROBEX/runROBEX.sh /home/lab2/WMH/testdata/output/001_Preprocessed_Images/P3V1_0000.nii.gz /home/lab2/WMH/testdata/output/002_Segmentations/003_postproc_fov/P3V1_brain.nii.gz /home/lab2/WMH/testdata/output/002_Segmentations/003_postproc_fov/P3V1_mask.nii.gz"

** Main process will exit now.

This was the input I used, testing it on one dataset:

DeepWMH_predict -i ~/WMH/testdata/P3V1.nii.gz -n P3V1 -m ~/WMH/model -o ~/WMH/testdata/output -g 0
