MIC-DKFZ / nnDetection

nnDetection is a self-configuring framework for 3D (volumetric) medical object detection which can be applied to new data sets without manual intervention. It includes guides for 12 data sets that were used to develop and evaluate the performance of the proposed method.

License: Apache License 2.0


nnDetection's Introduction


What is nnDetection?

Simultaneous localisation and categorization of objects in medical images, also referred to as medical object detection, is of high clinical relevance because diagnostic decisions depend on ratings of objects rather than, e.g., pixels. For this task, the cumbersome and iterative process of method configuration constitutes a major research bottleneck. Recently, nnU-Net has tackled this challenge for the task of image segmentation with great success. Following nnU-Net’s agenda, in this work we systematize and automate the configuration process for medical object detection. The resulting self-configuring method, nnDetection, adapts itself without any manual intervention to arbitrary medical detection problems while achieving results on par with or superior to the state-of-the-art. We demonstrate the effectiveness of nnDetection on two public benchmarks, ADAM and LUNA16, and propose 10 further public data sets for a comprehensive evaluation of medical object detection methods.

If you use nnDetection please cite our paper:

Baumgartner M., Jäger P.F., Isensee F., Maier-Hein K.H. (2021) nnDetection: A Self-configuring Method for Medical Object Detection. In: de Bruijne M. et al. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. MICCAI 2021. Lecture Notes in Computer Science, vol 12905. Springer, Cham. https://doi.org/10.1007/978-3-030-87240-3_51

🎉 nnDetection was early accepted to the International Conference on Medical Image Computing & Computer Assisted Intervention 2021 (MICCAI21) 🎉

Installation

Docker

The easiest way to get started with nnDetection is to build a Docker container with the provided Dockerfile.

Please install docker and nvidia-docker2 before continuing.

All projects which are based on nnDetection assume that the base image was built with the following tagging scheme nndetection:[version]. To build a container (nnDetection version 0.1) run the following command from the base directory:

docker build -t nndetection:0.1 --build-arg env_det_num_threads=6 --build-arg env_det_verbose=1 .

(--build-arg env_det_num_threads=6 and --build-arg env_det_verbose=1 are optional and can be used to overwrite the provided default parameters.)

The docker container expects data and models in its own /opt/data and /opt/models directories respectively. The directories need to be mounted via docker -v. For simplicity and speed, the ENV variables det_data and det_models can be set in the host system to point to the desired directories. To run:

docker run --gpus all -v ${det_data}:/opt/data -v ${det_models}:/opt/models -it --shm-size=24gb nndetection:0.1 /bin/bash

Warning: When running a training inside the container it is necessary to increase the shared memory (via --shm-size).

Source

Please note that nnDetection requires Python 3.8+. Please use a PyTorch 1.X version for now, not 2.0.

  1. Install CUDA (>10.1) and cudnn (make sure to select compatible versions!)
  2. [Optional] Depending on your GPU you might need to set TORCH_CUDA_ARCH_LIST, check compute capabilities here.
  3. Install torch (make sure to match the PyTorch and CUDA versions; PyTorch >=1.10 is required) and torchvision (make sure to match the versions!).
  4. Clone nnDetection, cd [path_to_repo] and pip install -e .
  5. Set environment variables (a small sanity-check snippet is sketched after this list; more info can be found below):
    • det_data: [required] Path to the source directory where all the data will be located
    • det_models: [required] Path to directory where all models will be saved
    • OMP_NUM_THREADS=1 : [required] Needs to be set! Otherwise bad things will happen... Refer to the batchgenerators documentation.
    • det_num_threads: [recommended] Number of processes to use for augmentation (at least 6, default 12)
    • det_verbose: [optional] Can be used to deactivate progress bars (activated by default)
    • MLFLOW_TRACKING_URI: [optional] Specify the logging directory of mlflow. Refer to the mlflow documentation for more information.
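
A quick way to verify this setup on the host is a small sanity check (a minimal sketch, not part of nnDetection; it only reads the variables listed above):

# check_env.py - hedged sketch: print the environment variables nnDetection expects
import os

# OMP_NUM_THREADS must be "1"; det_data and det_models must point to existing directories
for var in ("det_data", "det_models", "OMP_NUM_THREADS", "det_num_threads"):
    value = os.environ.get(var)
    print(f"{var} = {value if value is not None else '<not set>'}")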

Note: nnDetection was developed on Linux => Windows is not supported.

Test Installation
Run the following command in the terminal (!not! in the pytorch root folder) to verify that the compilation of the C++/CUDA code was successful:
python -c "import torch; import nndet._C; import nndet"

To test the whole installation please run the Toy Data set example.

Maximising Training Speed
To get the best possible performance we recommend using CUDA 11.0+ with cuDNN 8.1.X+ and a (!)locally compiled version(!) of PyTorch 1.7+.

nnDetection

nnDetection Module Overview

nnDetection uses multiple Registries to keep track of different modules and easily switch between them via the config files.

Config Files nnDetection uses Hydra to dynamically configure and compose configurations. The configuration files are located in nndet.conf and can be overwritten to customize the behavior of the pipeline.

AUGMENTATION_REGISTRY The augmentation registry can be imported from nndet.io.augmentation and contains different augmentation configurations. Examples can be found in nndet.io.augmentation.bg_aug.

DATALOADER_REGISTRY The dataloader registry contains different dataloader classes to customize the IO of nnDetection. It can be imported from nndet.io.datamodule and examples can be found in nndet.io.datamodule.bg_loader.

PLANNER_REGISTRY New plans can be registered via the planner registry which contains classes to define and perform different architecture and preprocessing schemes. It can be imported from nndet.planning.experiment and examples can be found in nndet.planning.experiment.v001.

MODULE_REGISTRY The module registry contains the core modules of nnDetection, which inherit from the PyTorch Lightning Module. These are the main modules used for training and inference and contain all the necessary steps to build the final models. It can be imported from nndet.ptmodule and examples can be found in nndet.ptmodule.retinaunet.
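
To illustrate how a registry is meant to be used, the sketch below registers a custom module. This is a hedged example: the decorator-style register call and the RetinaUNetV001 class path are assumptions based on the example modules named above, so please check nndet.ptmodule.retinaunet for the exact API before relying on it.

# hedged sketch: register a custom training module with the module registry
from nndet.ptmodule import MODULE_REGISTRY
from nndet.ptmodule.retinaunet import RetinaUNetV001  # assumed example base class

@MODULE_REGISTRY.register
class MyRetinaUNet(RetinaUNetV001):
    """Custom variant which can then be selected via the Hydra config files."""
    pass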

nnDetection Functional Details

Experiments & Data

The data sets used for our experiments are not hosted or maintained by us; please give credit to the authors of the data sets. For some of the data sets we converted, labels were corrected, and these can be downloaded (links can be found in the guides). The Experiments section contains multiple guides which explain the preparation of the data sets via the provided scripts.

Toy Data set

Running nndet_example will automatically generate an example data set with 3D squares and squares with holes which can be used to test the installation or to experiment with prototype code (it is still necessary to run the other nndet commands to process/train/predict the data set).

# create data to test installation/environment (10 train 10 test)
nndet_example

# create full data set for prototyping (1000 train 1000 test)
nndet_example --full [--num_processes]

The full problem is very easy and the final results should be near perfect. After running the generation script follow the Planning, Training and Inference instructions below to construct the whole nnDetection pipeline.

Guides

Work in progress

Experiments

Besides the self-configuring method, nnDetection acts as a standard interface for many data sets. We provide guides to prepare all data sets from our evaluation in the correct format and make it easy to reproduce our results. Furthermore, we provide pretrained models which can be used without investing large amounts of compute to rerun our experiments (see Section Pretrained Models).

Adding New Data sets

nnDetection relies on a standardized input format which is very similar to nnU-Net and allows easy integration of new data sets. More details about the format can be found below.

Folders

All data sets should reside inside Task[Number]_[Name] folders inside the specified detection data folder (the path to this folder can be set via the det_data environment variable). To avoid conflicts with our provided pretrained models we recommend using task numbers starting from 100. An overview is provided below ([Name] denotes folders, - denotes files, indentation refers to substructures).

Warning[!]: Please avoid any . inside file names/folder names/paths since it can influence how paths/names are split.

${det_data}
    [Task000_Example]
        - dataset.yaml # dataset.json works too
        [raw_splitted]
            [imagesTr]
                - case0000_0000.nii.gz # case0000 modality 0
                - case0000_0001.nii.gz # case0000 modality 1
                - case0001_0000.nii.gz # case0001 modality 0
                - case0001_0001.nii.gz # case0001 modality 1
            [labelsTr]
                - case0000.nii.gz # instance segmentation case0000
                - case0000.json # properties of case0000
                - case0001.nii.gz # instance segmentation case0001
                - case0001.json # properties of case0001
            [imagesTs] # optional, same structure as imagesTr
             ...
            [labelsTs] # optional, same structure as labelsTr
             ...
    [Task001_Example1]
        ...

Data set Info

dataset.yaml or dataset.json provides general information about the data set. Note [Important]: Classes and modalities start with index 0!

task: Task000D3_Example

name: "Example" # [Optional]
dim: 3 # number of spatial dimensions of the data

# Note: use the integer value of the target class as defined in labels below!
target_class: 1 # [Optional] define class of interest for patient level evaluations
test_labels: True # manually split test set

labels: # classes of data set; need to start at 0
    "0": "Square"
    "1": "SquareHole"

modalities: # modalities of data set; need to start at 0
    "0": "CT"

Image Format

nnDetection uses the same image format as nnU-Net. Each case consists of at least one 3D NIfTI file with a single modality, saved in the images folders. If multiple modalities are available, each modality uses a separate file and the sequence number at the end of the name indicates the modality (these need to correspond to the numbers specified in the data set file and be consistent across the whole data set).

An example with two modalities could look like this:

- case001_0000.nii.gz # Case ID: case001; Modality: 0
- case001_0001.nii.gz # Case ID: case001; Modality: 1

- case002_0000.nii.gz # Case ID: case002; Modality: 0
- case002_0001.nii.gz # Case ID: case002; Modality: 1

If multiple modalities are available, please check beforehand if they need to be registered and perform registration before nnDetection preprocessing. nnDetection does (!)not(!) include automatic registration of multiple modalities.
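
Because inconsistent modality suffixes are a common source of preprocessing errors, a small consistency check over raw_splitted/imagesTr can help. This is a hedged sketch (not part of nnDetection); it only relies on the caseID_XXXX.nii.gz naming scheme described above, and the path is a placeholder:

# hedged sketch: verify all cases share the same set of modality suffixes
from collections import defaultdict
from pathlib import Path

images_dir = Path("/path/to/Task100_Example/raw_splitted/imagesTr")  # placeholder, adjust
modalities = defaultdict(set)
for f in images_dir.glob("*.nii.gz"):
    stem = f.name[: -len(".nii.gz")]
    if "_" not in stem:
        continue  # unexpected name, skip
    case_id, modality = stem.rsplit("_", 1)
    modalities[case_id].add(modality)

reference = next(iter(modalities.values()), set())
for case_id, mods in modalities.items():
    if mods != reference:
        print(f"{case_id}: expected {sorted(reference)}, found {sorted(mods)}")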

Label Format

Labels are encoded with two files per case: one NIfTI file which contains the instance segmentation and one JSON file which includes the "meta" information of each instance. The NIfTI file should contain all annotated instances, where each instance has a unique number in consecutive order (e.g. 0 ALWAYS refers to background, 1 refers to the first instance, 2 refers to the second instance ...). The case[XXXX].json label files need to provide the class of every instance in the segmentation. In this example the first instance is assigned to class 0 and the second instance is assigned to class 1:

{
    "instances": {
        "1": 0,
        "2": 1
    }
}

Each label file needs a corresponding JSON file to define the classes. We also wrote a Detection Annotation Guide which includes a dedicated section on the nnDetection format with additional visualizations :)
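
To make the pairing concrete, the following hedged sketch writes such a JSON file for a case with two instances (the instance-to-class mapping matches the example above; the path is a placeholder):

# hedged sketch: write the per-case label file described above
import json
from pathlib import Path

labels_dir = Path("/path/to/Task100_Example/raw_splitted/labelsTr")  # placeholder, adjust
case_id = "case0000"
# instance 1 belongs to class 0, instance 2 to class 1
with open(labels_dir / f"{case_id}.json", "w") as f:
    json.dump({"instances": {"1": 0, "2": 1}}, f, indent=4)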

Using nnDetection

The following paragraphs provide a high-level overview of the functionality of nnDetection and the available commands. A typical flow of commands would look like this:

nndet_prep -> nndet_unpack -> nndet_train -> nndet_consolidate -> nndet_predict

Each of these commands is explained below, and more detailed information can be obtained by running nndet_[command] -h in the terminal.

Planning & Preprocessing

Before training the networks, nnDetection needs to preprocess and analyze the data. The preprocessing stage normalizes and resamples the data, while the analyzed properties are used to create a plan which will be used for configuring the training. nnDetectionV0 requires a GPU with approximately the same amount of VRAM you are planning to use for training (we used an RTX 2080 Ti with no monitor attached to it) to perform live estimation of the VRAM used by the network. Future releases aim at improving this process.

nndet_prep [tasks] [-o / --overwrites] [-np / --num_processes] [-npp / --num_processes_preprocessing] [--full_check]

# Example
nndet_prep 000

# Script
# /scripts/preprocess.py - main()

The -o option can be used to overwrite parameters for planning and preprocessing (refer to the config files to see all parameters). The number of processes used for cropping and analysis can be adjusted with -np and the number of processes used for resampling can be set via -npp. The current values are fairly safe if 64GB of RAM is available. The --full_check option will iterate over the data before starting any preprocessing and check correct formatting of the data and labels. If any problems occur during preprocessing please run the full check to make sure that the format is correct.

After planning and preprocessing the resulting data folder structure should look like this:

[Task000_Example]
    [raw_splitted]
    [raw_cropped] # only needed for different resampling strategies
        [imagesTr] # stores cropped image data; contains npz files
        [labelsTr] # stores labels
    [preprocessed]
        [analysis] # some plots to visualize properties of the underlying data set
        [properties] # sufficient for new plans
        [labelsTr] # labels in original format (original spacing)
        [labelsTs] # optional
        [Data identifier; e.g. D3V001_3d]
            [imagesTr] # preprocessed data
            [labelsTr] # preprocessed labels (resampled spacing)
        - {name of plan}.pkl e.g. D3V001_3d.pkl

Before starting the training, copy the data (task folder, data set info and preprocessed folder are needed) to an SSD (highly recommended) and unpack the image data with:

nndet_unpack [path] [num_processes]

# Example (unpack example with 6 processes)
nndet_unpack ${det_data}/Task000D3_Example/preprocessed/D3V001_3d/imagesTr 6

# Script
# /scripts/utils.py - unpack()

Training and Evaluation

After the planning and preprocessing stage is finished the training phase can be started. The default setup of nnDetection is trained in a 5-fold cross-validation scheme. First, check which plans were generated during planning by looking for the pickled plan files in the preprocessed folder. In most cases only the default plan will be generated (D3V001_3d) but there might be instances (e.g. Kits) where the low resolution plan will be generated too (D3V001_3dlr1).
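
A quick way to list the generated plans (a hedged sketch; the path follows the folder layout shown above and is a placeholder):

# hedged sketch: list the pickled plan files produced during planning
from pathlib import Path

preprocessed = Path("/path/to/det_data/Task000D3_Example/preprocessed")  # placeholder, adjust
for plan in sorted(preprocessed.glob("*.pkl")):
    print(plan.name)  # e.g. D3V001_3d.pkl (and D3V001_3dlr1.pkl if generated)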

nndet_train [task] [-o / --overwrites] [--sweep]

# Example (train default plan D3V001_3d and search best inference parameters)
nndet_train 000 --sweep

# Script
# /scripts/train.py - train()

Use -o exp.fold=X to overwrite the trained fold; this should be run for all folds X = 0, 1, 2, 3, 4 (a convenience loop is sketched after the sweep command below)! The --sweep option tells nnDetection to look for the best hyperparameters for inference by empirically evaluating them on the validation set. Sweeping can also be performed later by running the following command:

nndet_sweep [task] [model] [fold]

# Example (sweep Task 000 of model RetinaUNetV001_D3V001_3d in fold 0)
nndet_sweep 000 RetinaUNetV001_D3V001_3d 0

# Script
# /experiments/train.py - sweep()
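
To train all five folds sequentially, a small wrapper like the following can be used (a hedged sketch; it simply shells out to the nndet_train command shown above):

# hedged sketch: run all five folds of task 000 with inference sweeps
import subprocess

for fold in range(5):
    subprocess.run(
        ["nndet_train", "000", "-o", f"exp.fold={fold}", "--sweep"],
        check=True,  # abort if a fold fails
    )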

Evaluation can be invoked by the following command (requires access to the model and preprocessed data):

nndet_eval [task] [model] [fold] [--test] [--case] [--boxes] [--seg] [--instances] [--analyze_boxes]

# Example (evaluate and analyze box predictions of default model)
nndet_eval 000 RetinaUNetV001_D3V001_3d 0 --boxes --analyze_boxes

# Script
# /scripts/train.py - evaluate()

# Note: --test invokes evaluation of the test set
# Note: --seg, --instances are placeholders for future versions and not working yet

Inference

After running all folds it is time to collect the models and create a unified inference plan. The following command will copy all the models and predictions from the folds. By adding the sweep_ options, the empirical hyperparameter optimization across all folds can be started. This will generate a unified plan for all models which will be used during inference.

nndet_consolidate [task] [model] [--overwrites] [--consolidate] [--num_folds] [--no_model] [--sweep_boxes] [--sweep_instances]

# Example
nndet_consolidate 000 RetinaUNetV001_D3V001_3d --sweep_boxes

# Script
# /scripts/consolidate.py - main()

For the final test set predictions, simply select the best model according to the validation scores and run the prediction command below. Data located in raw_splitted/imagesTs will be automatically preprocessed and predicted:

nndet_predict [task] [model] [--fold] [--num_tta] [--no_preprocess] [--check] [-npp / --num_processes_preprocessing] [--force_args]

# Example
nndet_predict 000 RetinaUNetV001_D3V001_3d --fold -1

# Script
# /scripts/predict.py - main()

If a self-made test set was used, evaluation can be performed by invoking nndet_eval with --test as described above.

Results

The final model directory will contain multiple subfolders with different information:

  • sweep: contains information from the parameter sweeps and is only used for debugging purposes
  • sweep_predictions: contains predictions with additional ensembler state information which are used during the empirical parameter optimization. Since these save the model output in a fairly raw format (to avoid multiple model prediction runs during the parameter sweeps), they are bigger than the predictions seen during normal inference
  • [val/test]_predictions: contains the predictions of the validation/test set in the restored image space
  • val_predictions_preprocessed: contains predictions in the preprocessed image space, i.e. the predictions from the resampled and cropped data. They are saved for debugging purposes
  • [val/test]_results: contains the validation/test results computed by nnDetection. More information on the metrics can be found below
  • val_results_preprocessed: contains validation results in the preprocessed image space; saved for debugging purposes
  • val_analysis[_preprocessed] experimental: provides additional analysis information of the predictions. This feature is marked as experimental since it uses a simplified matching algorithm and should only be used to gain an intuition of potential improvements

The following section contains some additional information regarding the metrics which are computed by nnDetection. They can be found in [val/test]_results/results_boxes.json (a small loading sketch follows the list):

  • AP_IoU_0.10_MaxDet_100: the main metric used for the evaluation in our paper. It is evaluated at an IoU threshold of 0.1 and 100 predictions per image. Note that this is a hard limit; if images contain many more instances, this leads to incorrect results.
  • mAP_IoU_0.10_0.50_0.05_MaxDet_100: the commonly used COCO mAP metric evaluated at multiple IoU values. The IoU thresholds are different from those of the COCO evaluation to account for the generally lower IoU in 3D data.
  • [num]_AP_IoU_0.10_MaxDet_100: AP metric computed per class.
  • FROC_score_IoU_0.10: FROC score with default FPPI values (1/8, 1/4, 1/2, 1, 2, 4, 8). Note (in contrast to the AP implementation): the multi-class case does not compute the metric per class but puts all predictions/ground truth into a single large pool (similar to AP_pool from https://arxiv.org/abs/2102.01066), and thus inter-class calibration is important here. In most cases, simply averaging the [num]_FROC scores manually to assign the same weight to each class should be preferred.
  • case evaluation experimental: it is possible to run case evaluations with nnDetection, but this is still experimental, undergoing additional testing, and might be changed in the future.
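
A hedged sketch for inspecting these metrics and manually averaging the per-class FROC scores (the exact key names inside results_boxes.json are assumptions based on the patterns listed above; the path is a placeholder):

# hedged sketch: load box results and average per-class FROC scores
import json
from pathlib import Path

results_file = Path("/path/to/model_dir/val_results/results_boxes.json")  # placeholder, adjust
results = json.loads(results_file.read_text())

print("AP@0.10:", results.get("AP_IoU_0.10_MaxDet_100"))
# assumed pattern for per-class FROC keys: "[num]_FROC..."
froc_per_class = {k: v for k, v in results.items() if "_FROC" in k and k and k[0].isdigit()}
if froc_per_class:
    print("Mean per-class FROC:", sum(froc_per_class.values()) / len(froc_per_class))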

nnU-Net for Detection

Besides nnDetection we also include the scripts to prepare and evaluate nnU-Net in the context of object detection. Both frameworks need to be configured correctly before running the scripts to assure correctness. After preparing the data set in the nnDetection format (which is a superset of the nnU-Net format) it is possible to export it to nnU-Net via scripts/nnunet/nnunet_export.py. Since nnU-Net needs task ids without any additions it may be necessary to overwrite the task name via the -nt option for some data sets (e.g. Task019FG_ADAM needs to be renamed to Task019_ADAM). Follow the usual nnU-Net preprocessing and training pipeline to generate the needed models. Use the --npz option during training to save the predicted probabilities which are needed to generate the detection results. After determining the best ensemble configuration from nnU-Net pass all paths to scripts/nnunet/nnunet_export.py which will ensemble and postprocess the predictions for object detection. By default the nnU-Net Plus scheme will be used which incorporates the empirical parameter optimization step. Use the --simple flag to switch to the nnU-Net basic configuration.

Pretrained models

Coming Soon

FAQ & Common Issues

Installation & Initial Setup Errors
  1. Error: Undefined CUDA symbols when importing nndet._C, other import-related errors from nndet._C, or CUDA-related ARCH errors. nnDetection includes additional CUDA code which needs to be compiled upon installation and thus requires correct configuration of the CUDA dependencies. Please double-check the CUDA versions of your PC, PyTorch, torchvision and the nnDetection build. This can be done by running nndet_env if the installation succeeded, or by running python scripts/utils.py. An example output of the command is shown below:
----- PyTorch Information -----
PyTorch Version: 1.11.0+cu113
PyTorch Debug: False
PyTorch CUDA: 11.3
PyTorch Backend cudnn: 8200
PyTorch CUDA Arch List: ['sm_37', 'sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80', 'sm_86']
PyTorch Current Device Capability: (7, 5)
PyTorch CUDA available: True

----- System Information -----
System NVCC: nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Aug_15_21:14:11_PDT_2021
Cuda compilation tools, release 11.4, V11.4.120
Build cuda_11.4.r11.4/compiler.30300941_0

System Arch List: None
System OMP_NUM_THREADS: 1
System CUDA_HOME is None: True
System CPU Count: 8
Python Version: 3.8.11 (default, Aug  3 2021, 15:09:35)
[GCC 7.5.0]

----- nnDetection Information -----
det_num_threads 6
det_data is set True
det_models is set True

Things to look out for:

Make sure that the versions of PyTorch CUDA and NVCC CUDA match (a minor version mismatch, as in this case, will work without error but could potentially introduce bugs).

OMP_NUM_THREADS should always be set to 1 and det_num_threads should always be lower than or equal to the System CPU Count.

  2. Error persists even after fixing the environment: Make sure to delete the build folder before rerunning the installation, since the code won't be recompiled otherwise.

  3. Error: No kernel image is available for execution

You are probably executing the build on a machine with a GPU architecture which was not present/set during the build.

Please check the link to find the correct SM architecture and set TORCH_CUDA_ARCH_LIST appropriately (check the Dockerfile for an example). As before, make sure to delete the build folder when rerunning the installation process.

  4. Please open an Issue and provide your environment as obtained by nndet_env.
Training doesn't start or is stuck
  1. Please run nndet_env and make sure OMP_NUM_THREADS is set to 1. No other values are supported here. To increase the number of workers used for IO and augmentation adjust det_num_threads.

  2. Try running the training without multiprocessing as a sanity check: nndet_train XXX -o augment_cfg.multiprocessing=False. Don't use this for the full training; this is just one step of the debugging process.

  3. Please open an Issue and provide your environment as obtained by nndet_env and report if the training without multiprocessing started correctly.

(Slow) Training Speed

The training time of nnDetection should be roughly equal for most data sets: 2 days (1-2 hours per epoch) with the mixed precision 3D speedup and 4 days without (these numbers refer to an RTX 2080 Ti; newer GPUs can be significantly faster, and on high-end configurations training takes about 1 day). It is highly recommended to use GPUs with Tensor Cores to enable fast mixed precision training for reasonable turnaround times. There can be several reasons for slow training:

  1. PyTorch < 1.9 did not provide the training speedup for mixed-precision 3D convs in its pip-installable version, and it was necessary to build it from source (the docker build of nnDetection also provides the speedup). Newer versions like 1.10 and 1.11 provide the mixed precision speedup in their pip version (only tested with CUDA 11.X).

  2. There is a bottleneck in the setup. This can be identified as follows:

    1. Check the GPU utilization: it should be high most of the time. If it is high, the missing PyTorch speedup is the likely cause; if it isn't, there is either a CPU or an IO bottleneck.
    2. Check the CPU utilization: if the CPU utilization is maxed out (and the GPU utilization isn't), more threads are needed for augmentation. Adjust det_num_threads (similar to num workers in the normal PyTorch dataloaders) to the available CPU resources: set it as high as possible but not higher than the number of available CPU threads. Increasing the number of workers increases RAM consumption, so make sure not to run out of memory there, otherwise training will become extremely slow and the workstation might crash. If both GPU and CPU utilization are low, it is an IO bottleneck, and it is quite hard to do anything about this (a typical SSD with ~500 MB/s read speed ran fine for my experiments).

Example for det_num_threads:

  • CPUs with less cores but high clock speed: Needs a lower det_num_threads value. On an Intel i7 9700 (non k) det_num_threads=6 reaches 90+ % GPU usage.
  • CPUs with many cores but lower clock speed: Needs a high det_num_threads value. In cluster environments det_num_threads=12 reaches ~80+% GPU usage.
GPU requirements
nnDetection v0.1 was developed for GPUs with at least 11GB of VRAM (e.g. RTX 2080 Ti, TITAN RTX). All of our experiments were conducted with an RTX 2080 Ti. While the memory requirement can be adjusted by manipulating the corresponding settings, we recommend using the default values for now. Future releases will refactor the planning stage to improve the VRAM estimation and add support for different memory budgets.
Training with bounding boxes
The first release of nnDetection focuses on 3D medical images and Retina U-Net. As a consequence, training (specifically planning and augmentation) requires segmentation annotations. In many cases this limitation can be circumvented by converting the bounding boxes into segmentations.
Mask RCNN and 2D Data sets
2D data sets and Mask R-CNN are not supported in the first release. We hope to provide these in the future.
Multi GPU Training
Multi GPU training is not officially supported yet. Inference and the metric computation are not properly designed to support these use cases!
Prebuilt package
We are planning to provide prebuilt wheels in the future, but none are available right now. Please use the provided Dockerfile or the installation instructions to run nnDetection.

Acknowledgements

nnDetection combines information from multiple open source repositories we wish to acknowledge for their awesome work, please check them out!

nnU-Net is a self-configuring method for semantic segmentation, and many steps of nnDetection follow in the footsteps of nnU-Net.

The Medical Detection Toolkit introduced the first codebase for 3D object detection, and multiple tricks were transferred to nnDetection to assure an optimal configuration for medical object detection.

nnDetection tries to follow the interfaces of torchvision to make it easy to understand for everyone coming from the 2D (and video) detection scene. As a result, we based our implementations of some of the core modules on the torchvision implementation.

Funding

Part of this work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 410981386 and the Helmholtz Imaging Platform (HIP), a platform of the Helmholtz Incubator on Information and Data Science.

nnDetection's People

Contributors

alibool, dboun, joeranbosma, kapsner, kretes, machur, mibaumgartner


nnDetection's Issues

[Question] Test error of undefined symbol

❓ Question

python3.8
pytorch1.9.0 / 1.8.0
cuda11.1 /11.0
GCC 7.5.0
The install looks successful, but when I run python -c "import torch; import nndet._C; import nndet"
here is the error:
ImportError: /nnDetection/nndet/_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _Z8nms_cudaRKN2at6TensorES2_f

Does anybody have any idea about this error?

[Bug] '_seg' not allowed in Task name/case path

💀 Bug

For the past week, I've been trying to run a training, but it continued to fail even though all other trainings I've done were completely fine. Preprocessing seemed fine, but when running the nndet_train command, nnDetection couldn't find any files, even though all my files were in the folders as expected, just like the other trainings I've done.
I thought it was a problem on my side, so I investigated it, and it turns out it's on your side.

In file nnDetection/nndet/io/utils.py on line 50 you filter paths which contain the substring '_seg': case_paths = [f for f in case_paths if "_seg" not in f].
Unfortunately, my task name was "Task504_segmwithmodalities", which has '_seg' in it. Hence, all the cases were removed from the list and no samples could be found when trying to create the train splits.
Please consider not using the complete absolute path, or at least warn in your readme about path names with '_seg' in them.

[Bug] Lower boundary of clipping needs to be -1

💀 Bug

Environment

Please provide some information about the used environment.

How was nnDetection installed [docker | source]:

Environment Information:

[paste here]

If necessary, please provide the used run command with all overwrites:

[paste here]

How to change the patch size

❓ Question

Hi, I've been trying nnDetection and so far the results are really nice. I would like to change the patch size though. I've found how to overwrite it in the yaml file but whenever I do this I keep getting the error:
RuntimeError: The size of tensor a (19) must match the size of tensor b (20) at non-singleton dimension 4
What are the constraints to changing the patch size? I tried several different numbers but can't seem to find a good fit.

[Question] Breaks during preprocessing of Task007_Pancreas

❓ Question

I was preparing the pancreas dataset and the case ids were not extracted flawlessly. I was able to fix it and make it run by setting the remove_modality flag to False. To be precise, line 188 in projects/Task001_Decathlon/scripts/prepare.py: case_ids = get_case_ids_from_dir(source_data_dir, remove_modality=False). I was wondering if I did something wrong or if this is some kind of bug.

Thanks!

Slow training speed

Hello, I'm trying to train task 16 with the LUNA dataset, but the training seems very slow, and the GPU is not used most of the time because it has to wait for the volume to be prepared. How can I increase the volume preparation speed to take advantage of the GPU?

Hello, I want to know whether nnDetection supports Python 3.7 and whether it can be trained with a png-format dataset.

Hello, I have two questions about using it.

  1. In the code I found that nnDetection can only support Python >= 3.8, but I want to use Colab, which only provides Python 3.7. To solve this, I modified config.py, changed the ':=' operator, and successfully installed it. I then successfully ran the nndet_example code. But I haven't tried the other commands, so I am wondering whether the preprocessing and training can support Python 3.7 apart from the ':=' operator?
  2. Another question is whether I can use the png-format dataset I found before to train with nnDetection? (The dataset consists of png images and a csv file which includes the coordinates.)

[Question] Low prediction scores

❓ Question

Hi, I've been using nnDetection for quite some time now, and in my experiments the prediction scores I get in test_predictions are usually quite low (max 0.45 most of the time). From previous detection networks I would get scores around 0.9 or even higher. Is it common to have these scores in nnDetection? Should I train for more epochs? Maybe you have a bit more insight into this than me.
Thanks in advance. The results are good btw, thanks for sharing this approach.

[Bug] ITK ERROR: ITK only supports orthonormal direction cosines. No orthonormal definition found! (SimpleITK >= 2.1.0)

Hello everyone,

Thanks for this great tool :)

💀 Bug

When using the nndet_prep [task] --full-check command, the following error occurred:

File "<nnDetection>/nndet/utils/check.py", line 213, in _full_check
	img_itk_seq = [load_sitk(cp) for cp in case_paths]
File "<nnDetection>/nndet/utils/check.py", line 213, in <listcomp>
	img_itk_seq = [load_sitk(cp) for cp in case_paths]
File "<nnDetection>/nndet/io/itk.py", line 107, in load_sitk
	return sitk.ReadImage(str(path), **kwargs)
File "<CondaEnv>/lib/python3.9/site-packages/SimpleITK/extra.py", line 346, in ReadImage
	return reader.Execute()
File "<CondaEnv>/lib/python3.9/site-packages/SimpleITK/SimpleITK.py", line 8015, in Execute
	return _SimpleITK.ImageFileReader_Execute(self)
RuntimeError: Exception thrown in SimpleITK ImageFileReader_Execute: /tmp/SimpleITK-build/ITK/Modules/IO/NIFTI/src/itkNiftiImageIO.cxx:1980:
ITK ERROR: ITK only supports orthonormal direction cosines.  No orthonormal definition found!

This was noticed with the nnU-Net framework when mixing different versions of SimpleITK, and an issue was raised (SimpleITK/SimpleITK#1433).

When using SimpleITK 2.0.2, this error does not occur. This seems to be due to recent changes with ITK when handling Nifti headers (see InsightSoftwareConsortium/ITK#2674 for further details and ongoing conversation).

I would think that freezing SimpleITK to < 2.1.0 would temporarily solve the issue.

Best

Environment

Environment Information:

----- PyTorch Information -----
PyTorch Version: 1.9.0
PyTorch Debug: False
PyTorch CUDA: 10.2
PyTorch Backend cudnn: 7605
PyTorch CUDA Arch List: ['sm_37', 'sm_50', 'sm_60', 'sm_61', 'sm_70', 'sm_75', 'compute_37']
PyTorch Current Device Capability: (7, 0)
PyTorch CUDA available: True

 


----- System Information -----
System NVCC: nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

 

System Arch List: None
System OMP_NUM_THREADS: 1
System CUDA_HOME is None: True
System CPU Count: 6
Python Version: 3.9.6 (default, Jul 30 2021, 16:35:19)
[GCC 7.5.0]

 


----- nnDetection Information -----
det_num_threads 12
det_data is set True
det_models is set True

[Bug]

Hello,

When I'm preprocessing the data, I run into an error:

File "/opt/nnDetection/nndet/preprocessing/preprocessor.py", line 259, in run_process
data, seg, properties = self.apply_process( File "/opt/nnDetection/nndet/preprocessing/preprocessor.py", line 304, in apply_process data, seg, after = self.resample(
File "/opt/nnDetection/nndet/preprocessing/preprocessor.py", line 379, in resample
data, seg = resample_patient(data,
File "/opt/nnDetection/nndet/preprocessing/resampling.py", line 53, in resample_patient
return nn_preprocessing.resample_patient(data=data, seg=seg, original_spacing=original_spacing,
File "/opt/conda/lib/python3.8/site-packages/nnunet/preprocessing/preprocessing.py", line 102, in resample_patient
seg_reshaped = resample_data_or_seg(seg, new_shape, True, axis, order_seg, do_separate_z, cval=cval_seg,
File "/opt/conda/lib/python3.8/site-packages/nnunet/preprocessing/preprocessing.py", line 193, in resample_data_or_seg
reshaped.append(resize_fn(data[c], new_shape, order, cval=cval, **kwargs)[None])
TypeError: resize_segmentation() got an unexpected keyword argument 'cval'

Is this related to the nnunet package?

[Question] Should training take so long?

❓ Question

I prepared and preprocessed dataset Task010_Colon and there were no issues, but when I started training the net via
nndet_train 010 --sweep
it took an extremely long time to finish Epoch 0 - almost 9 hours. I am using CUDA 11.2, an Nvidia Tesla K40 XL GPU, PyTorch 1.7.1+cu110, and I am working on a university cluster which should allow me to train my net fast.
Is there anything I can do to make it faster? I'm not sure if it's my environment or my nnDetection configuration.

[Question] Replicating results from the paper

Hello,

I tried to replicate the results of nnDetection on LUNA16 as in your paper, but I cannot achieve the same performance reported in the paper (CPM=0.92). What I obtain is CPM=0.84.
Did you use a different set of parameters than presented in the paper? Or did you fine-tune the model?

Thanks,

[Question] What's the difference between nnDetection and MedicalDetectionToolkit

❓ Question

Hi, Michael:
Thanks for sharing your nnDetection codebase. It seems great and has potential for medical detection tasks.
I am familiar with medicaldetectiontoolkit, so I have two questions and hope for your reply.

(1) What's the main difference between medicaldetectiontoolkit and nnDetection?
As far as I know, when upgrading the pytorch version from 0.4.1 to 1.0+, medicaldetectiontoolkit gained great training and inference performance. But its owner pfjaeger seems not to have made it the default branch.

(2) Detailed results on different datasets should be checked carefully.
Instead of citing papers directly, I think we had better check their real performance. For example, whether it is 10-fold cross-validation, whether a lung mask was used for pre/post processing, whether it is a 5-fold model, etc.

Most so-called medical image SOTA papers seldom open-source their official code. That means we cannot reproduce and verify their real numerical results. By the way, that's why many researchers, including me, have much respect for Fabian and you guys at DKFZ.

Just as you read in the former issue, it claims its results on LUNA16 are 0.125(72.3) 0.25(83.8) 0.5(88.7) 1(91.1) 2(92.8) 4(93.4) 8(94.8), average 88.1.

Here are my re-implementation results on LUNA16 with medicaldetectiontoolkit.
(1) Dataset
Training and validation set: subsets 0-7.
Test set: subsets 8, 9.
(2) Results
(1) 5 Model + 2Dc MaskRCNN (mAP10: 0.6894)
0.125(0.570) 0.25(0.659) 0.5(0.776) 1(0.879) 2(0.910) 4(0.932) 8(0.973), average 0.814.
(2) 1 Model + NoTTA + 2Dc MaskRCNN (mAP10: 0.755)
0.125(0.404) 0.25(0.605) 0.5(0.709) 1(0.834) 2(0.883) 4(0.915) 8(0.937), average 0.755.
(3) 1 Model + NoTTA + 2Dc FasterRCNN (mAP10: 0.5943)
0.125(0.501) 0.25(0.632) 0.5(0.767) 1(0.830) 2(0.883) 4(0.915) 8(0.937), average 0.782.

These are part of my results on LUNA16, and they are close to the medicaldetectiontoolkit author's results on LIDC-IDRI.
Table 1 from "Retina U-Net: Embarrassingly Simple Exploitation of Segmentation Supervision for Medical Object Detection"
https://ml4health.github.io/2019/pdf/232_ml4h_preprint.pdf

Hoping to contact with you in the future.

Best,

How to visualize results?

Hello, do you have any function to visualize the validation results (bounding box predictions, for example) on some images?

Thanks

The results of CPMNet in MICCAI 2021 (Song et al.)

Hi, Dr. Michael Baumgartner,
Great work for medical image detection, and thank you for reviewing our work (CPMNet in MICCAI 2021) in this repo. To better report the results of our work, we list the detailed results: 0.125(72.3) 0.25(83.8) 0.5(88.7) 1(91.1) 2(92.8) 4(93.4) 8(94.8), average 88.1. More recent work, SANet: A slice-aware network for pulmonary nodule detection (TPAMI 2021), also reports results on LUNA16.
Best,
Xiangde.

[Question] Train on 2D medical images

❓ Question

Hi!
Is it worth experimenting with the nnDetection framework on 2D medical images, or is it designed (as the description says) solely for "3D (volumetric) medical object detection"?

Running the preprocess.py script encounters the problem 'CUDA error: an illegal memory access was encountered'

The following is the terminal output:
2021-06-05 18:06:10.058 | INFO | nndet.planning.estimator:_estimate_mem_available:153 - Estimating in memory.
2021-06-05 18:06:10.059 | INFO | nndet.planning.estimator:measure:192 - Estimating on cuda:0 with shape [1, 192, 192, 192] and batch size 4 and num_instances 1
2021-06-05 18:06:13.341 | INFO | nndet.planning.estimator:measure:242 - Caught error (If out of memory error do not worry): CUDA error: an illegal memory access was encountered
Traceback (most recent call last):
File "preprocess.py", line 484, in
main()
File "/home/shawnyuen/projects/nnDetection/nndet/utils/check.py", line 58, in wrapper
return func(*args, **kwargs)
File "preprocess.py", line 477, in main
run(OmegaConf.to_container(cfg, resolve=True),
File "preprocess.py", line 404, in run
run_planning_and_process(
File "preprocess.py", line 231, in run_planning_and_process
plan_identifiers = planner.plan_experiment(
File "/home/shawnyuen/projects/nnDetection/nndet/planning/experiment/v001.py", line 43, in plan_experiment
plan_3d = self.plan_base_stage(
File "/home/shawnyuen/projects/nnDetection/nndet/planning/experiment/base.py", line 234, in plan_base_stage
architecture_plan = architecture_planner.plan(
File "/home/shawnyuen/projects/nnDetection/nndet/planning/architecture/boxes/c002.py", line 127, in plan
res = super().plan(
File "/home/shawnyuen/projects/nnDetection/nndet/planning/architecture/boxes/base.py", line 343, in plan
patch_size = self._plan_architecture(
File "/home/shawnyuen/projects/nnDetection/nndet/planning/architecture/boxes/c002.py", line 205, in _plan_architecture
_, fits_in_mem = self.estimator.estimate(
File "/home/shawnyuen/projects/nnDetection/nndet/planning/estimator.py", line 127, in estimate
res = self._estimate_mem_available(
File "/home/shawnyuen/projects/nnDetection/nndet/planning/estimator.py", line 154, in _estimate_mem_available
fixed, dynamic = self.measure(shape=target_shape,
File "/home/shawnyuen/projects/nnDetection/nndet/planning/estimator.py", line 253, in measure
network.cpu()
File "/home/shawnyuen/anaconda3/envs/nndet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 471, in cpu
return self._apply(lambda t: t.cpu())
File "/home/shawnyuen/anaconda3/envs/nndet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "/home/shawnyuen/anaconda3/envs/nndet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "/home/shawnyuen/anaconda3/envs/nndet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
[Previous line repeated 4 more times]
File "/home/shawnyuen/anaconda3/envs/nndet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 381, in _apply
param_applied = fn(param)
File "/home/shawnyuen/anaconda3/envs/nndet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 471, in
return self._apply(lambda t: t.cpu())
RuntimeError: CUDA error: an illegal memory access was encountered

The following are my commands:

python generate_example.py --full --num_processes 12

python preprocess.py 000 -np 2 -npp 2

When processing this size, [1, 192, 192, 192], the error is raised.

[Bug] TypeError: resample_patient() got an unexpected keyword argument 'cval_data'

Hello, when I try to use nndet_prep on my data for preprocessing and planning, an error occurs at the end, as follows:

Traceback (most recent call last):
File "/home/zmin/.conda/envs/nndetection/bin/nndet_prep", line 33, in <module>
sys.exit(load_entry_point('nndet', 'console_scripts', 'nndet_prep')())
File "/home/zmin/nnDetection/nndet/utils/check.py", line 58, in wrapper
return func(*args, **kwargs)
File "/home/zmin/nnDetection/scripts/preprocess.py", line 475, in main
run(OmegaConf.to_container(cfg, resolve=True),
File "/home/zmin/nnDetection/scripts/preprocess.py", line 404, in run
run_planning_and_process(
File "/home/zmin/nnDetection/scripts/preprocess.py", line 238, in run_planning_and_process
planner.run_preprocessing(
File "/home/zmin/nnDetection/nndet/planning/experiment/base.py", line 348, in run_preprocessing
preprocessor.run(
File "/home/zmin/nnDetection/nndet/preprocessing/preprocessor.py", line 195, in run
p.starmap(self.run_process,
File "/home/zmin/.conda/envs/nndetection/lib/python3.8/multiprocessing/pool.py", line 372, in starmap
return self._map_async(func, iterable, starmapstar, chunksize).get()
File "/home/zmin/.conda/envs/nndetection/lib/python3.8/multiprocessing/pool.py", line 771, in get
raise self.value
TypeError: resample_patient() got an unexpected keyword argument 'cval_data'

Do you have any idea about this? Let me know if more information is needed.

Thanks!
Zhe

[Question] About inference

Hello!
Where is the results folder after inference? I only found a 'pkl' file;
isn't there a NIfTI file?

[Question] How to set the gpu id to use?

Hi, in nnunet, using CUDA_VISIBLE_DEVICES=1 nnUNet_train .... sets which GPU to use in the training process.
I have tried this in nnDetection, but it doesn't seem to work.
Is there currently a mechanism to set the preferred GPU id on a multi-GPU machine for nnDetection?
Thanks!

Train on LIDC and test on LUNA

Hello, as I read your paper, I understand that you trained your model on a pool of datasets including LIDC, then tested on LUNA16. I have two questions about this:

1/ Does Figure 2 in the paper mean you used all 888 images of LUNA16 as the test set? If not, how did you obtain this result?
2/ LUNA16 is a subset of LIDC; even though you explained this in the paper, I still suspect that the two datasets mostly overlap. Could you explain a bit more on this?

Thanks for your help.

[Question] Steps to Train on LUNA16

Dear all, I want to train nnDetection on the LUNA16 dataset following the instructions at https://github.com/MIC-DKFZ/nnDetection and https://github.com/MIC-DKFZ/nnDetection/blob/main/projects/Task016_Luna/README.md.

To prepare the dataset, I ran the command python prepare.py and got the outputs shown below:
[screenshot]
Inside the processed and raw_splitted folders:
[screenshot]
[screenshot]

And then I ran the nndet_unpack command and the command to train the model. The error looks like:
[screenshot]

I changed the name of the pkl as:
[screenshot]

Then I reran the code, and another issue occurred:
[screenshot]

I am wondering what the steps are to train the model on LUNA16 and what I did wrong? Thank you so much!

Then I changed the path to the image dir and saw another error:
[screenshot]

I only saw the output file ending with "nii.gz".

Run validation on a test set

Hello, I trained 5 folds of the LUNA16 task. Now I have another dataset, with images and masks, and I want to run evaluation on this dataset using the trained models. I ran nndet_predict and obtained the folder test_predictions for those images; now I try to run nndet_eval and the code asks for masks processed in preprocessed/labelsTs.

How can I make this folder?

Thanks,

Van KHoa

Error during training

Bug: during the training phase
File "/anaconda/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py", line 161, in scale
assert outputs.is_cuda or outputs.device.type == 'xla'
AssertionError
Exception ignored in: <function tqdm.__del__ at 0x7f9ba338de50>
Traceback (most recent call last):
File "/anaconda/lib/python3.8/site-packages/tqdm/std.py", line 1145, in __del__
File "/anaconda/lib/python3.8/site-packages/tqdm/std.py", line 1299, in close
File "/anaconda/lib/python3.8/site-packages/tqdm/std.py", line 1492, in display
File "/anaconda/lib/python3.8/site-packages/tqdm/std.py", line 1148, in __str__
File "/anaconda/lib/python3.8/site-packages/tqdm/std.py", line 1450, in format_dict
TypeError: cannot unpack non-iterable NoneType object

Environment
Please provide some information about the used environment.
Env from the set up using source and not docker
Cmd : nndet_train 1000 --sweep

It seems the issue is related to the fact that TensorMetric is not updated to the cuda device. The same issue is addressed in Lightning-AI/pytorch-lightning#2274.

[Question] Training a few datasets at the same time

Hello :)
Is it possible to execute multiple trainings at once?
nndet_train 007 --sweep
nndet_train 008 --sweep
nndet_train 010 --sweep
I would execute them via individual scripts in batch mode and the outputs would be saved in different files.

KeyError: 'instances' when running nndet_prep on nndet_example dataset

Hello everyone,

When using the nndet_prep 000D3 command (on the test dataset), the following error occurred:

multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/anaconda/envs/nnDetection/lib/python3.9/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/anaconda/envs/nnDetection/lib/python3.9/multiprocessing/pool.py", line 51, in starmapstar
    return list(itertools.starmap(args[0], args[1]))
  File "/home/ndebs/code/nndetection/nnDetection/nndet/planning/properties/instance.py", line 155, in analyze_instances_per_case
    props["num_instances"] = count_instances(props, all_classes)
  File "/home/ndebs/code/nndetection/nnDetection/nndet/planning/properties/instance.py", line 177, in count_instances
    instance_classes = list(map(int, props["instances"].values()))
KeyError: 'instances'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/anaconda/envs/nnDetection/bin/nndet_prep", line 33, in <module>
    sys.exit(load_entry_point('nndet', 'console_scripts', 'nndet_prep')())
  File "/home/ndebs/code/nndetection/nnDetection/nndet/utils/check.py", line 58, in wrapper
    return func(*args, **kwargs)
  File "/home/ndebs/code/nndetection/nnDetection/scripts/preprocess.py", line 475, in main
    run(OmegaConf.to_container(cfg, resolve=True),
  File "/home/ndebs/code/nndetection/nnDetection/scripts/preprocess.py", line 394, in run
    run_dataset_analysis(cropped_output_dir=Path(cfg["host"]["cropped_output_dir"]),
  File "/home/ndebs/code/nndetection/nnDetection/scripts/preprocess.py", line 198, in run_dataset_analysis
    _ = analyzer.analyze_dataset(properties)
  File "/home/ndebs/code/nndetection/nnDetection/nndet/planning/analyzer.py", line 80, in analyze_dataset
    props.update(property_fn(self))
  File "/home/ndebs/code/nndetection/nnDetection/nndet/planning/properties/instance.py", line 46, in analyze_instances
    props_per_case = run_analyze_instances(analyzer, all_classes)
  File "/home/ndebs/code/nndetection/nnDetection/nndet/planning/properties/instance.py", line 77, in run_analyze_instances
    props = p.starmap(analyze_instances_per_case, zip(
  File "/anaconda/envs/nnDetection/lib/python3.9/multiprocessing/pool.py", line 372, in starmap
    return self._map_async(func, iterable, starmapstar, chunksize).get()
  File "/anaconda/envs/nnDetection/lib/python3.9/multiprocessing/pool.py", line 771, in get
    raise self._value
KeyError: 'instances'

I am facing the exact same problem when running nndet_prep on my own dataset. It seems that the key "instances" is not created in the pickle files in "data/Task000D3_Example/raw_cropped/labelsTr/" (I printed the dictionary inside caseXX.pkl to check).

Do you have any idea how to solve this problem?

Best

Noëlie

Resume training

Hello, is it possible to interrupt the training then resume it?

Thanks

low resolution scheme

Hello, I have some questions about the low resolution scheme:

1/ When is it generated?
2/ How is it used later in training and validation?

Thanks for your help,

Van Khoa

[Question] Error when installing nnDetection

❓ Question

Hi! Awesome work :)

Recently, I tried to install nnDetection with pip install -e ., but I ran into a problem:

ERROR: Command errored out with exit status 1:
command: /home/sensetime/anaconda3/envs/nndet/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/home/sensetime/Documents/FHwork/code/nnDetection-main/setup.py'"'"'; file='"'"'/home/sensetime/Documents/FHwork/code/nnDetection-main/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' develop --no-deps
cwd: /home/sensetime/Documents/FHwork/code/nnDetection-main/
Complete output (106 lines):
......
ERROR: Command errored out with exit status 1: /home/sensetime/anaconda3/envs/nndet/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/home/sensetime/Documents/FHwork/code/nnDetection-main/setup.py'"'"'; file='"'"'/home/sensetime/Documents/FHwork/code/nnDetection-main/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' develop --no-deps Check the logs for full command output.
[screenshots]

Thank you for your help!

Resample mask error

Hello, I received this warning and error in the preprocessing stage. Is this due to the resampling process making some instances disappear?

2021-07-19 15:06:43.138 | WARNING | nndet.planning.properties.instance:instance_class_and_region_sizes:213 - Instance lost. Found {1: 0, 2: 0, 3: 0, 4: 0} in properties but [1 2 3] in seg of 1_2_840_113654_2_55_446543879.

Thanks

[Question] How to actually trigger the lr model

❓ Question

I have a question regarding the low-res model. plan['trigger_lr1'] has been set to True, but this key is never used during training. So should I manually change plan['data_identifier'] += 'lr1' in train.py to force the use of the lr model, or is there a more intuitive way?

Thanks in advance!

[Question] How to run all five folds?

Hi, when I use the following command:
nndet_train xxx --sweep, where xxx denotes the task id,
only fold 0 is generated in the model folder. How can I run the other four folds?
Thank you for your help!

[Question]

Hello, I face this issue when running:

nndet_prep 000

Please cite the following paper when using nnUNet:

Isensee, F., Jaeger, P.F., Kohl, S.A.A. et al. "nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation." Nat Methods (2020). https://doi.org/10.1038/s41592-020-01008-z

If you have questions or suggestions, feel free to open an issue at https://github.com/MIC-DKFZ/nnUNet

/home/[email protected]/.conda/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/metrics/__init__.py:43: LightningDeprecationWarning: pytorch_lightning.metrics.* module has been renamed to torchmetrics.* and split off to its own package (https://github.com/PyTorchLightning/metrics) since v1.3 and will be removed in v1.5
rank_zero_deprecation(
'det_verbose' environment variable not set. Continue in verbose mode.
/home/[email protected]/.conda/envs/nndet/lib/python3.8/site-packages/hydra/experimental/initialize.py:68: UserWarning: hydra.experimental.initialize_config_module() is no longer experimental. Use hydra.initialize_config_module().
warnings.warn(
/home/[email protected]/.conda/envs/nndet/lib/python3.8/site-packages/hydra/experimental/compose.py:16: UserWarning: hydra.experimental.compose() is no longer experimental. Use hydra.compose()
warnings.warn(
Start dataset info check.
Dataset info check complete.
Start data and label check.
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 13264.72it/s]
Data and label check complete.
Start data and label check.
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 14237.28it/s]
Data and label check complete.
2021-06-25 14:50:28.183 | INFO | scripts.preprocess:run_cropping_and_convert:144 - Running cropping with overwrite False.
2021-06-25 14:50:28.203 | INFO | nndet.preprocessing.crop:process_data:223 - Processing case case_5
2021-06-25 14:50:28.205 | WARNING | nndet.preprocessing.crop:process_data:235 - Case case_5 already exists and overwrite is deactivated
2021-06-25 14:50:28.204 | INFO | nndet.preprocessing.crop:process_data:223 - Processing case case_1
2021-06-25 14:50:28.204 | INFO | nndet.preprocessing.crop:process_data:223 - Processing case case_6
2021-06-25 14:50:28.206 | INFO | nndet.preprocessing.crop:process_data:223 - Processing case case_7
2021-06-25 14:50:28.206 | WARNING | nndet.preprocessing.crop:process_data:235 - Case case_6 already exists and overwrite is deactivated
2021-06-25 14:50:28.206 | WARNING | nndet.preprocessing.crop:process_data:235 - Case case_7 already exists and overwrite is deactivated
2021-06-25 14:50:28.206 | WARNING | nndet.preprocessing.crop:process_data:235 - Case case_1 already exists and overwrite is deactivated
2021-06-25 14:50:28.206 | INFO | nndet.preprocessing.crop:process_data:223 - Processing case case_8
2021-06-25 14:50:28.207 | INFO | nndet.preprocessing.crop:process_data:223 - Processing case case_0
2021-06-25 14:50:28.207 | WARNING | nndet.preprocessing.crop:process_data:235 - Case case_8 already exists and overwrite is deactivated
2021-06-25 14:50:28.207 | WARNING | nndet.preprocessing.crop:process_data:235 - Case case_0 already exists and overwrite is deactivated
2021-06-25 14:50:28.207 | INFO | nndet.preprocessing.crop:process_data:223 - Processing case case_4
2021-06-25 14:50:28.207 | INFO | nndet.preprocessing.crop:process_data:223 - Processing case case_9
2021-06-25 14:50:28.208 | INFO | nndet.preprocessing.crop:process_data:223 - Processing case case_2
2021-06-25 14:50:28.207 | INFO | nndet.preprocessing.crop:process_data:223 - Processing case case_3
2021-06-25 14:50:28.208 | WARNING | nndet.preprocessing.crop:process_data:235 - Case case_4 already exists and overwrite is deactivated
2021-06-25 14:50:28.208 | WARNING | nndet.preprocessing.crop:process_data:235 - Case case_9 already exists and overwrite is deactivated
2021-06-25 14:50:28.208 | WARNING | nndet.preprocessing.crop:process_data:235 - Case case_2 already exists and overwrite is deactivated
2021-06-25 14:50:28.209 | WARNING | nndet.preprocessing.crop:process_data:235 - Case case_3 already exists and overwrite is deactivated
2021-06-25 14:50:28.248 | INFO | scripts.preprocess:check_case:333 - Checking /home/[email protected]/datasets/MedDec/Task000D3_Example/raw_cropped/imagesTr/case_2.npz
2021-06-25 14:50:28.248 | INFO | scripts.preprocess:check_case:333 - Checking /home/[email protected]/datasets/MedDec/Task000D3_Example/raw_cropped/imagesTr/case_0.npz
2021-06-25 14:50:28.248 | INFO | scripts.preprocess:check_case:333 - Checking /home/[email protected]/datasets/MedDec/Task000D3_Example/raw_cropped/imagesTr/case_1.npz
2021-06-25 14:50:28.252 | INFO | scripts.preprocess:check_case:333 - Checking /home/[email protected]/datasets/MedDec/Task000D3_Example/raw_cropped/imagesTr/case_3.npz
2021-06-25 14:50:28.711 | INFO | scripts.preprocess:check_case:333 - Checking /home/[email protected]/datasets/MedDec/Task000D3_Example/raw_cropped/imagesTr/case_4.npz
2021-06-25 14:50:28.724 | INFO | scripts.preprocess:check_case:333 - Checking /home/[email protected]/datasets/MedDec/Task000D3_Example/raw_cropped/imagesTr/case_5.npz
2021-06-25 14:50:28.725 | INFO | scripts.preprocess:check_case:333 - Checking /home/[email protected]/datasets/MedDec/Task000D3_Example/raw_cropped/imagesTr/case_6.npz
2021-06-25 14:50:28.729 | INFO | scripts.preprocess:check_case:333 - Checking /home/[email protected]/datasets/MedDec/Task000D3_Example/raw_cropped/imagesTr/case_7.npz
2021-06-25 14:50:29.168 | INFO | scripts.preprocess:check_case:333 - Checking /home/[email protected]/datasets/MedDec/Task000D3_Example/raw_cropped/imagesTr/case_8.npz
2021-06-25 14:50:29.183 | INFO | scripts.preprocess:check_case:333 - Checking /home/[email protected]/datasets/MedDec/Task000D3_Example/raw_cropped/imagesTr/case_9.npz
2021-06-25 14:50:29.639 | INFO | scripts.preprocess:run_check:311 - Checked 10 cases in /home/[email protected]/datasets/MedDec/Task000D3_Example/raw_cropped/imagesTr
2021-06-25 14:50:29.640 | INFO | scripts.preprocess:run_cropping_and_convert:169 - Crop check successful: Loading check completed
2021-06-25 14:50:29.702 | INFO | nndet.planning.properties.instance:analyze_instances_per_case:153 - Processing instance properties of case case_2
2021-06-25 14:50:29.702 | INFO | nndet.planning.properties.instance:analyze_instances_per_case:153 - Processing instance properties of case case_1
2021-06-25 14:50:29.702 | INFO | nndet.planning.properties.instance:analyze_instances_per_case:153 - Processing instance properties of case case_0
2021-06-25 14:50:29.706 | INFO | nndet.planning.properties.instance:analyze_instances_per_case:153 - Processing instance properties of case case_3
2021-06-25 14:50:30.200 | INFO | nndet.planning.properties.instance:analyze_instances_per_case:153 - Processing instance properties of case case_4
2021-06-25 14:50:30.221 | INFO | nndet.planning.properties.instance:analyze_instances_per_case:153 - Processing instance properties of case case_5
2021-06-25 14:50:30.223 | INFO | nndet.planning.properties.instance:analyze_instances_per_case:153 - Processing instance properties of case case_6
2021-06-25 14:50:30.244 | INFO | nndet.planning.properties.instance:analyze_instances_per_case:153 - Processing instance properties of case case_7
2021-06-25 14:50:30.689 | INFO | nndet.planning.properties.instance:analyze_instances_per_case:153 - Processing instance properties of case case_8
2021-06-25 14:50:30.708 | INFO | nndet.planning.properties.instance:analyze_instances_per_case:153 - Processing instance properties of case case_9
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/[email protected]/.conda/envs/nndet/lib/python3.8/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/home/[email protected]/.conda/envs/nndet/lib/python3.8/multiprocessing/pool.py", line 51, in starmapstar
return list(itertools.starmap(args[0], args[1]))
File "/home/[email protected]/code/public/nnDetection/nndet/planning/properties/instance.py", line 155, in analyze_instances_per_case
props["num_instances"] = count_instances(props, all_classes)
File "/home/[email protected]/code/public/nnDetection/nndet/planning/properties/instance.py", line 176, in count_instances
instance_classes = list(map(int, props["instances"].values()))
KeyError: 'instances'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/[email protected]/.conda/envs/nndet/bin/nndet_prep", line 33, in
sys.exit(load_entry_point('nndet', 'console_scripts', 'nndet_prep')())
File "/home/[email protected]/code/public/nnDetection/nndet/utils/check.py", line 58, in wrapper
return func(*args, **kwargs)
File "/home/[email protected]/code/public/nnDetection/scripts/preprocess.py", line 475, in main
run(OmegaConf.to_container(cfg, resolve=True),
File "/home/[email protected]/code/public/nnDetection/scripts/preprocess.py", line 394, in run
run_dataset_analysis(cropped_output_dir=Path(cfg["host"]["cropped_output_dir"]),
File "/home/[email protected]/code/public/nnDetection/scripts/preprocess.py", line 198, in run_dataset_analysis
_ = analyzer.analyze_dataset(properties)
File "/home/[email protected]/code/public/nnDetection/nndet/planning/analyzer.py", line 80, in analyze_dataset
props.update(property_fn(self))
File "/home/[email protected]/code/public/nnDetection/nndet/planning/properties/instance.py", line 46, in analyze_instances
props_per_case = run_analyze_instances(analyzer, all_classes)
File "/home/[email protected]/code/public/nnDetection/nndet/planning/properties/instance.py", line 77, in run_analyze_instances
props = p.starmap(analyze_instances_per_case, zip(
File "/home/[email protected]/.conda/envs/nndet/lib/python3.8/multiprocessing/pool.py", line 372, in starmap
return self._map_async(func, iterable, starmapstar, chunksize).get()
File "/home/[email protected]/.conda/envs/nndet/lib/python3.8/multiprocessing/pool.py", line 771, in get
raise self._value
KeyError: 'instances'

[Question] Luna prepare.py fails with NotImplementedError

❓ Question

I used the script 'nndet/projects/Task016_Luna/scripts/prepare.py' to generate the training images and labels in the right format, but I hit the error below:

File "prepare.py", line 64, in _create_mask
mask = create_circle_mask_itk(data, centers, rads, ndim=3)
File "/opt/code/nndet/nndet/io/itk.py", line 71, in create_circle_mask_itk
return copy_meta_data_itk(image_itk, mask_itk)
File "/opt/code/nndet/nndet/io/itk.py", line 87, in copy_meta_data_itk
raise NotImplementedError("Does not work!")
NotImplementedError: Does not work!

After running the script, I got the training image data in the right format, but the directory "/opt/data/Task016_Luna/raw_splitted/labelsTr" is empty.

How can I get the label data in the right format? Is there any other way?
I used Docker to install nnDetection.
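For reference, here is a generic SimpleITK sketch of creating an empty mask that carries the source image's geometry (this is not nnDetection's copy_meta_data_itk helper, and the file names are hypothetical):

    import SimpleITK as sitk

    image = sitk.ReadImage("case_0000.nii.gz")          # hypothetical input scan
    mask = sitk.Image(image.GetSize(), sitk.sitkUInt8)  # empty mask on the same voxel grid
    mask.CopyInformation(image)                         # copy spacing, origin, direction
    sitk.WriteImage(mask, "case.nii.gz")                # hypothetical label output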

Train several folds at the same time

Hello, I'm facing a new issue when trying to run a second training on the same PC, inside a container. From what I found online, this is related to the master port chosen by PyTorch Lightning for DDP. Do you know how I can fix this?

INFO Using 1 GPUs for training
INFO Using None plugins for training
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Using native 16bit precision.
INFO Initialize SWA with swa epoch start 49
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/1
Traceback (most recent call last):
File "/opt/conda/bin/nndet_train", line 33, in
sys.exit(load_entry_point('nndet', 'console_scripts', 'nndet_train')())
File "/opt/nnDetection/nndet/utils/check.py", line 58, in wrapper
return func(*args, **kwargs)
File "/opt/nnDetection/scripts/train.py", line 69, in train
_train(
File "/opt/nnDetection/scripts/train.py", line 284, in _train
trainer.fit(module, datamodule=datamodule)
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 458, in fit
self._run(model)
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 712, in _run
self.accelerator.setup_environment()
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 80, in setup_environment
self.training_type_plugin.setup_environment()
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 112, in setup_environment
self.setup_distributed()
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 198, in setup_distributed
self.init_ddp_connection()
File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 272, in init_ddp_connection
torch_distrib.init_process_group(self.torch_distributed_backend, rank=global_rank, world_size=world_size)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 461, in init_process_group
store, rank, world_size = next(rendezvous_iterator)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/rendezvous.py", line 179, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, world_size, start_daemon, timeout)
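The traceback above is cut off, but a TCPStore failure at this point is usually an "address already in use" clash: both runs try to open the same DDP rendezvous port. PyTorch's env:// rendezvous reads MASTER_PORT from the environment, so a hedged workaround is to give each concurrent run its own free port before the trainer initializes DDP, for example:

    import os

    # Assumed workaround (plain PyTorch mechanics, not an nnDetection switch):
    # pin a distinct rendezvous port per concurrent run; 29501 is arbitrary.
    os.environ["MASTER_PORT"] = "29501"

Equivalently, exporting MASTER_PORT to a different free port in the shell of the second container before calling nndet_train should avoid the clash.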

Nvidia cuda vs Conda cuda

Hi, why does this library require CUDA to be installed on the machine instead of using the CUDA toolkit installed via Anaconda?
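For context, a general note rather than anything nnDetection-specific: the conda cudatoolkit package ships only the CUDA runtime libraries, while compiling custom CUDA ops requires the full toolkit including the nvcc compiler. A quick way to see which toolkit the extension builder would pick up:

    import torch.utils.cpp_extension as ext

    print(ext.CUDA_HOME)  # path to the CUDA toolkit used for building, or None if nvcc is not found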

Segmentation fault

Hello, I ran into this when trying to run training on the generated examples:

$nndet_train 000 --sweep

/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/metrics/__init__.py:43: LightningDeprecationWarning: pytorch_lightning.metrics.* module has been renamed to torchmetrics.* and split off to its own package (https://github.com/PyTorchLightning/metrics) since v1.3 and will be removed in v1.5
rank_zero_deprecation(

Please cite the following paper when using nnUNet:

Isensee, F., Jaeger, P.F., Kohl, S.A.A. et al. "nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation." Nat Methods (2020). https://doi.org/10.1038/s41592-020-01008-z

If you have questions or suggestions, feel free to open an issue at https://github.com/MIC-DKFZ/nnUNet

'det_verbose' environment variable not set. Continue in verbose mode.
Overwrites: None
/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/hydra/experimental/initialize.py:68: UserWarning: hydra.experimental.initialize_config_module() is no longer experimental. Use hydra.initialize_config_module().
warnings.warn(
/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/hydra/experimental/compose.py:16: UserWarning: hydra.experimental.compose() is no longer experimental. Use hydra.compose()
warnings.warn(
Found existing folder /home/vankhoa/datasets/MedDec/3ddet/model/Task000D3_Example/RetinaUNetV001_D3V001_3d/fold0, this run will overwrite the results inside that folder
INFO Log file at /home/vankhoa/datasets/MedDec/3ddet/model/Task000D3_Example/RetinaUNetV001_D3V001_3d/fold0/train.log
INFO Using splits /home/vankhoa/datasets/MedDec/Task000D3_Example/preprocessed/splits_final.pkl with fold 0
INFO Architecture overwrites: {} Anchor overwrites: {}
INFO Building architecture according to plan of RetinaUNetV001
INFO Start channels: 32; head channels: 128; fpn channels: 128
INFO Discarding anchor generator kwargs {'stride': 1}
INFO Building:: encoder Encoder: {}
INFO Building:: decoder UFPNModular: {'min_out_channels': 8, 'upsampling_mode': 'transpose', 'num_lateral': 1, 'norm_lateral': False, 'activation_lateral': False, 'num_out': 1, 'norm_out': False, 'activation_out': False}
INFO Running ATSS Matching with num_candidates=4 and center_in_gt False.
INFO Building:: classifier BCECLassifier: {'num_convs': 1, 'norm_channels_per_group': 16, 'norm_affine': True, 'reduction': 'mean', 'loss_weight': 1.0, 'prior_prob': 0.01}
INFO Init classifier weights: prior prob 0.01
INFO Building:: regressor GIoURegressor: {'num_convs': 1, 'norm_channels_per_group': 16, 'norm_affine': True, 'reduction': 'sum', 'loss_weight': 1.0, 'learn_scale': True}
INFO Learning level specific scalar in regressor
INFO Overwriting regressor conv weight init
INFO Building:: head DetectionHeadHNMNative: {} sampler HardNegativeSamplerBatched: {'batch_size_per_image': 32, 'positive_fraction': 0.33, 'pool_size': 20, 'min_neg': 1}
INFO Sampling hard negatives on a per batch basis
INFO Building:: segmenter DiCESegmenterFgBg {'dice_kwargs': {'batch_dice': True}}
INFO Running batch dice True and do bg False in dice loss.
INFO Model Inference Summary:
detections_per_img: 100
score_thresh: 0
topk_candidates: 10000
remove_small_boxes: 0.01
nms_thresh: 0.6
/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py:360: UserWarning: Checkpoint directory /home/vankhoa/datasets/MedDec/3ddet/model/Task000D3_Example/RetinaUNetV001_D3V001_3d/fold0 exists and is not empty.
rank_zero_warn(f"Checkpoint directory {dirpath} exists and is not empty.")
INFO Using 1 GPUs for training
INFO Using None plugins for training
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Using native 16bit precision.
INFO Initialize SWA with swa epoch start 49
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/1
INFO Augmentation: BaseMoreAug transforms and base_more params
INFO Loading network patch size [96 80 80] and generator patch size [175, 156, 167]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
INFO Running: initial_lr 0.01 weight_decay 3e-05 SGD with momentum 0.9 and nesterov True

| Name                                                 | Type                    | Params | In sizes                                                                                                    | Out sizes                                                                                                 

0 | model | BaseRetinaNet | 13.5 M | [1, 1, 96, 80, 80] | ['?', [[2369250, 6]], '?']
1 | model.encoder | Encoder | 8.5 M | [1, 1, 96, 80, 80] | [[1, 32, 96, 80, 80], [1, 64, 48, 40, 40], [1, 128, 24, 20, 20], [1, 256, 12, 10, 10], [1, 320, 6, 5, 5]]
2 | model.encoder.stages | ModuleList | 8.5 M | ? | ?
3 | model.encoder.stages.0 | StackedConvBlock2 | 28.6 K | [1, 1, 96, 80, 80] | [1, 32, 96, 80, 80]
4 | model.encoder.stages.0.convs | Sequential | 28.6 K | [1, 1, 96, 80, 80] | [1, 32, 96, 80, 80]
5 | model.encoder.stages.0.convs.0 | Sequential | 28.6 K | [1, 1, 96, 80, 80] | [1, 32, 96, 80, 80]
6 | model.encoder.stages.0.convs.0.0 | ConvInstanceRelu | 928 | [1, 1, 96, 80, 80] | [1, 32, 96, 80, 80]
7 | model.encoder.stages.0.convs.0.0.conv | Conv3d | 864 | [1, 1, 96, 80, 80] | [1, 32, 96, 80, 80]
8 | model.encoder.stages.0.convs.0.0.norm | InstanceNorm3d | 64 | [1, 32, 96, 80, 80] | [1, 32, 96, 80, 80]
9 | model.encoder.stages.0.convs.0.0.act | ReLU | 0 | [1, 32, 96, 80, 80] | [1, 32, 96, 80, 80]
10 | model.encoder.stages.0.convs.0.1 | ConvInstanceRelu | 27.7 K | [1, 32, 96, 80, 80] | [1, 32, 96, 80, 80]
11 | model.encoder.stages.0.convs.0.1.conv | Conv3d | 27.6 K | [1, 32, 96, 80, 80] | [1, 32, 96, 80, 80]
12 | model.encoder.stages.0.convs.0.1.norm | InstanceNorm3d | 64 | [1, 32, 96, 80, 80] | [1, 32, 96, 80, 80]
13 | model.encoder.stages.0.convs.0.1.act | ReLU | 0 | [1, 32, 96, 80, 80] | [1, 32, 96, 80, 80]
14 | model.encoder.stages.1 | StackedConvBlock2 | 166 K | [1, 32, 96, 80, 80] | [1, 64, 48, 40, 40]
15 | model.encoder.stages.1.convs | Sequential | 166 K | [1, 32, 96, 80, 80] | [1, 64, 48, 40, 40]
16 | model.encoder.stages.1.convs.0 | Sequential | 166 K | [1, 32, 96, 80, 80] | [1, 64, 48, 40, 40]
17 | model.encoder.stages.1.convs.0.0 | ConvInstanceRelu | 55.4 K | [1, 32, 96, 80, 80] | [1, 64, 48, 40, 40]
18 | model.encoder.stages.1.convs.0.0.conv | Conv3d | 55.3 K | [1, 32, 96, 80, 80] | [1, 64, 48, 40, 40]
19 | model.encoder.stages.1.convs.0.0.norm | InstanceNorm3d | 128 | [1, 64, 48, 40, 40] | [1, 64, 48, 40, 40]
20 | model.encoder.stages.1.convs.0.0.act | ReLU | 0 | [1, 64, 48, 40, 40] | [1, 64, 48, 40, 40]
21 | model.encoder.stages.1.convs.0.1 | ConvInstanceRelu | 110 K | [1, 64, 48, 40, 40] | [1, 64, 48, 40, 40]
22 | model.encoder.stages.1.convs.0.1.conv | Conv3d | 110 K | [1, 64, 48, 40, 40] | [1, 64, 48, 40, 40]
23 | model.encoder.stages.1.convs.0.1.norm | InstanceNorm3d | 128 | [1, 64, 48, 40, 40] | [1, 64, 48, 40, 40]
24 | model.encoder.stages.1.convs.0.1.act | ReLU | 0 | [1, 64, 48, 40, 40] | [1, 64, 48, 40, 40]
25 | model.encoder.stages.2 | StackedConvBlock2 | 664 K | [1, 64, 48, 40, 40] | [1, 128, 24, 20, 20]
26 | model.encoder.stages.2.convs | Sequential | 664 K | [1, 64, 48, 40, 40] | [1, 128, 24, 20, 20]
27 | model.encoder.stages.2.convs.0 | Sequential | 664 K | [1, 64, 48, 40, 40] | [1, 128, 24, 20, 20]
28 | model.encoder.stages.2.convs.0.0 | ConvInstanceRelu | 221 K | [1, 64, 48, 40, 40] | [1, 128, 24, 20, 20]
29 | model.encoder.stages.2.convs.0.0.conv | Conv3d | 221 K | [1, 64, 48, 40, 40] | [1, 128, 24, 20, 20]
30 | model.encoder.stages.2.convs.0.0.norm | InstanceNorm3d | 256 | [1, 128, 24, 20, 20] | [1, 128, 24, 20, 20]
31 | model.encoder.stages.2.convs.0.0.act | ReLU | 0 | [1, 128, 24, 20, 20] | [1, 128, 24, 20, 20]
32 | model.encoder.stages.2.convs.0.1 | ConvInstanceRelu | 442 K | [1, 128, 24, 20, 20] | [1, 128, 24, 20, 20]
33 | model.encoder.stages.2.convs.0.1.conv | Conv3d | 442 K | [1, 128, 24, 20, 20] | [1, 128, 24, 20, 20]
34 | model.encoder.stages.2.convs.0.1.norm | InstanceNorm3d | 256 | [1, 128, 24, 20, 20] | [1, 128, 24, 20, 20]
35 | model.encoder.stages.2.convs.0.1.act | ReLU | 0 | [1, 128, 24, 20, 20] | [1, 128, 24, 20, 20]
36 | model.encoder.stages.3 | StackedConvBlock2 | 2.7 M | [1, 128, 24, 20, 20] | [1, 256, 12, 10, 10]
37 | model.encoder.stages.3.convs | Sequential | 2.7 M | [1, 128, 24, 20, 20] | [1, 256, 12, 10, 10]
38 | model.encoder.stages.3.convs.0 | Sequential | 2.7 M | [1, 128, 24, 20, 20] | [1, 256, 12, 10, 10]
39 | model.encoder.stages.3.convs.0.0 | ConvInstanceRelu | 885 K | [1, 128, 24, 20, 20] | [1, 256, 12, 10, 10]
40 | model.encoder.stages.3.convs.0.0.conv | Conv3d | 884 K | [1, 128, 24, 20, 20] | [1, 256, 12, 10, 10]
41 | model.encoder.stages.3.convs.0.0.norm | InstanceNorm3d | 512 | [1, 256, 12, 10, 10] | [1, 256, 12, 10, 10]
42 | model.encoder.stages.3.convs.0.0.act | ReLU | 0 | [1, 256, 12, 10, 10] | [1, 256, 12, 10, 10]
43 | model.encoder.stages.3.convs.0.1 | ConvInstanceRelu | 1.8 M | [1, 256, 12, 10, 10] | [1, 256, 12, 10, 10]
44 | model.encoder.stages.3.convs.0.1.conv | Conv3d | 1.8 M | [1, 256, 12, 10, 10] | [1, 256, 12, 10, 10]
45 | model.encoder.stages.3.convs.0.1.norm | InstanceNorm3d | 512 | [1, 256, 12, 10, 10] | [1, 256, 12, 10, 10]
46 | model.encoder.stages.3.convs.0.1.act | ReLU | 0 | [1, 256, 12, 10, 10] | [1, 256, 12, 10, 10]
47 | model.encoder.stages.4 | StackedConvBlock2 | 5.0 M | [1, 256, 12, 10, 10] | [1, 320, 6, 5, 5]
48 | model.encoder.stages.4.convs | Sequential | 5.0 M | [1, 256, 12, 10, 10] | [1, 320, 6, 5, 5]
49 | model.encoder.stages.4.convs.0 | Sequential | 5.0 M | [1, 256, 12, 10, 10] | [1, 320, 6, 5, 5]
50 | model.encoder.stages.4.convs.0.0 | ConvInstanceRelu | 2.2 M | [1, 256, 12, 10, 10] | [1, 320, 6, 5, 5]
51 | model.encoder.stages.4.convs.0.0.conv | Conv3d | 2.2 M | [1, 256, 12, 10, 10] | [1, 320, 6, 5, 5]
52 | model.encoder.stages.4.convs.0.0.norm | InstanceNorm3d | 640 | [1, 320, 6, 5, 5] | [1, 320, 6, 5, 5]
53 | model.encoder.stages.4.convs.0.0.act | ReLU | 0 | [1, 320, 6, 5, 5] | [1, 320, 6, 5, 5]
54 | model.encoder.stages.4.convs.0.1 | ConvInstanceRelu | 2.8 M | [1, 320, 6, 5, 5] | [1, 320, 6, 5, 5]
55 | model.encoder.stages.4.convs.0.1.conv | Conv3d | 2.8 M | [1, 320, 6, 5, 5] | [1, 320, 6, 5, 5]
56 | model.encoder.stages.4.convs.0.1.norm | InstanceNorm3d | 640 | [1, 320, 6, 5, 5] | [1, 320, 6, 5, 5]
57 | model.encoder.stages.4.convs.0.1.act | ReLU | 0 | [1, 320, 6, 5, 5] | [1, 320, 6, 5, 5]
58 | model.decoder | UFPNModular | 2.4 M | [[1, 32, 96, 80, 80], [1, 64, 48, 40, 40], [1, 128, 24, 20, 20], [1, 256, 12, 10, 10], [1, 320, 6, 5, 5]] | [[1, 64, 96, 80, 80], [1, 128, 48, 40, 40], [1, 128, 24, 20, 20], [1, 128, 12, 10, 10], [1, 128, 6, 5, 5]]
59 | model.decoder.lateral | ModuleDict | 100 K | ? | ?
60 | model.decoder.lateral.P0 | Sequential | 2.1 K | [1, 32, 96, 80, 80] | [1, 64, 96, 80, 80]
61 | model.decoder.lateral.P0.0 | ConvInstanceRelu | 2.1 K | [1, 32, 96, 80, 80] | [1, 64, 96, 80, 80]
62 | model.decoder.lateral.P0.0.conv | Conv3d | 2.1 K | [1, 32, 96, 80, 80] | [1, 64, 96, 80, 80]
63 | model.decoder.lateral.P1 | Sequential | 8.3 K | [1, 64, 48, 40, 40] | [1, 128, 48, 40, 40]
64 | model.decoder.lateral.P1.0 | ConvInstanceRelu | 8.3 K | [1, 64, 48, 40, 40] | [1, 128, 48, 40, 40]
65 | model.decoder.lateral.P1.0.conv | Conv3d | 8.3 K | [1, 64, 48, 40, 40] | [1, 128, 48, 40, 40]
66 | model.decoder.lateral.P2 | Sequential | 16.5 K | [1, 128, 24, 20, 20] | [1, 128, 24, 20, 20]
67 | model.decoder.lateral.P2.0 | ConvInstanceRelu | 16.5 K | [1, 128, 24, 20, 20] | [1, 128, 24, 20, 20]
68 | model.decoder.lateral.P2.0.conv | Conv3d | 16.5 K | [1, 128, 24, 20, 20] | [1, 128, 24, 20, 20]
69 | model.decoder.lateral.P3 | Sequential | 32.9 K | [1, 256, 12, 10, 10] | [1, 128, 12, 10, 10]
70 | model.decoder.lateral.P3.0 | ConvInstanceRelu | 32.9 K | [1, 256, 12, 10, 10] | [1, 128, 12, 10, 10]
71 | model.decoder.lateral.P3.0.conv | Conv3d | 32.9 K | [1, 256, 12, 10, 10] | [1, 128, 12, 10, 10]
72 | model.decoder.lateral.P4 | Sequential | 41.1 K | [1, 320, 6, 5, 5] | [1, 128, 6, 5, 5]
73 | model.decoder.lateral.P4.0 | ConvInstanceRelu | 41.1 K | [1, 320, 6, 5, 5] | [1, 128, 6, 5, 5]
74 | model.decoder.lateral.P4.0.conv | Conv3d | 41.1 K | [1, 320, 6, 5, 5] | [1, 128, 6, 5, 5]
75 | model.decoder.out | ModuleDict | 1.9 M | ? | ?
76 | model.decoder.out.P0 | Sequential | 110 K | [1, 64, 96, 80, 80] | [1, 64, 96, 80, 80]
77 | model.decoder.out.P0.0 | ConvInstanceRelu | 110 K | [1, 64, 96, 80, 80] | [1, 64, 96, 80, 80]
78 | model.decoder.out.P0.0.conv | Conv3d | 110 K | [1, 64, 96, 80, 80] | [1, 64, 96, 80, 80]
79 | model.decoder.out.P1 | Sequential | 442 K | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
80 | model.decoder.out.P1.0 | ConvInstanceRelu | 442 K | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
81 | model.decoder.out.P1.0.conv | Conv3d | 442 K | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
82 | model.decoder.out.P2 | Sequential | 442 K | [1, 128, 24, 20, 20] | [1, 128, 24, 20, 20]
83 | model.decoder.out.P2.0 | ConvInstanceRelu | 442 K | [1, 128, 24, 20, 20] | [1, 128, 24, 20, 20]
84 | model.decoder.out.P2.0.conv | Conv3d | 442 K | [1, 128, 24, 20, 20] | [1, 128, 24, 20, 20]
85 | model.decoder.out.P3 | Sequential | 442 K | [1, 128, 12, 10, 10] | [1, 128, 12, 10, 10]
86 | model.decoder.out.P3.0 | ConvInstanceRelu | 442 K | [1, 128, 12, 10, 10] | [1, 128, 12, 10, 10]
87 | model.decoder.out.P3.0.conv | Conv3d | 442 K | [1, 128, 12, 10, 10] | [1, 128, 12, 10, 10]
88 | model.decoder.out.P4 | Sequential | 442 K | [1, 128, 6, 5, 5] | [1, 128, 6, 5, 5]
89 | model.decoder.out.P4.0 | ConvInstanceRelu | 442 K | [1, 128, 6, 5, 5] | [1, 128, 6, 5, 5]
90 | model.decoder.out.P4.0.conv | Conv3d | 442 K | [1, 128, 6, 5, 5] | [1, 128, 6, 5, 5]
91 | model.decoder.up | ModuleDict | 459 K | ? | ?
92 | model.decoder.up.P1 | ConvInstanceRelu | 65.6 K | [1, 128, 48, 40, 40] | [1, 64, 96, 80, 80]
93 | model.decoder.up.P1.conv | ConvTranspose3d | 65.6 K | [1, 128, 48, 40, 40] | [1, 64, 96, 80, 80]
94 | model.decoder.up.P2 | ConvInstanceRelu | 131 K | [1, 128, 24, 20, 20] | [1, 128, 48, 40, 40]
95 | model.decoder.up.P2.conv | ConvTranspose3d | 131 K | [1, 128, 24, 20, 20] | [1, 128, 48, 40, 40]
96 | model.decoder.up.P3 | ConvInstanceRelu | 131 K | [1, 128, 12, 10, 10] | [1, 128, 24, 20, 20]
97 | model.decoder.up.P3.conv | ConvTranspose3d | 131 K | [1, 128, 12, 10, 10] | [1, 128, 24, 20, 20]
98 | model.decoder.up.P4 | ConvInstanceRelu | 131 K | [1, 128, 6, 5, 5] | [1, 128, 12, 10, 10]
99 | model.decoder.up.P4.conv | ConvTranspose3d | 131 K | [1, 128, 6, 5, 5] | [1, 128, 12, 10, 10]
100 | model.head | DetectionHeadHNMNative | 2.5 M | [[1, 128, 48, 40, 40], [1, 128, 24, 20, 20], [1, 128, 12, 10, 10], [1, 128, 6, 5, 5]] | ?
101 | model.head.classifier | BCECLassifier | 1.1 M | [1, 128, 48, 40, 40] | [1, 2073600, 2]
102 | model.head.classifier.conv_internal | Sequential | 885 K | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
103 | model.head.classifier.conv_internal.c_in | ConvGroupRelu | 442 K | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
104 | model.head.classifier.conv_internal.c_in.conv | Conv3d | 442 K | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
105 | model.head.classifier.conv_internal.c_in.norm | GroupNorm | 256 | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
106 | model.head.classifier.conv_internal.c_in.act | ReLU | 0 | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
107 | model.head.classifier.conv_internal.c_internal0 | ConvGroupRelu | 442 K | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
108 | model.head.classifier.conv_internal.c_internal0.conv | Conv3d | 442 K | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
109 | model.head.classifier.conv_internal.c_internal0.norm | GroupNorm | 256 | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
110 | model.head.classifier.conv_internal.c_internal0.act | ReLU | 0 | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
111 | model.head.classifier.conv_out | ConvGroupRelu | 186 K | [1, 128, 48, 40, 40] | [1, 54, 48, 40, 40]
112 | model.head.classifier.conv_out.conv | Conv3d | 186 K | [1, 128, 48, 40, 40] | [1, 54, 48, 40, 40]
113 | model.head.classifier.loss | BCEWithLogitsLossOneHot | 0 | ? | ?
114 | model.head.classifier.logits_convert_fn | Sigmoid | 0 | ? | ?
115 | model.head.regressor | GIoURegressor | 1.4 M | [1, 128, 48, 40, 40] | [1, 2073600, 6]
116 | model.head.regressor.conv_internal | Sequential | 885 K | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
117 | model.head.regressor.conv_internal.c_in | ConvGroupRelu | 442 K | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
118 | model.head.regressor.conv_internal.c_in.conv | Conv3d | 442 K | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
119 | model.head.regressor.conv_internal.c_in.norm | GroupNorm | 256 | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
120 | model.head.regressor.conv_internal.c_in.act | ReLU | 0 | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
121 | model.head.regressor.conv_internal.c_internal0 | ConvGroupRelu | 442 K | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
122 | model.head.regressor.conv_internal.c_internal0.conv | Conv3d | 442 K | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
123 | model.head.regressor.conv_internal.c_internal0.norm | GroupNorm | 256 | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
124 | model.head.regressor.conv_internal.c_internal0.act | ReLU | 0 | [1, 128, 48, 40, 40] | [1, 128, 48, 40, 40]
125 | model.head.regressor.conv_out | ConvGroupRelu | 560 K | [1, 128, 48, 40, 40] | [1, 162, 48, 40, 40]
126 | model.head.regressor.conv_out.conv | Conv3d | 560 K | [1, 128, 48, 40, 40] | [1, 162, 48, 40, 40]
127 | model.head.regressor.scales | ModuleList | 4 | ? | ?
128 | model.head.regressor.scales.0 | Scale | 1 | [1, 162, 48, 40, 40] | [1, 162, 48, 40, 40]
129 | model.head.regressor.scales.1 | Scale | 1 | [1, 162, 24, 20, 20] | [1, 162, 24, 20, 20]
130 | model.head.regressor.scales.2 | Scale | 1 | [1, 162, 12, 10, 10] | [1, 162, 12, 10, 10]
131 | model.head.regressor.scales.3 | Scale | 1 | [1, 162, 6, 5, 5] | [1, 162, 6, 5, 5]
132 | model.head.regressor.loss | GIoULoss | 0 | ? | ?
133 | model.anchor_generator | AnchorGenerator3DS | 0 | [[1, 1, 96, 80, 80], [[1, 128, 48, 40, 40], [1, 128, 24, 20, 20], [1, 128, 12, 10, 10], [1, 128, 6, 5, 5]]] | [[2369250, 6]]
134 | model.segmenter | DiCESegmenterFgBg | 130 | [[1, 64, 96, 80, 80], [1, 128, 48, 40, 40], [1, 128, 24, 20, 20], [1, 128, 12, 10, 10], [1, 128, 6, 5, 5]] | ?
135 | model.segmenter.conv_out | ConvInstanceRelu | 130 | [1, 64, 96, 80, 80] | [1, 2, 96, 80, 80]
136 | model.segmenter.conv_out.conv | Conv3d | 130 | [1, 64, 96, 80, 80] | [1, 2, 96, 80, 80]
137 | model.segmenter.dice_loss | SoftDiceLoss | 0 | ? | ?
138 | model.segmenter.dice_loss.nonlin | Softmax | 0 | ? | ?
139 | model.segmenter.ce_loss | CrossEntropyLoss | 0 | ? | ?
140 | model.segmenter.logits_convert_fn | Softmax | 0 | ? | ?
141 | pre_trafo | Compose | 0 | ? | ?
142 | pre_trafo.transforms | ModuleList | 0 | ? | ?
143 | pre_trafo.transforms.0 | FindInstances | 0 | ? | ?
144 | pre_trafo.transforms.1 | Instances2Boxes | 0 | ? | ?
145 | pre_trafo.transforms.2 | Instances2Segmentation | 0 | ? | ?

13.5 M Trainable params
0 Non-trainable params
13.5 M Total params
53.800 Total estimated model params size (MB)
Validation sanity check: 0it [00:00, ?it/s]INFO Using validation DataLoader3DOffset with {}
INFO Building Sampling Cache for Dataloder
Sampling Cache: 100%|█████████████████████████████| 2/2 [00:00<00:00, 41.66it/s]
INFO Using 5 num_processes and 2 num_cached_per_queue for augmentation.?, ?it/s]
INFO VALIDATION KEYS:
odict_keys(['case_0', 'case_7'])
Validation sanity check: 0%| | 0/10 [00:00<?, ?it/s]using pin_memory on device 0
Traceback (most recent call last):
File "/home/vankhoa/anaconda3/envs/nndet/bin/nndet_train", line 33, in
sys.exit(load_entry_point('nndet', 'console_scripts', 'nndet_train')())
File "/home/vankhoa/code/Median/public/nnDetection/nndet/utils/check.py", line 58, in wrapper
return func(*args, **kwargs)
File "/home/vankhoa/code/Median/public/nnDetection/scripts/train.py", line 69, in train
_train(
File "/home/vankhoa/code/Median/public/nnDetection/scripts/train.py", line 284, in _train
trainer.fit(module, datamodule=datamodule)
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 458, in fit
self._run(model)
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 756, in _run
self.dispatch()
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 797, in dispatch
self.accelerator.start_training(self)
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
self._results = trainer.run_stage()
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 807, in run_stage
return self.run_train()
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 842, in run_train
self.run_sanity_check(self.lightning_module)
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1107, in run_sanity_check
self.run_evaluation()
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 962, in run_evaluation
output = self.evaluation_loop.evaluation_step(batch, batch_idx, dataloader_idx)
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 174, in evaluation_step
output = self.trainer.accelerator.validation_step(args)
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 226, in validation_step
return self.training_type_plugin.validation_step(*args)
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 326, in validation_step
return self.model(*args, **kwargs)
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/pytorch_lightning/overrides/base.py", line 57, in forward
output = self.module.validation_step(*inputs, **kwargs)
File "/home/vankhoa/code/Median/public/nnDetection/nndet/ptmodule/retinaunet/base.py", line 172, in validation_step
losses, prediction = self.model.train_step(
File "/home/vankhoa/code/Median/public/nnDetection/nndet/core/retina.py", line 137, in train_step
prediction = self.postprocess_for_inference(
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/home/vankhoa/code/Median/public/nnDetection/nndet/core/retina.py", line 178, in postprocess_for_inference
boxes, probs, labels = self.postprocess_detections(
File "/home/vankhoa/code/Median/public/nnDetection/nndet/core/retina.py", line 317, in postprocess_detections
boxes, probs, labels = self.postprocess_detections_single_image(boxes, probs, image_shape)
File "/home/vankhoa/code/Median/public/nnDetection/nndet/core/retina.py", line 366, in postprocess_detections_single_image
keep = box_utils.batched_nms(boxes, probs, labels, self.nms_thresh)
File "/home/vankhoa/code/Median/public/nnDetection/nndet/core/boxes/nms.py", line 101, in batched_nms
return nms(boxes_for_nms, scores, iou_threshold)
File "/home/vankhoa/anaconda3/envs/nndet/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 135, in decorate_autocast
return func(*args, **kwargs)
File "/home/vankhoa/code/Median/public/nnDetection/nndet/core/boxes/nms.py", line 73, in nms
return nms_fn(boxes.float(), scores.float(), iou_threshold)
RuntimeError: CUDA error: invalid device function
Segmentation fault (core dumped)
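A "CUDA error: invalid device function" raised inside a compiled op, such as the NMS above, usually means the extension binaries do not cover the GPU's compute capability. A quick check with plain PyTorch (a sketch, nothing nnDetection-specific):

    import torch

    print(torch.cuda.get_device_capability())  # e.g. (7, 5) for an RTX 2080 Ti
    print(torch.cuda.get_arch_list())          # architectures the installed binaries were built for

If the device capability is missing from the arch list, rebuilding PyTorch or the compiled extensions with a matching TORCH_CUDA_ARCH_LIST is a plausible fix.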

[Question] Displaying results not working correctly

Hello,
when using nndet_boxes2nii
nndet_boxes2nii 010 RetinaUNetV001_D3V001_3d

I get a directory with nii.gz and JSON files, but when displaying them, all I get is an all-black NIfTI file. The JSON files seem to be correct.
Example:
{ "1": { "score": 0.5186675190925598, "label": 0, "box": [ 16, 199, 24, 245, 269, 322 ] }, "2": { "score": 0.9923214912414551, "label": 0, "box": [ 27, 272, 43, 360, 297, 396 ] } }
Should the result files be displayed differently from the input NIfTI images?
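The exported NIfTI is presumably a label map whose voxel values are small integers (0 for background, then one label per predicted box), so under a typical image intensity window it renders black. A small sketch to verify the boxes are actually present (the file name is hypothetical):

    import SimpleITK as sitk
    import numpy as np

    mask = sitk.GetArrayFromImage(sitk.ReadImage("case_boxes.nii.gz"))
    print(np.unique(mask))  # e.g. [0 1 2]: background plus the two predicted boxes

If the unique values look right, loading the file as a segmentation overlay on top of the input image in a viewer such as ITK-SNAP or MITK, rather than as a plain image, should make the boxes visible.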

CUDNN_STATUS_INTERNAL_ERROR during backward

Hi. This is an awesome repo.
I tried to run training on RibFrac using nnDetection and ran into the following issue:
training ran smoothly for 6 epochs, then suddenly broke and a cuDNN error was thrown.
I rebooted the machine and tried to rerun, but got the same error:

'
File "/home/whos/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 499, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
File "/home/whos/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 738, in run_training_batch
self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
File "/home/whos/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 434, in optimizer_step
model_ref.optimizer_step(
File "/home/whos/miniconda3/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1403, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/home/whos/miniconda3/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 214, in step
self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)
File "/home/whos/miniconda3/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 134, in __optimizer_step
trainer.accelerator.optimizer_step(optimizer, self._optimizer_idx, lambda_closure=closure, **kwargs)
File "/home/whos/miniconda3/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 325, in optimizer_step
make_optimizer_step = self.precision_plugin.pre_optimizer_step(
File "/home/whose/miniconda3/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/native_amp.py", line 93, in pre_optimizer_step
result = lambda_closure()
File "/home/whos/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 732, in train_step_and_backward_closure
result = self.training_step_and_backward(
File "/home/whos/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 836, in training_step_and_backward
self.backward(result, optimizer, opt_idx)
File "/home/whos/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 869, in backward
result.closure_loss = self.trainer.accelerator.backward(
File "/home/whos/miniconda3/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 308, in backward
output = self.precision_plugin.backward(
File "/home/whos/miniconda3/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/native_amp.py", line 62, in backward
closure_loss = super().backward(model, closure_loss, optimizer, opt_idx, should_accumulate, *args, **kwargs)
File "/home/whos/miniconda3/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 79, in backward
model.backward(closure_loss, optimizer, opt_idx)
File "/home/whos/miniconda3/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1275, in backward
loss.backward(*args, **kwargs)
File "/home/whos/miniconda3/lib/python3.8/site-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/whos/miniconda3/lib/python3.8/site-packages/torch/autograd/init.py", line 130, in backward
Variable._execution_engine.run_backward(
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
You can try to repro this exception using the following code snippet. If that doesn't trigger the error, please include your original repro script when reporting this issue.

import torch
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.allow_tf32 = True
data = torch.randn([4, 32, 112, 192, 160], dtype=torch.half, device='cuda', requires_grad=True)
net = torch.nn.Conv3d(32, 32, kernel_size=[3, 3, 3], padding=[1, 1, 1], stride=[1, 1, 1], dilation=[1, 1, 1], groups=1)
net = net.cuda().half()
out = net(data)
out.backward(torch.randn_like(out))
torch.cuda.synchronize()

ConvolutionParams
data_type = CUDNN_DATA_HALF
padding = [1, 1, 1]
stride = [1, 1, 1]
dilation = [1, 1, 1]
groups = 1
deterministic = false
allow_tf32 = true
input: TensorDescriptor 0x7fc3d00a8de0
type = CUDNN_DATA_HALF
nbDims = 5
dimA = 4, 32, 112, 192, 160,
strideA = 110100480, 3440640, 30720, 160, 1,
output: TensorDescriptor 0x7fc3d00aa920
type = CUDNN_DATA_HALF
nbDims = 5
dimA = 4, 32, 112, 192, 160,
strideA = 110100480, 3440640, 30720, 160, 1,
weight: FilterDescriptor 0x7fc3d00ab420
type = CUDNN_DATA_HALF
tensor_format = CUDNN_TENSOR_NCHW
nbDims = 5
dimA = 32, 32, 3, 3, 3,
Pointer addresses:
input: 0x7fc5de000000
output: 0x7fc614000000
weight: 0x7fc4597c9c00
Additional pointer addresses:
grad_output: 0x7fc614000000
grad_input: 0x7fc5de000000
Backward data algorithm: 3

ConvolutionParams
data_type = CUDNN_DATA_HALF
padding = [1, 1, 1]
stride = [1, 1, 1]
dilation = [1, 1, 1]
groups = 1
deterministic = false
allow_tf32 = true
input: TensorDescriptor 0x7fc3d00a8de0
type = CUDNN_DATA_HALF
nbDims = 5
dimA = 4, 32, 112, 192, 160,
strideA = 110100480, 3440640, 30720, 160, 1,
output: TensorDescriptor 0x7fc3d00aa920
type = CUDNN_DATA_HALF
nbDims = 5
dimA = 4, 32, 112, 192, 160,
strideA = 110100480, 3440640, 30720, 160, 1,
weight: FilterDescriptor 0x7fc3d00ab420
type = CUDNN_DATA_HALF
tensor_format = CUDNN_TENSOR_NCHW
nbDims = 5
dimA = 32, 32, 3, 3, 3,
Pointer addresses:
input: 0x7fc5de000000
output: 0x7fc614000000
weight: 0x7fc4597c9c00
Additional pointer addresses:
grad_output: 0x7fc614000000
grad_input: 0x7fc5de000000
Backward data algorithm: 3

Exception ignored in: <function tqdm.__del__ at 0x7fc77f90a3a0>
Traceback (most recent call last):
File "/home/whos/miniconda3/lib/python3.8/site-packages/tqdm/std.py", line 1128, in __del__
File "/home/whos/miniconda3/lib/python3.8/site-packages/tqdm/std.py", line 1341, in close
File "/home/whos/miniconda3/lib/python3.8/site-packages/tqdm/std.py", line 1520, in display
File "/home/whos/miniconda3/lib/python3.8/site-packages/tqdm/std.py", line 1131, in __repr__
File "/home/whos/miniconda3/lib/python3.8/site-packages/tqdm/std.py", line 1481, in format_dict
TypeError: cannot unpack non-iterable NoneType object
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: an illegal memory access was encountered
Exception raised from create_event_internal at /pytorch/c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7fc78556f8b2 in /home/whos/miniconda3/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xad2 (0x7fc7857c1952 in /home/whos/miniconda3/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7fc78555ab7d in /home/whos/miniconda3/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #3: std::vector<c10d::Reducer::Bucket, std::allocatorc10d::Reducer::Bucket >::~vector() + 0x312 (0x7fc821161912 in /home/whos/miniconda3/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #4: c10d::Reducer::~Reducer() + 0x342 (0x7fc8211601f2 in /home/whos/miniconda3/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #5: std::_Sp_counted_ptr<c10d::Reducer*, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x12 (0x7fc8211340a2 in /home/whos/miniconda3/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #6: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x46 (0x7fc820b2a9f6 in /home/whos/miniconda3/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #7: + 0x8c1e4f (0x7fc821135e4f in /home/whos/miniconda3/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #8: + 0x2c2c60 (0x7fc820b36c60 in /home/whos/miniconda3/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #9: + 0x2c3dce (0x7fc820b37dce in /home/whos/miniconda3/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #10: + 0x1289f5 (0x55f6270d59f5 in /home/whos/miniconda3/bin/python)
frame #11: + 0x1c0004 (0x55f62716d004 in /home/whos/miniconda3/bin/python)
frame #12: + 0x128926 (0x55f6270d5926 in /home/whos/miniconda3/bin/python)
frame #13: + 0x1c0004 (0x55f62716d004 in /home/whos/miniconda3/bin/python)
frame #14: + 0x128746 (0x55f6270d5746 in /home/whos/miniconda3/bin/python)
frame #15: + 0x1c0004 (0x55f62716d004 in /home/whos/miniconda3/bin/python)
frame #16: + 0x1289f5 (0x55f6270d59f5 in /home/whos/miniconda3/bin/python)
frame #17: + 0x1c0004 (0x55f62716d004 in /home/whos/miniconda3/bin/python)
frame #18: + 0x128a2a (0x55f6270d5a2a in /home/whos/miniconda3/bin/python)
frame #19: + 0x11d332 (0x55f6270ca332 in /home/whos/miniconda3/bin/python)
frame #20: + 0x13c255 (0x55f6270e9255 in /home/whos/miniconda3/bin/python)
frame #21: _PyGC_CollectNoFail + 0x2a (0x55f6271e285a in /home/whos/miniconda3/bin/python)
frame #22: PyImport_Cleanup + 0x295 (0x55f6271f94d5 in /home/whos/miniconda3/bin/python)
frame #23: Py_FinalizeEx + 0x7d (0x55f6271f968d in /home/whos/miniconda3/bin/python)
frame #24: Py_RunMain + 0x110 (0x55f6271fbb90 in /home/whos/miniconda3/bin/python)
frame #25: Py_BytesMain + 0x39 (0x55f6271fbd19 in /home/whos/miniconda3/bin/python)
frame #26: __libc_start_main + 0xf0 (0x7fc8239eb840 in /lib/x86_64-linux-gnu/libc.so.6)
frame #27: + 0x1dee93 (0x55f62718be93 in /home/whos/miniconda3/bin/python)
'

When I run the code snippet as suggested, it runs fine.
I used a server with 250 GB of memory and some V100 GPUs.
Could you please help me locate the problem?
Thank you so much.

[Training time and anchor depths]

Question

Two questions:

  • The training is supposed to last roughly 5 days, but training the Retina U-Net on my data (epochs: 100, batch size: 10, iterations per batch: 200) takes only 8 hours. The total number of iterations (epochs x iterations per batch x batch size) is only a fraction of the total number of iterations of my training with Retina U-Net. Would you have any idea what could explain this time difference?
    (I checked that the GPU was fully utilized and that the problem wasn't coming from there.)

  • The planner gives the following output:

    {'planner_id': 'D3V001',
     'network_dim': 3,
     'dataloader_kwargs': {},
     'data_identifier': 'D3V001_3d',
     'postprocessing': {},
     'patch_size': array([ 64, 128, 128]),
     'batch_size': 4,
     'architecture': {'arch_name': 'RetinaUNetV001',
                      'max_channels': 320,
                      'start_channels': 32,
                      'fpn_channels': 128,
                      'head_channels': 128,
                      'classifier_classes': 2,
                      'seg_classes': 2,
                      'in_channels': 1,
                      'dim': 3,
                      'class_weight': [0.3333333333333333, 0.6666666666666667, 0.0],
                      'conv_kernels': [[1, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]],
                      'strides': [[1, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]],
                      'decoder_levels': (2, 3, 4, 5)},
     'anchors': {'width': [(6.0, 4.0, 26.0), (12.0, 8.0, 52.0), (24.0, 16.0, 104.0), (48.0, 32.0, 208.0)],
                 'height': [(8.0, 11.0, 15.0), (16.0, 22.0, 30.0), (32.0, 44.0, 60.0), (64.0, 88.0, 120.0)],
                 'depth': [(8.0, 11.0, 15.0), (16.0, 22.0, 30.0), (32.0, 44.0, 60.0), (64.0, 88.0, 120.0)],
                 'stride': 1},
     'target_spacing_transposed': array([2. , 0.8125, 0.8125]),
     'median_shape_transposed': array([311.875, 512. , 512. ]),
     'do_dummy_2D_data_aug': False,
     'trigger_lr1': True}

    The anchor depths seem strange, as the three values are not increasing. Is that a deliberate choice by the optimiser, or might an error have occurred somewhere?

Thank you in advance
Best regards

[Question] Understanding a poor training on the ADAM dataset

Hi! Awesome work :)

Recently we trained nnDetection on the ADAM challenge, i.e., Task019FG_ADAM.
However, the predictions on the test set are pretty bad: a lot of false positives and an overall sensitivity approaching 0. We are trying to understand where it went wrong; maybe you could be of help.

  1. In your case, did the network generate a low-resolution model for the ADAM challenge? Our network did end up generating a low-resolution model, which we did not specifically use further on.

  2. Do you have any suggestions on what could be different with your run?

The input data was unchanged apart from the omission of one patient due to having a T1 image, and we did not deviate from the instructed steps. We trained all five folds and performed a sweep for each. After that we ran the consolidation and prediction steps as instructed.

Thank you for your help!

Best,
Aaron


Environment Information

Currently using an NVIDIA GeForce RTX 2080 Ti; PyTorch 1.8.0; CUDA 11.2.
nnDetection was installed from [docker | source].

PyTorch Version: <module 'torch.version' from '/opt/conda/lib/python3.8/site-packages/torch/version.py'>
PyTorch CUDA: 11.2
PyTorch Backend cudnn: 8100
PyTorch CUDA Arch List: ['sm_52', 'sm_60', 'sm_61', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'compute_86']
PyTorch Current Device Capability: (7, 5)
PyTorch CUDA available: True
System NVCC: nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Nov_30_19:08:53_PST_2020
Cuda compilation tools, release 11.2, V11.2.67
Build cuda_11.2.r11.2/compiler.29373293_0

System Arch List: 7.5
System OMP_NUM_THREADS: 1
System CUDA_HOME is None: False
Python Version: 3.8.5 (default, Sep  4 2020, 07:30:14) 
[GCC 7.3.0]
