
autopet's People

Contributors

b4shy, sergiosgatidis, thomaskuestner


autopet's Issues

About the dataset

Hi, I have some questions about the FDG-PET-CT-lesions dataset. I want to use it, but I sent an access request email a week ago and there has been no answer. How long did it take you to get access to the dataset?

LFS bandwidth

Hi, when I try to use git lfs to download the weights, the following error is raised:

batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
Failed to fetch some objects from 'https://github.com/lab-midas/autoPET.git/inf/lfs'

Are there any other links to download the weights?

Clone error

I'm having trouble cloning the project.

Cloning into 'C:\Users\11745\Documents\GitHub\autoPET'...
remote: Enumerating objects: 227, done.
remote: Counting objects: 100% (32/32), done.
remote: Compressing objects: 100% (31/31), done.
remote: Total 227 (delta 4), reused 10 (delta 1), pack-reused 195
Receiving objects: 100% (227/227), 130.38 MiB | 864.00 KiB/s, done.
Resolving deltas: 100% (102/102), done.
Downloading nnUNet_baseline/weights.zip (233 MB)
Error downloading object: nnUNet_baseline/weights.zip (03c3736): Smudge error: Error downloading nnUNet_baseline/weights.zip (03c37363614349f688f233963bd93dee392431f51408e7c844ad31a8aa0298f6): batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.

Errors logged to 'C:\Users\11745\Documents\GitHub\autoPET.git\lfs\logs\20220523T144625.2955277.log'.
Use git lfs logs last to view the log.
error: external filter 'git-lfs filter-process' failed
fatal: nnUNet_baseline/weights.zip: smudge filter lfs failed
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'

PermissionError: [Errno 13] Permission denied: '/input/images/ct/'

Hi there,

thanks for the great challenge! I am having trouble getting the baseline U-Net Docker container to run. I have modified nothing and am just running the following command:

./test.sh

from the autoPET/uNet_baseline directory. I get the following error:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/algorithm/process.py", line 95, in <module>
    Unet_baseline().process()
  File "/opt/algorithm/process.py", line 87, in process
    uuid = self.load_inputs()
  File "/opt/algorithm/process.py", line 55, in load_inputs
    ct_mha = os.listdir(os.path.join(self.input_path, 'images/ct/'))[0]
PermissionError: [Errno 13] Permission denied: '/input/images/ct/'

It is a bit odd that os.listdir() is throwing a PermissionError. What could be causing this? I am somewhat new to Docker, so this could be a very obvious error, but I would appreciate any help!
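Not an official fix, but a common cause of this symptom: the container runs as the non-root `algorithm` user, so if the mounted `test/input/` tree is not world-readable on the host, `os.listdir()` inside the container fails with `EACCES`. One thing worth trying, assuming the repository's default layout:

```shell
# Make the mounted input tree readable (r) and traversable (X) for all
# users, so the non-root user inside the container can list it.
chmod -R a+rX test/input/
```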

Issues in process.py file

Could you explain what the paths below should include and which files we have to store at these paths to run the code? Do they refer to the ct and pet tar files?

In load_inputs(self):

ct_mha = os.listdir(os.path.join(self.input_path, 'images/ct/'))[0]
pet_mha = os.listdir(os.path.join(self.input_path, 'images/pet/'))[0]
uuid = os.path.splitext(ct_mha)[0]

self.convert_mha_to_nii(os.path.join(self.input_path, 'images/pet/', pet_mha),
                        os.path.join(self.nii_path, 'TCIA_001_0000.nii.gz'))
self.convert_mha_to_nii(os.path.join(self.input_path, 'images/ct/', ct_mha),
                        os.path.join(self.nii_path, 'TCIA_001_0001.nii.gz'))
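From the snippet, `load_inputs()` appears to expect exactly one `.mha` file per modality under `images/ct/` and `images/pet/`, not the TCIA tar archives. A minimal sketch of that assumption, using a temporary directory in place of `/input` and a hypothetical `case.mha` filename standing in for the UUID-named files:

```python
import os
import tempfile

# Recreate the expected layout: one .mha file per modality.
root = tempfile.mkdtemp()
for modality in ("ct", "pet"):
    os.makedirs(os.path.join(root, "images", modality))
    open(os.path.join(root, "images", modality, "case.mha"), "w").close()

# Mirrors the first lines of load_inputs(): pick the single file and
# derive the case identifier from its basename.
ct_mha = os.listdir(os.path.join(root, "images", "ct"))[0]
uuid = os.path.splitext(ct_mha)[0]
print(uuid)  # "case"
```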

Unpickling error while running run_inference function

starting

UnpicklingError                           Traceback (most recent call last)
in ()
    138
    139 if __name__ == '__main__':
--> 140     run_inference()
    141

5 frames
/usr/local/lib/python3.7/dist-packages/torch/serialization.py in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
    918                 "functionality.")
    919
--> 920     magic_number = pickle_module.load(f, **pickle_load_args)
    921     if magic_number != MAGIC_NUMBER:
    922         raise RuntimeError("Invalid magic number; corrupt file?")

UnpicklingError: invalid load key, 'v'.
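For what it's worth, `invalid load key, 'v'` from `torch.load` often means the checkpoint file is not a real checkpoint but a Git LFS pointer that was never fetched: the pointer text begins with `version https://git-lfs.github.com/spec/v1`, and its leading `v` is the invalid load key. A quick check (a sketch, not anything official; the file names are hypothetical):

```python
def is_lfs_pointer(path):
    """Heuristic: un-fetched Git LFS pointer files start with 'version'."""
    with open(path, "rb") as f:
        return f.read(7) == b"version"

# Hypothetical demonstration files:
with open("pointer.bin", "wb") as f:
    f.write(b"version https://git-lfs.github.com/spec/v1\n")
with open("real.bin", "wb") as f:
    f.write(b"\x80\x02 arbitrary binary, not a pointer")

print(is_lfs_pointer("pointer.bin"))  # True
print(is_lfs_pointer("real.bin"))    # False
```

If the check comes back True, re-fetch the file with `git lfs pull` (or download it from another source) before calling `torch.load`.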

annotation protocol

Dear organizers,

the grand-challenge forum does not allow direct image uploads, so I am posting the question here.

I found the following annotation protocol in the challenge document

  • Step 1: Identification of FDG-avid tumor lesions by visual assessment of PET and CT information together with the clinical examination reports.
  • Step 2: Manual free-hand segmentation of identified lesions in axial slices

Would it be possible to provide more details on the annotation protocol? E.g., how do the radiologists identify the lesions from PET and CT, and how do they avoid false positives and false negatives based on the images?

I went through the training images and it seems that most lesions have high intensity in PET.
However, the case PETCT_1b199d094d-0.nii.gz does not match this pattern. Is this a label error?

[attached screenshot: WeChat Image_20220822221559]

Unable to determine ImageIO reader for "/input/images/pet/e260efef-0a29-4c68-972e-9e573c740de5.mha"

Dear organizers,

I tested the nnUNet baseline but got the following error:

Checking GPU availability
Available: True
Device count: 2
Current device: 0
Device name: NVIDIA GeForce RTX 2080 Ti
Device memory: 11554848768
Start processing
Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/algorithm/process.py", line 202, in <module>
    Autopet_baseline().process()
  File "/opt/algorithm/process.py", line 193, in process
    uuid = self.load_inputs()
  File "/opt/algorithm/process.py", line 63, in load_inputs
    os.path.join(self.nii_path, 'TCIA_001_0000.nii.gz'))
  File "/opt/algorithm/process.py", line 33, in convert_mha_to_nii
    img = SimpleITK.ReadImage(mha_input_path)
  File "/home/algorithm/.local/lib/python3.7/site-packages/SimpleITK/extra.py", line 346, in ReadImage
    return reader.Execute()
  File "/home/algorithm/.local/lib/python3.7/site-packages/SimpleITK/SimpleITK.py", line 8015, in Execute
    return _SimpleITK.ImageFileReader_Execute(self)
RuntimeError: Exception thrown in SimpleITK ImageFileReader_Execute: /tmp/SimpleITK/Code/IO/src/sitkImageReaderBase.cxx:105:
sitk::ERROR: Unable to determine ImageIO reader for "/input/images/pet/e260efef-0a29-4c68-972e-9e573c740de5.mha"
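One possible cause worth ruling out before digging into SimpleITK itself: a valid MetaImage (`.mha`) starts with a plain-text header (`ObjectType = Image`), whereas an un-fetched Git LFS pointer starts with `version https://git-lfs...`, which no ImageIO recognizes. A quick sanity check using only the standard library (the `sample.mha` file here is a hypothetical stand-in for the file under test):

```python
def peek_header(path, n=64):
    """Return the first n bytes of a file, for a quick eyeball check."""
    with open(path, "rb") as f:
        return f.read(n)

# A healthy MetaImage header begins with an 'ObjectType' line.
with open("sample.mha", "wb") as f:
    f.write(b"ObjectType = Image\nNDims = 3\n")

print(peek_header("sample.mha").startswith(b"ObjectType"))  # True
```

If the first bytes of your `.mha` read `version https://git-lfs...` instead, fetch the real object with `git lfs pull`.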

This is my folder structure

├── data_conversion
│   ├── mha2nii.py
│   ├── nii2mha.py
│   ├── tcia2hdf5.py
│   └── tcia2nifti.py
├── LICENSE
├── nnUNet_baseline
│   ├── build.sh
│   ├── checkpoints
│   │   └── nnUNet
│   │       └── 3d_fullres
│   │           └── Task001_TCIA
│   │               └── nnUNetTrainerV2__nnUNetPlansv2.1
│   │                   ├── fold_0
│   │                   │   ├── debug.json
│   │                   │   ├── model_final_checkpoint.model
│   │                   │   ├── model_final_checkpoint.model.pkl
│   │                   │   └── progress.png
│   │                   └── plans.pkl
│   ├── Dockerfile
│   ├── Dockerfile.eval
│   ├── export.sh
│   ├── predict.py
│   ├── process.py
│   ├── README.md
│   ├── requirements.txt
│   └── test.sh
├── README.md
├── test
│   ├── expected_output_nnUNet
│   │   └── images
│   │       └── TCIA_001.nii.gz
│   ├── expected_output_uNet
│   │   └── PRED.nii.gz
│   └── input
│       └── images
│           ├── ct
│           │   └── af3b6605-c2b9-4067-8af5-8b85aafb2ae3.mha
│           └── pet
│               └── e260efef-0a29-4c68-972e-9e573c740de5.mha

and this is my Dockerfile:

FROM pytorch/pytorch


RUN groupadd -r algorithm && useradd -m --no-log-init -r -g algorithm algorithm

RUN mkdir -p /opt/algorithm /input /output \
    && chown algorithm:algorithm /opt/algorithm /input /output

USER algorithm

WORKDIR /opt/algorithm

ENV PATH="/home/algorithm/.local/bin:${PATH}"

RUN python -m pip install --user -U pip

COPY --chown=algorithm:algorithm requirements.txt /opt/algorithm/
RUN python -m pip install --user torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
RUN python -m pip install --user -r requirements.txt

COPY --chown=algorithm:algorithm process.py /opt/algorithm/
COPY --chown=algorithm:algorithm predict.py /opt/algorithm/

# RUN mkdir -p /opt/algorithm/checkpoints/nnUNet/

# Store your weights in the container
COPY --chown=algorithm:algorithm checkpoints /opt/algorithm/

# nnUNet specific setup
RUN mkdir -p /opt/algorithm/nnUNet_raw_data_base/nnUNet_raw_data/Task001_TCIA/imagesTs
RUN mkdir -p /opt/algorithm/nnUNet_raw_data_base/nnUNet_raw_data/Task001_TCIA/result

ENV nnUNet_raw_data_base="/opt/algorithm/nnUNet_raw_data_base"
ENV RESULTS_FOLDER="/opt/algorithm/checkpoints"
ENV MKL_SERVICE_FORCE_INTEL=1


ENTRYPOINT python -m process $0 $@


Any comments would be highly appreciated:)

shared memory size at docker run time

Hi there,
how large will the shared memory size be when the containers are evaluated on grand-challenge.org? As it currently stands, my model refuses to run with the run command from test.sh:

docker run -it --rm \
        --memory="${MEM_LIMIT}" \
        --memory-swap="${MEM_LIMIT}" \
        --network="none" \
        --cap-drop="ALL" \
        --security-opt="no-new-privileges" \
        --shm-size="128m" \
        --pids-limit="256" \
        --gpus="all" \
        -v $SCRIPTPATH/test/input/:/input/ \
        -v autopet_baseline-output-$VOLUME_SUFFIX:/output/ \
        autopet_baseline

The reason for this is --shm-size="128m". Increasing it to --shm-size="1g" fixes the problem. What will this setting be when we submit to the leaderboard?
Best,
Fabian

Additional information on unet baseline

How long does it take to train a U-Net for 800 epochs with your hyperparameters?
How many samples per volume do you use at each epoch?
Do you use a scheduler?

Best,
Hugo
