
IDHmut's Introduction

IDHmut

Example workflow to predict IDH mutation status from histopathological whole slide images (WSI) using deep learning.

Liechty, B., Xu, Z., Zhang, Z. et al. Machine learning can aid in prediction of IDH mutation from H&E-stained histology slides in infiltrating gliomas. Sci Rep 12, 22623 (2022). https://doi.org/10.1038/s41598-022-26170-6

1. Tiling Slides

The first step is to tile the slides into patches and store them in a specified folder. A csv file with two columns ('SVS_Path', 'PatientID') is required, listing the svs file paths and patient IDs. See the csv folder for an example file for the --df_path argument.
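As a quick illustration, such a CSV could be assembled with pandas (a minimal sketch; the slide directory and the use of the filename stem as PatientID are assumptions to adapt to your data):

# Minimal sketch: build the CSV expected by Tiling.py.
# /path/to/slides and the PatientID scheme are placeholders.
from pathlib import Path
import pandas as pd

slide_dir = Path('/path/to/slides')
rows = [{'SVS_Path': str(p), 'PatientID': p.stem}
        for p in sorted(slide_dir.glob('*.svs'))]
pd.DataFrame(rows).to_csv('tilingslides.csv', index=False)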

The user has to assign two arguments: --df_path and --target_path

Basic command:

python3 Tiling.py --df_path '/path/to/csv/tilingslides.csv' --target_path '/root/folder/path/for/tiles'

Some optional arguments:

'--workers',type=int,default=8

'--tilesize',type=int,default=256

'--stride',type=int,default=256 (when set smaller than tilesize, tiles will overlap)

'--tissuepct_value',type=float,default=0.7 (minimum tissue fraction for quality control; an illustrative computation appears at the end of this section)

'--magnification',type=str,default='multi' (by default, 2.5x, 5x, 10x, and 20x are all tiled; otherwise, assign one target magnification). Note that training uses tiles from a single magnification; it is recommended to compare validation performance across magnifications to choose the optimal one.

Example (tiles 2.5x patches with tissue percentage over 50%):

python3 Tiling.py --df_path '/path/to/csv/file.csv' --target_path '/root/folder/path/for/tiles' --tissuepct_value 0.5 --magnification '2.5x'
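The README does not show how the tissue percentage is computed internally; for intuition, one common approach (which may differ from the repository's implementation) is to threshold pixel saturation in HSV space:

# Illustration only: estimate the tissue fraction of a tile by
# counting pixels whose HSV saturation exceeds a small threshold.
# The repository's own QC routine may differ from this sketch.
import numpy as np
from PIL import Image

def tissue_pct(tile_path, sat_threshold=0.05):
    hsv = np.asarray(Image.open(tile_path).convert('RGB').convert('HSV'),
                     dtype=np.float32)
    sat = hsv[..., 1] / 255.0
    return float((sat > sat_threshold).mean())

# keep = tissue_pct('tile.png') >= 0.5   # mirrors --tissuepct_value 0.5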

2. Model Training

After tiling the slides, we can start training models.

A csv file for --df_path is required. An example, training.csv, can be found in the csv folder. Three columns are required: 1. a label column, whose name is passed via --y_col; 2. 'Path': the path to the tile folder generated for each slide in the previous step; 3. 'Train_Test': containing the values 'Train'/'Validation'/'Test'. Train and Validation are required.
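For concreteness, a minimal training.csv could be written like this (the label name 'IDH', the tile paths, and the split assignments are placeholders):

# Minimal sketch of the metadata expected by Train.py.
# 'IDH' is the label column passed via --y_col (values must be 0/1);
# 'Path' points to the per-slide tile folders produced in step 1.
import pandas as pd

df = pd.DataFrame({
    'IDH':        [0, 1, 1],
    'Path':       ['/tiles/10x/case_001', '/tiles/10x/case_002', '/tiles/10x/case_003'],
    'Train_Test': ['Train', 'Train', 'Validation'],
})
df.to_csv('training.csv', index=False)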

Some important arguments:

'--result_dir', type=str: folder path for saving the model

'--df_path': path to the training metadata

'--gpu': at least two GPUs are required; default is '0,1,2,3'

'--patch_n': number of patches to sample from each slide during each iteration

'--balance': weights for balancing the loss function across classes; default is 0.5. If weighting the loss function for class imbalance, --balance_training_off is suggested

'--balance_training': balances classes during training by automatically sampling the same number of examples from each class. If used, it is suggested to leave --balance at its default

'--CNN': choose from resnet and densenet

'--y_col': name of the label column in the csv file. Values must be numeric: 0 or 1

'--freeze_batchnorm': setting this is suggested for more stable results

'--pooling': select from attention, max, and mean for aggregating the tile embeddings from one slide

'--A': if --pooling is set to attention, the number of nodes in the attention network. Default is 16 (see the sketch below).
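For intuition, attention pooling in this setting typically follows Ilse et al. (2018): a small network scores each tile embedding, and the slide embedding is the attention-weighted sum of tile embeddings. The sketch below is illustrative, not the repository's exact module; A corresponds to the hidden width set by --A.

# Illustrative attention-MIL pooling; the repository's module may differ.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, embed_dim, A=16):
        super().__init__()
        # two-layer scoring network with A hidden nodes (--A)
        self.attn = nn.Sequential(nn.Linear(embed_dim, A), nn.Tanh(), nn.Linear(A, 1))

    def forward(self, h):                        # h: (n_tiles, embed_dim)
        w = torch.softmax(self.attn(h), dim=0)   # per-tile weights, sum to 1
        return (w * h).sum(dim=0)                # slide-level embedding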

Example command:

python3 Train.py --result_dir '/path/for/model' --df_path '/path/to/training.csv' --workers 16 --CNN densenet --no_age --patch_n 200 --spatial_sample_off --n_epoch 100 --lr 0.00001 --optimizer Adam --use_scheduler --balance 0.5 --balance_training --freeze_batchnorm --pooling mean --notes model0

3. Model Evaluation

python3 code/Evaluation.py --df_path 'path/to/dataframe/' --y_col='IDH' --Model_Folder 'path/to/your/trained/model' --key_word 'Test' --no_age --two_forward_off
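The output format of Evaluation.py is not documented here; assuming it produces per-slide predictions in a CSV (the file name and the 'IDH'/'prob' columns below are assumptions), a slide-level ROC AUC could be computed as:

# Hypothetical post-processing of the evaluation output.
import pandas as pd
from sklearn.metrics import roc_auc_score

preds = pd.read_csv('predictions.csv')   # assumed output file
print('AUC:', roc_auc_score(preds['IDH'], preds['prob']))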

Instructions for using GPUs and computational clusters

1. Create a conda environment:

conda create -n N where N is the name of your environment

2. Activate the environment:

conda activate N where N is the name of your conda environment

3. Check the CUDA version compatible with the driver:

nvidia-smi

4. Install the correct CUDA toolkit and PyTorch (a quick verification sketch follows the SLURM example below):

conda install pytorch torchvision cudatoolkit=11.0 -c pytorch

5. Download and install cuDNN by following NVIDIA's installation instructions

6. Untar the downloaded cuDNN archive, then move its contents into the conda environment:

cp cuda/include/cudnn*.h /path/to/your/environment/include
cp cuda/lib64/libcudnn* /path/to/your/environment/lib

7. Example submission script for training using 2 GPUs on SLURM:

#! /bin/bash -l
#SBATCH --partition=panda-gpu
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=15
#SBATCH --job-name=training
#SBATCH --time=1-00:00:00
#SBATCH --mem=64G
#SBATCH --gres=gpu:2

source ~/.bashrc

conda activate /path/to/your/environment

python3 code/Train.py --result_dir 'data/model/' --df_path 'MetaData_training.csv' --workers 15 --CNN densenet --no_age --y_col 'Gleason_HighLow' --patch_n 200 --spatial_sample_off --n_epoch 100 --lr 0.00001 --optimizer Adam --use_scheduler --balance 0.5 --balance_training --freeze_batchnorm --pooling mean --notes model0 --gpu 2
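Before submitting, a quick way to verify that the environment from the steps above can see the GPUs and cuDNN (run on a GPU node, e.g. in an interactive session):

# Sanity check for the CUDA/cuDNN setup.
import torch

print('CUDA available:', torch.cuda.is_available())
print('GPU count:     ', torch.cuda.device_count())
print('cuDNN enabled: ', torch.backends.cudnn.enabled)
print('cuDNN version: ', torch.backends.cudnn.version())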

IDHmut's People

Contributors

karenxzr, mohamedomar2020

IDHmut's Issues

Issue with training

Thanks for making this code available.

I'm running into a problem when trying to run the training module.

I run this code:

python Train.py --result_dir data/output/train_output/ --df_path csv/training.csv --workers 16 --CNN densenet --no_age --patch_n 200 --spatial_sample_off --n_epoch 100 --lr 0.00001 --optimizer Adam --use_scheduler --balance 0.5 --balance_training --freeze_batchnorm --pooling mean --notes model0 --y_col 'TET2' --gpu '1,2'

and get this error:

the result dir is:  data/output/train_output/2023-04-25 16:31:03
Building Model and Optimizer
~/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
  warnings.warn(
~/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=DenseNet121_Weights.IMAGENET1K_V1`. You can also use `weights=DenseNet121_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Balance Training Weighted Sampler Used
          0
0  0.000026
1  0.000136
Start Training:
Epoch1starts:
Traceback (most recent call last):
  File "/mnt/data08/shared/skrichevsky/MutationPrediction/IDHmut/Train.py", line 374, in <module>
    main()
  File "/mnt/data08/shared/skrichevsky/MutationPrediction/IDHmut/Train.py", line 182, in main
    loss,acc= train(epoch,model0=CNN_model,model1=attention_model,
  File "/mnt/data08/shared/skrichevsky/MutationPrediction/IDHmut/Train.py", line 263, in train
    for batch_idx, (data, label) in enumerate(train_loader):
  File "~/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
    data = self._next_data()
  File "~/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1376, in _next_data
    return self._process_data(data)
  File "~/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
    data.reraise()
  File "~/lib/python3.10/site-packages/torch/_utils.py", line 461, in reraise
    raise exception
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "~/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "~/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "~/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/mnt/data08/shared/skrichevsky/MutationPrediction/IDHmut/Model/DataLoader_torch.py", line 43, in __getitem__
    input_array = Morph_Augmentor.Augmentation_from_Folder(df.loc[index,'Path'],target_w=self.target_w, target_h=self.target_h,
  File "/mnt/data08/shared/skrichevsky/MutationPrediction/IDHmut/Preprocess/Morph_Augmentor.py", line 26, in Augmentation_from_Folder
    path = random.choices(path, k=samples)
  File "~/lib/python3.10/random.py", line 519, in choices
    return [population[floor(random() * n)] for i in _repeat(None, k)]
  File "~/lib/python3.10/random.py", line 519, in <listcomp>
    return [population[floor(random() * n)] for i in _repeat(None, k)]
IndexError: list index out of range

I've tried forcing samples=1 in the definition of Augmentation_from_Folder to allow sample paths to be extracted, but that didn't help.

def Augmentation_from_Folder(FolderPath, target_w,target_h, p_flip=0.5, p_rotate=0.5,
                             samples = 1, sigma=0.05,ColorAugmentation = True,spatial_sample=False,KeepPath=False):
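(Note: the traceback shows random.choices being called with an empty population, which usually means at least one 'Path' entry in the CSV points to a missing or empty tile folder. A quick check, with the CSV path assumed:)

# Diagnostic sketch: flag rows whose tile folder is missing or empty.
import os
import pandas as pd

df = pd.read_csv('csv/training.csv')
for p in df['Path']:
    if not os.path.isdir(p) or not os.listdir(p):
        print('missing or empty tile folder:', p)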

This is my environment:

defaults/linux-64::_libgcc_mutex-0.1-main
defaults/linux-64::_openmp_mutex-5.1-1_gnu
pypi/pypi::absl-py-1.3.0-pypi_0
pypi/pypi::alabaster-0.7.12-pypi_0
pypi/pypi::albumentations-1.3.0-pypi_0
pypi/pypi::anyio-3.6.2-pypi_0
pypi/pypi::argon2-cffi-21.3.0-pypi_0
pypi/pypi::argon2-cffi-bindings-21.2.0-pypi_0
pypi/pypi::asciitree-0.3.3-pypi_0
pypi/pypi::asttokens-2.0.8-pypi_0
pypi/pypi::astunparse-1.6.3-pypi_0
pypi/pypi::attrs-22.1.0-pypi_0
pypi/pypi::babel-2.10.3-pypi_0
pypi/pypi::backcall-0.2.0-pypi_0
pypi/pypi::beautifulsoup4-4.11.1-pypi_0
defaults/linux-64::blas-1.0-mkl
pypi/pypi::bleach-5.0.1-pypi_0
pypi/pypi::brotlipy-0.7.0-pypi_0
defaults/linux-64::bzip2-1.0.8-h7b6447c_0
defaults/linux-64::ca-certificates-2023.01.10-h06a4308_0
pypi/pypi::cachetools-5.2.0-pypi_0
defaults/linux-64::cairo-1.16.0-h19f5f5c_2
pypi/pypi::certifi-2022.12.7-pypi_0
pypi/pypi::cffi-1.15.1-pypi_0
pypi/pypi::charset-normalizer-2.1.1-pypi_0
pypi/pypi::click-8.1.3-pypi_0
pypi/pypi::contourpy-1.0.5-pypi_0
pypi/pypi::cryptography-38.0.1-pypi_0
nvidia/linux-64::cuda-cudart-11.7.99-0
nvidia/linux-64::cuda-cupti-11.7.101-0
nvidia/linux-64::cuda-libraries-11.7.1-0
nvidia/linux-64::cuda-nvrtc-11.7.99-0
nvidia/linux-64::cuda-nvtx-11.7.91-0
nvidia/linux-64::cuda-runtime-11.7.1-0
defaults/linux-64::cudatoolkit-11.3.1-h2bc3f7f_2
pypi/pypi::cycler-0.11.0-pypi_0
pypi/pypi::debugpy-1.6.3-pypi_0
pypi/pypi::decorator-5.1.1-pypi_0
pypi/pypi::defusedxml-0.7.1-pypi_0
pypi/pypi::dicom-0.9.9.post1-pypi_0
pypi/pypi::docutils-0.19-pypi_0
pypi/pypi::entrypoints-0.4-pypi_0
pypi/pypi::executing-1.1.1-pypi_0
pypi/pypi::fasteners-0.18-pypi_0
pypi/pypi::fastjsonschema-2.16.2-pypi_0
pytorch/linux-64::ffmpeg-4.3-hf484d3e_0
defaults/linux-64::fftw-3.3.9-h27cfd23_1
pypi/pypi::flask-2.2.2-pypi_0
pypi/pypi::flatbuffers-22.10.26-pypi_0
defaults/noarch::flit-core-3.6.0-pyhd3eb1b0_0
defaults/linux-64::fontconfig-2.13.1-h6c09931_0
pypi/pypi::fonttools-4.38.0-pypi_0
defaults/linux-64::freetype-2.11.0-h70c0345_0
pypi/pypi::future-0.18.2-pypi_0
pypi/pypi::gast-0.4.0-pypi_0
defaults/linux-64::gdk-pixbuf-2.42.8-h433bba3_0
defaults/linux-64::giflib-5.2.1-h5eee18b_3
defaults/linux-64::glib-2.69.1-h4ff587b_1
pypi/pypi::glymur-0.11.7-pypi_0
defaults/linux-64::gmp-6.2.1-h295c915_3
defaults/linux-64::gnutls-3.6.15-he1e5248_0
pypi/pypi::google-auth-2.14.0-pypi_0
pypi/pypi::google-auth-oauthlib-0.4.6-pypi_0
pypi/pypi::google-pasta-0.2.0-pypi_0
pypi/pypi::grpcio-1.50.0-pypi_0
pypi/pypi::h5py-3.7.0-pypi_0
defaults/linux-64::icu-58.2-he6710b0_3
pypi/pypi::idna-3.4-pypi_0
pypi/pypi::imagecodecs-2022.9.26-pypi_0
pypi/pypi::imageio-2.22.2-pypi_0
pypi/pypi::imagesize-1.4.1-pypi_0
defaults/linux-64::intel-openmp-2021.4.0-h06a4308_3561
pypi/pypi::ipykernel-6.16.1-pypi_0
pypi/pypi::ipython-8.5.0-pypi_0
pypi/pypi::ipython-genutils-0.2.0-pypi_0
pypi/pypi::itsdangerous-2.1.2-pypi_0
pypi/pypi::jedi-0.18.1-pypi_0
pypi/pypi::jinja2-3.1.2-pypi_0
pypi/pypi::joblib-1.1.1-pypi_0
defaults/linux-64::jpeg-9e-h7f8727e_0
pypi/pypi::json5-0.9.10-pypi_0
pypi/pypi::jsonschema-4.16.0-pypi_0
pypi/pypi::jupyter-client-7.4.3-pypi_0
pypi/pypi::jupyter-core-4.11.2-pypi_0
pypi/pypi::jupyter-server-1.21.0-pypi_0
pypi/pypi::jupyterlab-3.5.0-pypi_0
pypi/pypi::jupyterlab-pygments-0.2.2-pypi_0
pypi/pypi::jupyterlab-server-2.16.1-pypi_0
pypi/pypi::keras-2.10.0-pypi_0
pypi/pypi::keras-preprocessing-1.1.2-pypi_0
pypi/pypi::kiwisolver-1.4.4-pypi_0
defaults/linux-64::lame-3.100-h7b6447c_0
pypi/pypi::lazy-loader-0.2-pypi_0
defaults/linux-64::lcms2-2.12-h3be6417_0
defaults/linux-64::ld_impl_linux-64-2.38-h1181459_1
defaults/linux-64::lerc-3.0-h295c915_0
pypi/pypi::libclang-14.0.6-pypi_0
nvidia/linux-64::libcublas-11.10.3.66-0
nvidia/linux-64::libcufft-10.7.2.124-h4fbf590_0
nvidia/linux-64::libcufile-1.6.1.9-0
nvidia/linux-64::libcurand-10.3.2.106-0
nvidia/linux-64::libcusolver-11.4.0.1-0
nvidia/linux-64::libcusparse-11.7.4.91-0
defaults/linux-64::libdeflate-1.8-h7f8727e_5
defaults/linux-64::libffi-3.3-he6710b0_2
defaults/linux-64::libgcc-7.2.0-h69d50b8_2
defaults/linux-64::libgcc-ng-11.2.0-h1234567_1
defaults/linux-64::libgfortran-ng-11.2.0-h00389a5_1
defaults/linux-64::libgfortran5-11.2.0-h1234567_1
defaults/linux-64::libgomp-11.2.0-h1234567_1
conda-forge/linux-64::libiconv-1.17-h166bdaf_0
defaults/linux-64::libidn2-2.3.2-h7f8727e_0
nvidia/linux-64::libnpp-11.7.4.75-0
nvidia/linux-64::libnvjpeg-11.8.0.2-0
defaults/linux-64::libpng-1.6.37-hbc83047_0
defaults/linux-64::libstdcxx-ng-11.2.0-h1234567_1
defaults/linux-64::libtasn1-4.19.0-h5eee18b_0
defaults/linux-64::libtiff-4.4.0-hecacb30_0
defaults/linux-64::libunistring-0.9.10-h27cfd23_0
defaults/linux-64::libuuid-1.0.3-h7f8727e_2
defaults/linux-64::libwebp-1.2.4-h11a3e52_1
defaults/linux-64::libwebp-base-1.2.4-h5eee18b_0
defaults/linux-64::libxcb-1.15-h7f8727e_0
defaults/linux-64::libxml2-2.9.14-h74e7548_0
pypi/pypi::llvmlite-0.39.1-pypi_0
pypi/pypi::lxml-4.9.1-pypi_0
defaults/linux-64::lz4-c-1.9.3-h295c915_1
pypi/pypi::markdown-3.4.1-pypi_0
pypi/pypi::markupsafe-2.1.1-pypi_0
pypi/pypi::matplotlib-3.6.1-pypi_0
pypi/pypi::matplotlib-inline-0.1.6-pypi_0
pypi/pypi::mistune-2.0.4-pypi_0
defaults/linux-64::mkl-2021.4.0-h06a4308_640
pypi/pypi::mkl-fft-1.3.1-pypi_0
pypi/pypi::mkl-random-1.2.2-pypi_0
pypi/pypi::mkl-service-2.4.0-pypi_0
defaults/linux-64::mkl_fft-1.3.1-py310hd6ae3a3_0
defaults/linux-64::mkl_random-1.2.2-py310h00e6091_0
pypi/pypi::nbclassic-0.4.5-pypi_0
pypi/pypi::nbclient-0.7.0-pypi_0
pypi/pypi::nbconvert-7.2.2-pypi_0
pypi/pypi::nbformat-5.7.0-pypi_0
defaults/linux-64::ncurses-6.3-h5eee18b_3
pypi/pypi::nest-asyncio-1.5.6-pypi_0
pypi/pypi::nets-0.0.3.1-pypi_0
defaults/linux-64::nettle-3.7.3-hbbd107a_1
pypi/pypi::networkx-2.8.7-pypi_0
defaults/linux-64::ninja-1.10.2-h06a4308_5
defaults/linux-64::ninja-base-1.10.2-hd09550d_5
pypi/pypi::notebook-6.5.1-pypi_0
pypi/pypi::notebook-shim-0.2.0-pypi_0
pypi/pypi::numba-0.56.3-pypi_0
pypi/pypi::numcodecs-0.10.2-pypi_0
pypi/pypi::numpy-1.23.5-pypi_0
defaults/linux-64::numpy-base-1.23.3-py310h8e6c178_0
pypi/pypi::nvidia-cublas-cu11-11.10.3.66-pypi_0
pypi/pypi::nvidia-cuda-nvrtc-cu11-11.7.99-pypi_0
pypi/pypi::nvidia-cuda-runtime-cu11-11.7.99-pypi_0
pypi/pypi::nvidia-cudnn-cu11-8.5.0.96-pypi_0
pypi/pypi::oauthlib-3.2.2-pypi_0
pypi/pypi::opencv-python-4.6.0.66-pypi_0
pypi/pypi::opencv-python-headless-4.6.0.66-pypi_0
defaults/linux-64::openh264-2.1.1-h4ff587b_0
defaults/linux-64::openjpeg-2.4.0-h3ad879b_0
bioconda/linux-64::openslide-3.4.1-2
pypi/pypi::openslide-python-1.2.0-pypi_0
defaults/linux-64::openssl-1.1.1t-h7f8727e_0
pypi/pypi::opt-einsum-3.3.0-pypi_0
pypi/pypi::packaging-21.3-pypi_0
pypi/pypi::pandas-1.5.1-pypi_0
pypi/pypi::pandocfilters-1.5.0-pypi_0
pypi/pypi::parso-0.8.3-pypi_0
defaults/linux-64::pcre-8.45-h295c915_0
pypi/pypi::pexpect-4.8.0-pypi_0
pypi/pypi::pickleshare-0.7.5-pypi_0
pypi/pypi::pillow-9.2.0-pypi_0
pypi/pypi::pip-22.2.2-pypi_0
defaults/linux-64::pixman-0.40.0-h7f8727e_1
pypi/pypi::prometheus-client-0.15.0-pypi_0
pypi/pypi::prompt-toolkit-3.0.31-pypi_0
pypi/pypi::protobuf-3.19.6-pypi_0
pypi/pypi::psutil-5.9.0-pypi_0
pypi/pypi::ptyprocess-0.7.0-pypi_0
pypi/pypi::pure-eval-0.2.2-pypi_0
pypi/pypi::pyasn1-0.4.8-pypi_0
pypi/pypi::pyasn1-modules-0.2.8-pypi_0
defaults/noarch::pycparser-2.21-pyhd3eb1b0_0
pypi/pypi::pydicom-2.3.1-pypi_0
pyg/linux-64::pyg-2.2.0-py310_torch_1.12.0_cu113
pypi/pypi::pygments-2.13.0-pypi_0
pypi/pypi::pynndescent-0.5.7-pypi_0
defaults/noarch::pyopenssl-22.0.0-pyhd3eb1b0_0
pypi/pypi::pyparsing-3.0.9-pypi_0
pypi/pypi::pyrsistent-0.18.1-pypi_0
pypi/pypi::pysocks-1.7.1-pypi_0
defaults/linux-64::python-3.10.6-haa1d7c7_1
pypi/pypi::python-dateutil-2.8.2-pypi_0
pytorch/linux-64::pytorch-1.12.1-py3.10_cuda11.3_cudnn8.3.2_0
pyg/linux-64::pytorch-cluster-1.6.0-py310_torch_1.12.0_cu113
pytorch/linux-64::pytorch-cuda-11.7-h778d358_3
pytorch/noarch::pytorch-mutex-1.0-cuda
pyg/linux-64::pytorch-scatter-2.1.0-py310_torch_1.12.0_cu113
pyg/linux-64::pytorch-sparse-0.6.16-py310_torch_1.12.0_cu113
pypi/pypi::pytz-2022.5-pypi_0
pypi/pypi::pywavelets-1.4.1-pypi_0
pypi/pypi::pyyaml-6.0-pypi_0
pypi/pypi::pyzmq-24.0.1-pypi_0
pypi/pypi::qudida-0.0.4-pypi_0
defaults/linux-64::readline-8.1.2-h7f8727e_1
pypi/pypi::requests-2.28.1-pypi_0
pypi/pypi::requests-oauthlib-1.3.1-pypi_0
pypi/pypi::rsa-4.9-pypi_0
pypi/pypi::scikit-image-0.20.0-pypi_0
pypi/pypi::scikit-learn-1.1.3-pypi_0
pypi/pypi::scipy-1.9.3-pypi_0
pypi/pypi::send2trash-1.8.0-pypi_0
pypi/pypi::setuptools-63.4.1-pypi_0
pypi/pypi::shapely-1.8.5.post1-pypi_0
pypi/pypi::simpleitk-2.2.0-pypi_0
defaults/noarch::six-1.16.0-pyhd3eb1b0_1
pypi/pypi::sniffio-1.3.0-pypi_0
pypi/pypi::snowballstemmer-2.2.0-pypi_0
pypi/pypi::soupsieve-2.3.2.post1-pypi_0
pypi/pypi::sphinx-5.3.0-pypi_0
pypi/pypi::sphinxcontrib-applehelp-1.0.2-pypi_0
pypi/pypi::sphinxcontrib-devhelp-1.0.2-pypi_0
pypi/pypi::sphinxcontrib-htmlhelp-2.0.0-pypi_0
pypi/pypi::sphinxcontrib-jsmath-1.0.1-pypi_0
pypi/pypi::sphinxcontrib-qthelp-1.0.3-pypi_0
pypi/pypi::sphinxcontrib-serializinghtml-1.1.5-pypi_0
defaults/linux-64::sqlite-3.39.3-h5082296_0
pypi/pypi::stack-data-0.5.1-pypi_0
pypi/pypi::tensorboard-2.10.1-pypi_0
pypi/pypi::tensorboard-data-server-0.6.1-pypi_0
pypi/pypi::tensorboard-plugin-wit-1.8.1-pypi_0
pypi/pypi::tensorflow-2.10.0-pypi_0
pypi/pypi::tensorflow-estimator-2.10.0-pypi_0
pypi/pypi::tensorflow-io-gcs-filesystem-0.27.0-pypi_0
pypi/pypi::termcolor-2.1.0-pypi_0
pypi/pypi::terminado-0.16.0-pypi_0
pypi/pypi::tf-slim-1.1.0-pypi_0
pypi/pypi::threadpoolctl-3.1.0-pypi_0
pypi/pypi::tiatoolbox-1.3.1-pypi_0
pypi/pypi::tifffile-2022.10.10-pypi_0
pypi/pypi::tinycss2-1.2.1-pypi_0
defaults/linux-64::tk-8.6.12-h1ccaba5_0
pypi/pypi::tomli-2.0.1-pypi_0
pypi/pypi::torch-1.13.1-pypi_0
pypi/pypi::torch-geometric-2.2.0-pypi_0
pypi/pypi::torch-scatter-2.1.0+pt113cu117-pypi_0
pypi/pypi::torch-sparse-0.6.16+pt113cu117-pypi_0
pypi/pypi::torchaudio-0.12.1-pypi_0
pypi/pypi::torchvision-0.14.1-pypi_0
pypi/pypi::tornado-6.2-pypi_0
pypi/pypi::tqdm-4.64.1-pypi_0
pypi/pypi::traitlets-5.5.0-pypi_0
pypi/pypi::typing-extensions-4.4.0-pypi_0
defaults/linux-64::typing_extensions-4.4.0-py310h06a4308_0
defaults/noarch::tzdata-2022e-h04d1e81_0
pypi/pypi::ujson-5.5.0-pypi_0
pypi/pypi::umap-learn-0.5.3-pypi_0
pypi/pypi::urllib3-1.26.13-pypi_0
pypi/pypi::wcwidth-0.2.5-pypi_0
pypi/pypi::webencodings-0.5.1-pypi_0
pypi/pypi::websocket-client-1.4.1-pypi_0
pypi/pypi::werkzeug-2.2.2-pypi_0
defaults/noarch::wheel-0.37.1-pyhd3eb1b0_0
pypi/pypi::wrapt-1.14.1-pypi_0
pypi/pypi::wsidicom-0.7.0-pypi_0
defaults/linux-64::xz-5.2.6-h5eee18b_0
defaults/linux-64::yaml-0.2.5-h7b6447c_0
pypi/pypi::zarr-2.13.3-pypi_0
defaults/linux-64::zlib-1.2.13-h5eee18b_0
defaults/linux-64::zstd-1.5.2-ha4553b6_0

Please let me know if you have any thoughts on how to resolve this.

Exclusion of Data

Hi,
This is excellent work and great use-case.
There are around 1,100 patients in the GBM+LGG collection; 879 patients have WSIs, and 801 of the corresponding slides have IDH information available. The paper mentions that data from 379 patients were used. Were any other molecular criteria used to shortlist these 379 slides?

Thanks in advance.
