pathflowai's Introduction

Welcome to PathFlowAI

A Convenient High-Throughput Workflow for Preprocessing, Deep Learning Analytics and Interpretation in Digital Pathology

🏠 Homepage

Published in the Proceedings of the Pacific Symposium on Biocomputing 2020. Manuscript: https://psb.stanford.edu/psb-online/proceedings/psb20/Levy.pdf

Install

First, install OpenSlide. Note: you may need to install libiconv and shapely using conda. More installation information will be added; in the meantime, please submit issues as well.

pip install pathflowai
install_apex

Usage

pathflowai-preprocess -h
pathflowai-train_model -h
pathflowai-monitor -h
pathflowai-visualize -h

See the Wiki for more information on setting up and running the workflow. Please submit feedback as issues, and let me know if there is any trouble with installation; I am more than happy to provide advice and fixes.

Author

👤 Joshua Levy

🤝 Contributing

Contributions, issues and feature requests are welcome!
Feel free to check the issues page.

Figures from the Paper

Fig. 1. PathFlowAI Framework: a) Annotations and whole slide images are preprocessed in parallel using Dask; b) Deep learning prediction model is trained on the data; c) Results are visualized; d) UMAP embeddings provide diagnostics; e) SHAP framework is used to find important regions for the prediction

Fig. 2. Comparison of PathFlowAI to Preprocessing WSI in Series for: a) Preprocessing time, b) Storage Space, c) Impact on the filesystem. The PathFlowAI method of parallel processing followed by centralized storage saves both time and storage space

Fig. 3. Segmentation: Original (a) Annotations Compared to Predicted (b) Annotations; (c) Pathologist annotations guided by the classification model

Fig. 4. Portal Classification Results: a) Darker tiles indicate a higher assigned probability of portal classification, b) AUC-ROC curves for the test images that estimate overall accuracy given different sensitivity cutoffs, c) H&E patch (left) with corresponding SHAP interpretations (right) for four patches; the probability value of portal classification is shown, and on the SHAP value scale, red indicates regions that the model attributes to portal prediction, d) Model trained UMAP embeddings of patches colored by original portal coverage (area of patch covered by portal) as judged by pathologist and visualization of individual patches

pathflowai's People

Contributors

dependabot[bot], jlevy44, sumanthratna


pathflowai's Issues

pretrained model weights

Hello and thank you very much for sharing your work!
I was wondering whether you have provided checkpoints for running inference without training.
If not, would it be possible for you to share those files and the command for making the prediction?
Thank you very much in advance!
Lucía

Issues with segmentation training

After successfully running the preprocessing (assuming your response to the other thread confirms it is alright), I ran into the following problem when running the training:

!CUDA_VISIBLE_DEVICES=0 pathflowai-train_model train_model --prediction --patch_size 512 -pr 224 --save_location outcomes_model.pkl -a resnet34 --input_dir /project/PFAI_inputs -nt 1 -t 10000 -lr 1e-4 -ne 10 -ss 0.5 -ssv 0.3 -tt 0.1 -bt 0.01 -imb -pi patch_information.db -bs 32 -ca
Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.
	Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.

So, based on the linked discussion, I ran export MKL_SERVICE_FORCE_INTEL=1 and then the command below, but got an error:

(saturn) jovyan@jupyter-assafmagen-2dpathflowai:~/project$ CUDA_VISIBLE_DEVICES=0 pathflowai-train_model train_model --prediction --patch_size 512 -pr 224 --save_location outcomes_model.pkl -a resnet34 --input_dir /project/PFAI_inputs/ -nt 1 -t 10000 -lr 1e-4 -ne 10 -ss 0.5 -ssv 0.3 -tt 0.1 -bt 0.01 -imb -pi patch_information.db -bs 32 -ca
nonechucks may not work properly with this version of PyTorch (1.5.0). It has only been tested on PyTorch versions 1.0, 1.1, and 1.2
/srv/conda/envs/saturn/lib/python3.6/site-packages/pathflowai/utils.py:605: SettingWithCopyWarning:


A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy

()
Traceback (most recent call last):
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/fsspec/mapping.py", line 76, in __getitem__
    result = self.fs.cat(k)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/fsspec/spec.py", line 587, in cat
    return self.open(path, "rb").read()
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/fsspec/spec.py", line 774, in open
    **kwargs
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/fsspec/implementations/local.py", line 108, in _open
    return LocalFileOpener(path, mode, fs=self, **kwargs)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/fsspec/implementations/local.py", line 175, in __init__
    self._open()
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/fsspec/implementations/local.py", line 180, in _open
    self.f = open(self.path, mode=self.mode)
FileNotFoundError: [Errno 2] No such file or directory: '/project/PFAI_inputs/Li63N2DCLAMP.zarr/.zarray'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/zarr/core.py", line 150, in _load_metadata_nosync
    meta_bytes = self._store[mkey]
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/fsspec/mapping.py", line 80, in __getitem__
    raise KeyError(key)
KeyError: '.zarray'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/srv/conda/envs/saturn/bin/pathflowai-train_model", line 8, in <module>
    sys.exit(train())
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/pathflowai/model_training.py", line 309, in train_model
    train_model_(training_opts)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/pathflowai/model_training.py", line 37, in train_model_
    norm_dict = get_normalizer(training_opts['normalization_file'], dataset_opts)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/pathflowai/datasets.py", line 168, in get_normalizer
    dataset = DynamicImageDataset(**dataset_opts)#nc.SafeDataset(DynamicImageDataset(**dataset_opts))
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/pathflowai/datasets.py", line 320, in __init__
    self.slides = {slide:da.from_zarr(join(input_dir,'{}.zarr'.format(slide))) for slide in IDs}
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/pathflowai/datasets.py", line 320, in <dictcomp>
    self.slides = {slide:da.from_zarr(join(input_dir,'{}.zarr'.format(slide))) for slide in IDs}
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/dask/array/core.py", line 2842, in from_zarr
    z = zarr.Array(mapper, read_only=True, path=component, **kwargs)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/zarr/core.py", line 124, in __init__
    self._load_metadata()
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/zarr/core.py", line 141, in _load_metadata
    self._load_metadata_nosync()
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/zarr/core.py", line 152, in _load_metadata_nosync
    err_array_not_found(self._path)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/zarr/errors.py", line 25, in err_array_not_found
    raise ValueError('array not found at path %r' % path)
ValueError: array not found at path ''

FYI, it did add the file project/train_val_test.pkl

Thanks
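
A quick sanity check for errors like the one above (assuming the zarr package is installed in the environment): if opening the store fails, the preprocessing step never wrote the array the trainer is trying to read, or the path/basename does not match.

import zarr

# If this raises, the store at this path was never written (or the basename is wrong).
z = zarr.open_array('/project/PFAI_inputs/Li63N2DCLAMP.zarr', mode='r')
print(z.shape, z.dtype)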

No such command 'train_model'

I encountered the following issue with running the training:

!pathflowai-train_model train_model -h
nonechucks may not work properly with this version of PyTorch (1.5.0). It has only been tested on PyTorch versions 1.0, 1.1, and 1.2
Usage: pathflowai-train_model [OPTIONS] COMMAND [ARGS]...
Try 'pathflowai-train_model -h' for help.

Error: No such command 'train_model'.

Sparse vs dense annotation

My pathologist performed an initial multi-class manual annotation of 5 slides via QuPath, selecting a few areas of interest of good quality for each category I wanted to characterize (basically prioritizing quality over quantity). I was assuming a more comprehensive annotation is needed for training, prioritizing quantity over quality. Please see a low-res example of an annotation he made. I know that the optimal annotation strategy and required slide quantity aren't known, but is the current strategy too sparse? Does the algorithm ignore the non-annotated regions, or does it treat them as background, such that missing an annotation of a tumor area, for example, would decrease the accuracy?

Thanks

Dask DataLoader Speed (2.0 feature)

Background: the dataloader slows down over time, especially when using a large number of slides. Data that is persistent in memory loads quickly (the case for a very small number of slides), but not when training from a large number of slides. There are issues with having .compute() within __getitem__(), yet the data augmentations (albumentations) still need to be applied to both the image and its mask for the semantic segmentation task when loading data, which can make the dataloading operation, if made more daskified, a bit more complex.

The issue is with __getitem__; once the data is loaded, it passes quickly through the DL model.
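
For context, a minimal sketch of the pattern being described — a torch Dataset backed by zarr-stored dask arrays, with .compute() inside __getitem__ and albumentations applied jointly to the image and mask. Class and variable names are illustrative, not PathFlowAI's actual implementation.

import dask.array as da
import albumentations as A
from torch.utils.data import Dataset

class LazyPatchDataset(Dataset):
    def __init__(self, slide_zarr, mask_zarr, patch_coords, patch_size=512):
        self.slide = da.from_zarr(slide_zarr)   # lazy (H, W, 3) slide array
        self.mask = da.from_zarr(mask_zarr)     # lazy (H, W) annotation mask
        self.coords = patch_coords              # list of (x, y) top-left corners
        self.size = patch_size
        self.aug = A.Compose([A.HorizontalFlip(p=0.5), A.RandomRotate90(p=0.5)])

    def __len__(self):
        return len(self.coords)

    def __getitem__(self, i):
        x, y = self.coords[i]
        s = self.size
        # The .compute() calls here are the bottleneck under discussion: every patch
        # read goes back to the zarr store instead of staying resident in memory.
        img = self.slide[x:x + s, y:y + s].compute()
        msk = self.mask[x:x + s, y:y + s].compute()
        out = self.aug(image=img, mask=msk)     # keeps image and mask aligned
        return out['image'], out['mask']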

Potentially nice ideas:

@lvaickus , can you comment more here?

@sumanthratna

Generating .npy input format for segmentation with a multi-channel tif

I collected some example data to start working with PFAI, but I need to find a workaround for the input formatting. I have a set of TIFF files where each file has 6 channels corresponding to (1) background, (2) tumor, (3) normal, (4) stroma, etc. The libraries I typically use, like cv2 and similar counterparts, handle only three channels, and the resources I found online seem to discuss only saving 4+ channel files rather than reading and converting them. Do you know how I can convert my input files to the requested .npy format for PFAI segmentation?

Thanks
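
One possible conversion sketch for the question above (an assumption, not PathFlowAI tooling): tifffile reads TIFFs with more than three channels, unlike cv2, and numpy then writes the .npy file. The filename is hypothetical, and whether PFAI expects the full channel stack or a single label mask per file should be confirmed first.

import numpy as np
import tifffile

arr = tifffile.imread('example_6channel.tif')   # hypothetical input file
# Depending on how the TIFF was written, channels may come first; normalize to (H, W, C).
if arr.ndim == 3 and arr.shape[0] == 6:
    arr = np.transpose(arr, (1, 2, 0))
np.save('example_mask.npy', arr)                # or save one .npy per channel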

Registration

Hello @jlevy44

Does this pipeline work with registration of different marker stains of the same tissue?
I noticed you encountered a problem similar to what I am experiencing with drop2, where the registered image output is in grayscale; to get the original RGB image warped, I need to apply the deformation fields using the griddata function, which doesn't scale to anything close to the size of pathology images. One solution is of course tiling, but the process of combining the overlaps will be very tedious. I hope you have some insight into this problem.

Thanks

Dask_image issue/bug

Hi @jlevy44

The same script I used successfully last week to preprocess the images isn't working now; it fails with the following error (which is not due to missing packages, as you can see from the first two install commands):

(saturn) jovyan@jupyter-assafmagen-2dpathflowai:~$ /srv/conda/envs/saturn/bin/pip install dask_image
Requirement already satisfied: dask_image in /srv/conda/envs/saturn/lib/python3.6/site-packages (0.2.0)
Requirement already satisfied: dask[array]>=0.16.1 in /srv/conda/envs/saturn/lib/python3.6/site-packages (from dask_image) (2.12.0)
Requirement already satisfied: scipy>=0.19.1 in /srv/conda/envs/saturn/lib/python3.6/site-packages (from dask_image) (1.4.1)
Requirement already satisfied: pims>=0.4.1 in /srv/conda/envs/saturn/lib/python3.6/site-packages (from dask_image) (0.4.1)
Requirement already satisfied: numpy>=1.11.3 in /srv/conda/envs/saturn/lib/python3.6/site-packages (from dask_image) (1.18.1)
Requirement already satisfied: toolz>=0.7.3; extra == "array" in /srv/conda/envs/saturn/lib/python3.6/site-packages (from dask[array]>=0.16.1->dask_image) (0.10.0)
Requirement already satisfied: slicerator>=0.9.7 in /srv/conda/envs/saturn/lib/python3.6/site-packages (from pims>=0.4.1->dask_image) (1.0.0)
Requirement already satisfied: six>=1.8 in /srv/conda/envs/saturn/lib/python3.6/site-packages (from pims>=0.4.1->dask_image) (1.14.0)
(saturn) jovyan@jupyter-assafmagen-2dpathflowai:~$ pathflowai-preprocess preprocess_pipeline -odb patch_information.db --preprocess --patches --basename Li63NDCLAMP --input_dir /home/jovyan/project/PFAI_inputs --patch_size 512 --intensity_threshold 45. -tc 7 -t 0.05
nonechucks may not work properly with this version of PyTorch (1.5.0). It has only been tested on PyTorch versions 1.0, 1.1, and 1.2
Traceback (most recent call last):
  File "/srv/conda/envs/saturn/bin/pathflowai-preprocess", line 8, in <module>
    sys.exit(preprocessing())
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/pathflowai/cli_preprocessing.py", line 83, in preprocess_pipeline
    transpose_annotations=transpose_annotations)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/pathflowai/utils.py", line 322, in run_preprocessing_pipeline
    arr, masks = load_process_image(svs_file, xml_file, npy_mask, annotations, transpose_annotations)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/pathflowai/utils.py", line 267, in load_process_image
    arr = load_image(svs_file)#npy2da(svs_file) if (svs_file.endswith('.npy') or svs_file.endswith('.h5')) else svs2dask_array(svs_file, tile_size=1000, overlap=0)#load_image(svs_file)
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/pathflowai/utils.py", line 238, in load_image
    return (npy2da(svs_file) if (svs_file.endswith('.npy') or svs_file.endswith('.h5')) else svs2dask_array(svs_file, tile_size=1000, overlap=0))
  File "/srv/conda/envs/saturn/lib/python3.6/site-packages/pathflowai/utils.py", line 131, in svs2dask_array
    return dask_image.imread.imread(svs_file)
NameError: name 'dask_image' is not defined

Is this due to a recent update? (I install the package in each new session, so I always pull the most current version.) What's the fastest way to address this?

Thanks
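
For what it's worth, the NameError points to dask_image being used inside pathflowai.utils without being imported there, rather than to a missing package. A hypothetical workaround (not the maintainers' fix), usable only when driving the pipeline from Python in the same process rather than through the CLI, is to inject the missing name into the module before calling it; otherwise, patching the import at the top of utils.py or reverting to the previously working version would be needed.

import dask_image.imread
import pathflowai.utils as pfu

pfu.dask_image = dask_image  # makes the name resolvable inside svs2dask_array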

Testing example

Is there a small example dataset on which I can test if my environment is set up correctly?
Also, when running the preprocess command I get the warning "nonechucks may not work properly with this version of PyTorch (1.5.0). It has only been tested on PyTorch versions 1.0, 1.1, and 1.2". Is this a big issue?

Preprocessing status and outputs

Hi @jlevy44

How do I check the status of the preprocessing procedure during and after execution, and what output files should I expect to see?

The following ran for a couple of seconds and just printed '512' at the end; I don't see any new files in the directory specified here.

import os

command = 'pathflowai-preprocess preprocess_pipeline \
          -odb patch_information.db \
          --preprocess \
          --patches \
          --basename ' + stainID +'/ \
          --input_dir ' + PFAI_dir + ' \
          --patch_size 512 \
          --intensity_threshold 45. \
          -tc 7 \
          -t 0.05'
print(command)
os.system(command)

Output:

pathflowai-preprocess preprocess_pipeline           -odb patch_information.db           --preprocess           --patches           --basename Li63NDCLAMP/           --input_dir PFAI_inputs           --patch_size 512           --intensity_threshold 45.           -tc 7           -t 0.05
512

And these are the input files I have in that folder (just one sample for now to test if it works):
(screenshot of the input files)

And to make sure my plan is compatible with this function: I plan to run it as the last step in a loop that processes each slide from ndpi separately and prepares the input to PFAI. That assumes the PFAI preprocess command will concatenate the data when it is called on new slides. I'll just remove the '--preprocess' flag after the first iteration. In that context, every time I run the command with --preprocess, am I basically instructing it to redefine the database? Does it delete the old one? And where are they stored?
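
A sketch of the loop described above, assuming subprocess is used in place of os.system and that each slide's basename is known in advance. The expected outputs (a per-slide .zarr store plus the shared patch_information.db SQLite file) are inferred from the tracebacks in the other issues, not from documentation, and whether --preprocess should be dropped after the first slide is exactly the open question above.

import os
import subprocess

PFAI_dir = 'PFAI_inputs'            # hypothetical input directory
slides = ['Li63NDCLAMP']            # hypothetical list of slide basenames

for stainID in slides:
    cmd = ['pathflowai-preprocess', 'preprocess_pipeline',
           '-odb', 'patch_information.db',
           '--preprocess', '--patches',
           '--basename', stainID,
           '--input_dir', PFAI_dir,
           '--patch_size', '512',
           '--intensity_threshold', '45.',
           '-tc', '7', '-t', '0.05']
    subprocess.run(cmd, check=True)  # raises if the CLI exits non-zero

    # Per-slide output one would expect based on the other issues:
    zarr_store = os.path.join(PFAI_dir, stainID + '.zarr')
    print(zarr_store, 'written' if os.path.exists(zarr_store) else 'MISSING')

print('patch DB present:', os.path.exists('patch_information.db'))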
