
impartial's Introduction


Interactive deep learning whole-cell image segmentation and thresholding using partial annotations

Read Link | Report Bug | Request Feature

Segmenting noisy multiplex spatial tissue images is challenging, since the characteristics of both the noise and the biology being imaged differ significantly across tissues and modalities; this is compounded by the high monetary and time costs of manual annotation. It is therefore important to create algorithms that can accurately segment noisy images from a small number of expert annotations. With ImPartial, we have developed an interactive deep learning algorithm that performs segmentation using as few as 2-3 training images with minimal user-provided scribbles. ImPartial augments the segmentation objective via self-supervised multi-channel quantized imputation, meaning that each class of the segmentation objective is characterized by a mixture of distributions. This is based on the observation that perfect pixel-wise reconstruction or denoising of the image is not needed for accurate segmentation, so a self-supervised classification objective that better aligns with the overall segmentation goal suffices. We demonstrate the superior performance of our approach on a variety of datasets acquired with different highly-multiplexed imaging platforms. ImPartial has been optimized to train in less than 5 minutes on a low-end GPU. With MONAI-Label integration, cloud (Amazon Web Services) deployment, and a user-friendly ImageJ/Fiji plugin, ImPartial can be run iteratively on a new user-uploaded dataset in an active learning, human-in-the-loop framework with no-code execution. A multi-user support scheme is also deployed, allowing users to sign up/authenticate and use our cloud resources simultaneously, with capabilities to end/restore user sessions and to develop/share new models with the wider community as needed (hopefully resulting in a future ImPartial-driven marketplace where users can share their models with the wider community while being properly credited for their work).

Pipeline

unet_arch

Each image patch is separated into an imputation patch and a blind-spot patch. The blind-spot patch is fed through the U-Net to recover the component mixture and the component statistics. The latter are averaged across the entire patch to enforce component consistency. Both the component statistics and the component mixture are used to compute the mixture loss for the patch. Simultaneously, a scribble containing a small number of ground-truth segmentations for the patch is used to compute the scribble loss. Both losses propagate gradients back to the U-Net on the backward pass. Additional scribbles can be added to fine-tune the model trained in the previous iteration. Uncertainty maps are computed and shown to guide the user to provide additional scribbles in high-uncertainty regions.
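As a rough illustration of the two objectives above, here is a minimal NumPy sketch of a per-pixel Gaussian mixture loss and a scribble loss. The shapes, names, and the Gaussian choice are our illustrative assumptions, not the actual ImPartial implementation:

```python
import numpy as np

def mixture_loss(pixels, weights, means, stds):
    """Negative log-likelihood of blind-spot pixel values under a
    per-pixel Gaussian mixture (illustrative sketch of the imputation
    objective; shapes and parameterization are assumptions).

    pixels:  (N,)    blind-spot pixel intensities
    weights: (N, K)  mixture weights per pixel (rows sum to 1)
    means:   (N, K)  component means
    stds:    (N, K)  component standard deviations
    """
    norm = stds * np.sqrt(2.0 * np.pi)
    comp = weights * np.exp(-0.5 * ((pixels[:, None] - means) / stds) ** 2) / norm
    return -np.mean(np.log(comp.sum(axis=1) + 1e-12))

def scribble_loss(probs, scribbles):
    """Cross-entropy computed only on scribble-annotated pixels.

    probs:     (H, W, C) softmax class probabilities
    scribbles: (H, W) integer class labels, -1 where unannotated
    """
    mask = scribbles >= 0
    # boolean mask selects annotated pixels; integer array picks their class
    picked = probs[mask, scribbles[mask]]
    return -np.mean(np.log(picked + 1e-12))
```

In training, the two terms would be combined (e.g. `mixture_loss(...) + lam * scribble_loss(...)`) and backpropagated through the U-Net.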

ImPartial, MONAI-Label & Fiji Integration

We have transitioned from a research workflow to a production-ready environment, wherein the user can upload images, provide scribbles, and run deep-learning-based model training and inference. We use the following three components to provide an end-to-end service:

  1. ImageJ/Fiji - This acts as the client, with a user-friendly interface. Users can add, delete, or modify annotations on the uploaded image dataset. We opted for the ImageJ/Fiji interface due to its easy extensibility and large user base (100,000+ active users).

  2. MONAI-Label - For the backend, we used MONAI-Label, a PyTorch-based open-source framework for deep learning in medical imaging. It provides an out-of-the-box interface to plug in the ImPartial deep learning pipeline via a RESTful API that ties together training, inference, and an active learning iterative sample-selection strategy. Active learning approach: MONAI-Label supports an active-learning-based approach for users to iteratively train and fine-tune models. We use uncertainty maps to show users the quality of the results every few epochs.

  3. Amazon Web Services (AWS) Cloud Deployment - We deployed ImPartial on the AWS cloud platform, with the MONAI-Label backend, as a service that supports multiple users simultaneously.

pipeline_impartial_fig

This workflow diagram illustrates the interactive and iterative nature of the ImPartial pipeline, allowing users to actively contribute to the segmentation model's improvement through annotation and fine-tuning. The combination of user input and deep learning enables more accurate and adaptive whole-cell image segmentation.

  1. Setup: The user connects the ImPartial plugin in the Fiji app to an ImPartial endpoint or a local server running MONAI-Label as its core backend service, then uploads images, which are stored on our cloud storage system (such as Amazon S3) and in a backend MONAI datastore.

  2. Scribbles: For each uploaded image, the user uses Fiji's draw tool to manually mark cell boundaries for a small number of cells, providing initial guidance to the segmentation algorithm.

  3. Submit Scribbles: Once the cell boundaries are marked, the user submits the annotations (scribbles) to the system. (3.1) The scribbles are linked and stored alongside the original images. (3.2) Training configuration: the user can configure the training job by tuning hyper-parameters such as the number of epochs, the learning rate, and other relevant parameters.

  4. Initiate Training Job: With the training parameters set, the user initiates an asynchronous training job that uses the annotated data alongside image denoising to train a segmentation model. Training progress can be monitored in real time via the plugin. (4.1) Model update: during training, multiple image-segmentation metrics are logged and the newly trained, better-performing model is stored. (4.2) Model inference: since the ImPartial workflow is asynchronous, inference can be run at any time during or after training to obtain cell-segmentation predictions on new, unlabeled data.

  5. Visualization of Results: The user can visualize the results of the segmentation model, viewing the provided images, scribbles, model predictions, and entropy (uncertainty) maps simultaneously on a single canvas. This aids in understanding the model's performance and identifying areas of high uncertainty in the segmentation.

  6. Iterative Refinement: Finally, users can add further scribbles or annotations based on the visualization results. With the new annotations, training is re-initiated, fine-tuning the existing model on the new data.
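The entropy (uncertainty) maps used to guide further scribbling can be sketched from the per-pixel class probabilities. This is an illustrative computation, not necessarily how ImPartial derives its maps internally (which may, for example, aggregate multiple stochastic predictions):

```python
import numpy as np

def entropy_map(probs):
    """Per-pixel predictive entropy from softmax probabilities.

    probs: (H, W, C) class probabilities; returns an (H, W) map where
    higher values flag uncertain regions worth additional scribbles.
    """
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -(p * np.log(p)).sum(axis=-1)
```

Pixels with near-uniform class probabilities attain the maximum entropy `log(C)`, while confidently classified pixels approach zero.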

ImPartial Installation

MONAI Label

Pre-requisites

  • Python 3

Install Python dependencies in a virtual environment using pip

python3 -m venv venv
source venv/bin/activate
pip install -U pip && pip install -r requirements.txt

Run MONAI-Label app

cd impartial
monailabel start_server -a api -s <data-dir>

and navigate to http://localhost:8000 to access the Swagger UI for interactive API exploration.
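Beyond the Swagger UI, the server can be driven programmatically. The helpers below build HTTP requests against the generic MONAI-Label route layout (`POST /train/{model}`, `POST /infer/{model}?image={id}`); the model name `impartial` and exact query parameters are assumptions, so check the Swagger UI for the routes this app actually exposes:

```python
from urllib import request

BASE = "http://localhost:8000"  # the server started above

def train_request(model="impartial", base=BASE):
    # MONAI-Label exposes POST /train/{model} to start a training job;
    # the model name here is an assumption for this app.
    return request.Request(f"{base}/train/{model}", method="POST")

def infer_request(model, image_id, base=BASE):
    # POST /infer/{model}?image={id} runs inference on a stored image.
    return request.Request(f"{base}/infer/{model}?image={image_id}", method="POST")
```

Sending these with `urllib.request.urlopen(...)` (while the server is running) returns JSON payloads describing the job or prediction.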

MONAI-Label in Docker

Build the docker image

docker build -t monailabel/impartial .

Run the image built above

docker run -d --name impartial -p 8000:8000 monailabel/impartial monailabel start_server -a api -s /opt/monai/data

and navigate to http://localhost:8000

ImageJ/Fiji Plugin

Pre-requisites

  • Apache Maven (used below to build the plugin)

First, package the plugin. From the repo root directory

cd imagej-plugin
mvn clean package

and copy the .jar file into Fiji's plugins directory. For example, if you're using macOS

cp target/impartial_imagej-0.1.jar /Applications/Fiji.app/plugins

then restart Fiji and open ImPartial from the Plugins menu bar.

No-Code Cloud Execution

For a ready-to-use ImPartial plugin, users can download a pre-compiled .jar file here. With this option, you can skip setting up Maven and compiling the package; copy the .jar file directly into the Fiji plugins folder mentioned above.

Users can request access to our cloud-deployed MONAI-Label server to annotate and segment their data without needing to compile or run any code locally.

A detailed guide for the Fiji plugin can be found here.

Issues

Please report all issues on the public forum.

License

© Nadeem Lab - ImPartial code is distributed under Apache 2.0 with Commons Clause license, and is available for non-commercial academic purposes.

Reference

If you find our work useful in your research or if you use parts of this code, please cite our paper:

@article{Martinez2021.01.20.427458,
	author = {Martinez, Natalia and Sapiro, Guillermo and Tannenbaum, Allen and Hollmann, Travis J. and Nadeem, Saad},
	title = {ImPartial: Partial Annotations for Cell Instance Segmentation},
	elocation-id = {2021.01.20.427458},
	year = {2021},
	doi = {10.1101/2021.01.20.427458},
	publisher = {Cold Spring Harbor Laboratory}
}


impartial's Issues

Plugin Error while running Inference

While running inference, the plugin gives a 504 error. The error pops up after Infer runs on a few images, and then inference stops. The error is independent of the image. The behavior is random, but it is reproducible with .png format images on the cloud server, typically when the dataset has more than 8-9 images. The error is seen only on the cloud server, not on a local deployment.

Evaluation of Impartial model performance within Monai:

ImPartial is a semi-supervised deep learning method that performs whole-cell segmentation using a minimal number of scribbles from an expert pathologist.
There are two ways to evaluate the performance of ImPartial models during and after training:

  1. Human-expert-in-the-loop feedback, where the user reviews the inference results from the model, identifies the erroneous cells, provides additional scribbles, and starts model training again. (This is how ImPartial currently works with the human in the loop.)

  2. To quantitatively measure the model's performance: pathologists often have "fully annotated" images alongside their unlabeled data. These fully labelled test images can be used to track model performance. Hence, we wish to incorporate standard evaluation metrics into our ImPartial-MONAI-Label framework.

    Currently, MONAI-Label supports "image" (upload_image) and "label" (save_label) APIs; we use them to submit the "image" and the "scribbles" respectively. Essentially, we use the "label" attribute to submit the "scribbles".

    • We were wondering whether there is a way to submit a "ground truth" label, whether another attribute like "scribble" could be added, or whether the "tag" parameter of the "save_label" API could be utilized?
      For example:
          "image2": {
                "image": {
                  "ext": ".tif",
                  "info": {
                    "name": "image2.tif"
                  }
                },
                "labels": {
                  "gt": {
                    "ext": ".zip",
                    "info": {
                      "name": "image2_gt.zip"
                    }
                  },
                  "scribble": {
                    "ext": ".zip",
                    "info": {
                      "name": "image2_scribble.zip"
                    }
                  }
         }
    • Add an API like "save_ground_truth()"
    • Add an API like "evaluate(image, ground_truth)" or "evaluate()"?

can't start monailabel server using `-a api`

Hi!
After I run:

cd impartial
monailabel start_server -a api -s ~/ImPartial/Data/Vectra_WC_2CH_tiff/

I get error:

[2022-11-22 09:48:21,068] [1509998] [MainThread] [INFO] (uvicorn.error:75) - Started server process [1509998]
[2022-11-22 09:48:21,068] [1509998] [MainThread] [INFO] (uvicorn.error:45) - Waiting for application startup.
[2022-11-22 09:48:21,069] [1509998] [MainThread] [INFO] (monailabel.interfaces.utils.app:38) - Initializing App from: /home/hju/ImPartial/impartial/api; studies: /home/hju/ImPartial/Data/Vectra_WC_2CH_tiff; conf: {}
[2022-11-22 09:48:21,092] [1509998] [MainThread] [INFO] (monailabel.utils.others.class_utils:37) - Subclass for MONAILabelApp Found: <class 'main.Impartial'>
[2022-11-22 09:48:21,105] [1509998] [MainThread] [ERROR] (uvicorn.error:119) - Traceback (most recent call last):
  File "/home/hju/anaconda3/envs/monailabel-impartial-env/lib/python3.9/site-packages/starlette/routing.py", line 635, in lifespan
    async with self.lifespan_context(app):
  File "/home/hju/anaconda3/envs/monailabel-impartial-env/lib/python3.9/site-packages/starlette/routing.py", line 530, in __aenter__
    await self._router.startup()
  File "/home/hju/anaconda3/envs/monailabel-impartial-env/lib/python3.9/site-packages/starlette/routing.py", line 612, in startup
    await handler()
  File "/home/hju/anaconda3/envs/monailabel-impartial-env/lib/python3.9/site-packages/monailabel/app.py", line 104, in startup_event
    instance = app_instance()
  File "/home/hju/anaconda3/envs/monailabel-impartial-env/lib/python3.9/site-packages/monailabel/interfaces/utils/app.py", line 51, in app_instance
    app = c(app_dir=app_dir, studies=studies, conf=conf)
  File "/home/hju/ImPartial/impartial/api/main.py", line 25, in __init__
    for c in get_class_names(lib.configs, "TaskConfig"):
  File "/home/hju/anaconda3/envs/monailabel-impartial-env/lib/python3.9/site-packages/monailabel/utils/others/class_utils.py", line 144, in get_class_names
    module = importlib.import_module("." + name, package=current_module_name)
  File "/home/hju/anaconda3/envs/monailabel-impartial-env/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/home/hju/ImPartial/impartial/api/lib/configs/impartial.py", line 5, in <module>
    import lib.infers
  File "/home/hju/ImPartial/impartial/api/lib/infers/__init__.py", line 1, in <module>
    from .impartial import Impartial
  File "/home/hju/ImPartial/impartial/api/lib/infers/impartial.py", line 25, in <module>
    from monailabel.interfaces.tasks.infer_v2 import InferType
ModuleNotFoundError: No module named 'monailabel.interfaces.tasks.infer_v2'

I pip-installed monailabel and the other packages in requirements.txt within a conda environment: when I tried to follow the Python virtual environment setup in the readme https://github.com/nadeemlab/ImPartial#monai-label, openslide-python couldn't be installed, and according to https://openslide.org/api/python/#installing it seems better to install openslide using a package manager like Anaconda.

I've done some testing on the monailabel installation. E.g., I can start a monailabel server if using a monailabel sample pathology application:

monailabel start_server -a apps/pathology/ -s ~/ImPartial/Data/Vectra_WC_2CH_tiff/

Also, using monailabel radiology sample application together with 3DSlicer also works. Any ideas?

Deploy ImPartial as a service

After implementing the original ImPartial pipelines as a MONAI-Label app and developing an ImageJ/Fiji plugin that interacts with this API, we now need to define the AWS infrastructure that would allow multiple users to benefit from the system.

Some of the requirements are:

  • The service is publicly available but with restricted access. The restrictions are not completely defined yet, but some ideas are to limit the number of iterations to 3, which is equivalent to giving the user access to around 300 training epochs. Another restriction would be to limit availability to 2 hours per session.
  • The user interacting with ImPartial would have access to a dedicated GPU resource within the restrictions mentioned above.
  • The user will upload their dataset and submit annotations through the ImageJ plugin.
  • Once the session is over, the user will be able to download the labels for the full dataset and the last checkpoint of the trained model.
  • In exchange for this free resource, we (ImPartial) will store all of the uploaded dataset, submitted labels and trained model.

Error Loading images on ImageJ plugin when running on 10.0.3.117 server

Running the MONAI server with the DAPI dataset images. Connected and ran the latest ImageJ plugin.
The plugin doesn't show any raw images in the drop-down and gives the following error:

WARNING] Ignoring unsupported output: dialog [org.nadeemlab.impartial.ImpartialDialog]
Exception in thread "AWT-EventQueue-0" java.security.PrivilegedActionException: java.security.PrivilegedActionException: org.json.JSONException: JSONObject["id"] not found.
	at java.security.AccessController.doPrivileged(Native Method)
	at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:74)
	at java.awt.EventQueue.dispatchEvent(EventQueue.java:730)
	at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:205)
	at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116)
	at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105)
	at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
	at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
	at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)
Caused by: java.security.PrivilegedActionException: org.json.JSONException: JSONObject["id"] not found.
	at java.security.AccessController.doPrivileged(Native Method)
	at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:74)
	at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:84)
	at java.awt.EventQueue$4.run(EventQueue.java:733)
	at java.awt.EventQueue$4.run(EventQueue.java:731)


interactive annotation & install ImageJ plugin

Hi there!
Really appreciate you put much effort in building ImPartial!

I want to try out ImPartial on my digital pathology data, and I especially want to use the interactive annotation feature. Is this feature available now, or is it still under active development?

I'm following the Monai-label and Impartial Integration to set up everything. In Client - Install ImageJ plugin, the command git checkout feat/imagej-plugin outputs error: pathspec 'feat/imagej-plugin' did not match any file(s) known to git, and from git branch -a it seems this branch is not there.

Besides, I can't find the impartial_imagej-0.1.jar used in:

Copy the latest built ImageJ plugin from 'ImPartial/imagej-plugin/target/impartial_imagej-0.1.jar' to '/Applications/Fiji.app/plugins'

demo for Fiji plugin

Hi!
If you could have a demo recording showing how to use Fiji plugin and MONAI Label server (especially the DeepEdit interactive annotation functionality), that would be great!

How to start from weak annotations in numpy arrays?

Hello,

I am interested in ImPartial, but not sure if it will solve my problem. I found the documentation a little difficult to follow, so I would appreciate a recommended workflow.

I have weak annotations made by color thresholding for bright-field images. I am trying to enhance these labels. Can I use them as input instead of scribbles? Also, will I be able to interact with the output after that?

Thanks in advance!

Training time optimization

As an interactive service, ImPartial should allow users to train the model on their datasets efficiently, ideally in under 5 minutes on a single GPU. This training time is measured using 100 epochs and ~4000 sample patches per iteration.

As of today, it takes around 15 minutes with the configuration mentioned above on a 4-GPU machine.

sample images for using Fiji plugin

Hi!
Could you point me to some sample digital pathology images to download so that I can try out the Fiji plugin? My own current data is in .svs format and can't be opened by Fiji.

Questions new user

Hi, thanks for this great model.
However I have some questions.

How do you provide multi-class ground-truth labels? Do you use different grayscale levels on a single image to represent the different labels, or does each training image need a ground-truth image per label (one for nuclei and one for cytoplasm)?

What input image size is needed? I believe 512×512, but can it be bigger?

Why does one need to build the Fiji plugin locally, rather than installing it directly from a Fiji update site?

Thanks a lot for your help.
