
qupath-extension-wsinfer's Introduction


QuPath

QuPath is open source software for bioimage analysis.

Features include:

  • Lots of tools to annotate and view images, including whole slide & microscopy images
  • Workflows for brightfield & fluorescence image analysis
  • New algorithms for common tasks, including cell segmentation, tissue microarray dearraying
  • Interactive machine learning for object & pixel classification
  • Customization, batch-processing & data interrogation by scripting
  • Easy integration with other tools, including ImageJ

To download QuPath, go to the Latest Releases page.

For documentation, see https://qupath.readthedocs.io

For help & support, try image.sc or the links here

To build QuPath from source, see here.

If you find QuPath useful in work that you publish, please cite the publication!

QuPath is an academic project intended for research use only. The software has been made freely available under the terms of the GPLv3 in the hope it is useful for this purpose, and to make analysis methods open and transparent.

Development & support

QuPath is being actively developed at the University of Edinburgh by:

Past QuPath dev team members:

  • Melvin Gelbard
  • Mahdi Lamb

For all contributors, see here.

This work is made possible in part thanks to funding from:


Background

QuPath was first designed, implemented and documented by Pete Bankhead while at Queen's University Belfast, with additional code and testing by Jose Fernandez.

Versions up to v0.1.2 are copyright 2014-2016 The Queen's University of Belfast, Northern Ireland. These were written as part of projects that received funding from:

  • Invest Northern Ireland (RDO0712612)
  • Cancer Research UK Accelerator (C11512/A20256)


qupath-extension-wsinfer's People

Contributors

  • alanocallaghan
  • finglis
  • kaczmarj
  • petebankhead


qupath-extension-wsinfer's Issues

extension not using GPU (but deep java library example is using GPU)

I am yet again battling with GPU support on Windows. It's a real shame, because it was working a few weeks ago and now it is not. I am not sure what changed on my machine, but I'm debugging now. This may become a long post, but I figured I'd document everything in case it helps anyone else (or me in a few weeks 😄).

First, I tested that a Deep Java Library example can run and uses the GPU. I used the code below, adapted from https://docs.djl.ai/jupyter/load_pytorch_model.html. I ran all of this in the QuPath script editor, after installing the PyTorch DJL engine via QuPath. It ran and used the GPU; I monitored GPU utilization in Task Manager.

Groovy code for running the DJL example:
// from https://docs.djl.ai/jupyter/load_pytorch_model.html
import java.nio.file.*;
import java.awt.image.*;
import ai.djl.*;
import ai.djl.inference.*;
import ai.djl.modality.*;
import ai.djl.modality.cv.*;
import ai.djl.modality.cv.util.*;
import ai.djl.modality.cv.transform.*;
import ai.djl.modality.cv.translator.*;
import ai.djl.repository.zoo.*;
import ai.djl.translate.*;
import ai.djl.training.util.*;

DownloadUtils.download("https://djl-ai.s3.amazonaws.com/mlrepo/model/cv/image_classification/ai/djl/pytorch/resnet/0.0.1/traced_resnet18.pt.gz", "resnet18.pt", new ProgressBar());

DownloadUtils.download("https://djl-ai.s3.amazonaws.com/mlrepo/model/cv/image_classification/ai/djl/pytorch/synset.txt", "synset.txt", new ProgressBar());

translator = ImageClassificationTranslator.builder()
        .addTransform(new Resize(256))
        .addTransform(new CenterCrop(224, 224))
        .addTransform(new ToTensor())
        .addTransform(new Normalize(
            new float[] {0.485f, 0.456f, 0.406f},
            new float[] {0.229f, 0.224f, 0.225f}))
        .optApplySoftmax(true)
        .build();
        
criteria = Criteria.builder()
        .setTypes(Image.class, Classifications.class)
        .optModelPath(Paths.get("resnet18.pt"))
        .optOption("mapLocation", "true") // this model requires mapLocation for GPU
        .optTranslator(translator)
        .optProgress(new ProgressBar()).build();

model = criteria.loadModel();

var img = ImageFactory.getInstance().fromUrl("https://raw.githubusercontent.com/pytorch/hub/master/images/dog.jpg");
predictor = model.newPredictor();

// run inference many times and watch gpu utilization.
for (x in (1..1000)) {
    classifications = predictor.predict(img);
}

After I found out that this worked, I tried updating the WSInfer QuPath extension to add some logging and figure out which device is being used. Interestingly, CPU was being used...

I will update as I debug further.

Support offline use

Currently (v0.1.0) the extension always requires an internet connection. If it is unable to query the models online, the stage can't even be initialized.

This requirement should be relaxed so that the extension works when offline, provided models have been downloaded and are available.

My suggestion would be to cache any relevant files alongside the models. The same cache can also be used for the SHA checking (storing the relevant checksum file with the model, rather than relying upon timestamps).
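One possible shape for this, as a minimal sketch: check for a cached copy of the model index stored alongside the downloaded models, and only hit the network when online. The class, file name (`wsinfer-index.json`), and fetch method here are hypothetical, not the extension's actual API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch: prefer a cached model index stored alongside the
// downloaded models, falling back to the network only when online.
public class CachedModelIndex {

    /** Returns the model index JSON, serving the local cache when offline. */
    static String loadIndex(Path modelDir, boolean online) throws IOException {
        Path cached = modelDir.resolve("wsinfer-index.json"); // hypothetical cache file
        if (!online) {
            if (Files.exists(cached)) {
                return Files.readString(cached);
            }
            throw new IOException("Offline and no cached model index at " + cached);
        }
        String fresh = fetchIndexFromZoo();   // placeholder for the real download
        Files.createDirectories(modelDir);
        Files.writeString(cached, fresh);     // refresh the cache for next time
        return fresh;
    }

    // Stand-in for the real HTTP request to the model zoo.
    static String fetchIndexFromZoo() {
        return "{\"models\": []}";
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("wsinfer-demo");
        String first = loadIndex(dir, true);   // online: fetches and caches
        String second = loadIndex(dir, false); // offline: served from cache
        System.out.println(first.equals(second)); // prints "true"
    }
}
```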

ERROR: : Failed to load PyTorch native library

I'm getting a "Failed to load PyTorch native library" error. It seems to be trying to use ....djl.ai\pytorch\1.13.0-cu117-win-x86_64\torch_cuda_cpp.dll. Also, GPU does not show up as an option under "Preferred device:" in the QuPath WSInfer dialog. Is there any way to run this from a venv, for example, with better control over the torch/CUDA versions being installed and used? (Although I guess the locally installed torch/CUDA version isn't relevant, since everything is run by the djl.ai PyTorch engine.) I haven't tried different versions of http://djl.ai/engines/pytorch/pytorch-engine/, but this seems to be automatically installed/chosen by the extension; is that worth trying?

INFO: : Downloading PyTorch engine...
ERROR: Failed to load PyTorch native library
ai.djl.engine.EngineException: Failed to load PyTorch native library
    at ai.djl.pytorch.engine.PtEngine.newInstance(PtEngine.java:85)
    at ai.djl.pytorch.engine.PtEngineProvider.getEngine(PtEngineProvider.java:40)
    at ai.djl.engine.Engine.getEngine(Engine.java:186)
    at qupath.ext.wsinfer.ui.PytorchManager.lambda$getEngineOnline$1(PytorchManager.java:90)
    at qupath.ext.wsinfer.ui.PytorchManager.callWithTempProperty(PytorchManager.java:124)
    at qupath.ext.wsinfer.ui.PytorchManager.callOnline(PytorchManager.java:116)
    at qupath.ext.wsinfer.ui.PytorchManager.getEngineOnline(PytorchManager.java:90)
    at qupath.ext.wsinfer.ui.WSInferController$WSInferTask.call(WSInferController.java:392)
    at qupath.ext.wsinfer.ui.WSInferController$WSInferTask.call(WSInferController.java:369)
    at javafx.concurrent.Task$TaskCallable.call(Task.java:1426)
    at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.base/java.lang.Thread.run(Unknown Source)
  Caused by C:\Users\IT-bruker\.djl.ai\pytorch\1.13.0-cu117-win-x86_64\torch_cuda_cpp.dll: Den angitte prosedyren ble ikke funnet [Norwegian: "The specified procedure could not be found"]
        at java.base/jdk.internal.loader.NativeLibraries.load(Native Method)
        at java.base/jdk.internal.loader.NativeLibraries$NativeLibraryImpl.open(Unknown Source)
        at java.base/jdk.internal.loader.NativeLibraries.loadLibrary(Unknown Source)
        at java.base/jdk.internal.loader.NativeLibraries.loadLibrary(Unknown Source)
        at java.base/java.lang.ClassLoader.loadLibrary(Unknown Source)
        at java.base/java.lang.Runtime.load0(Unknown Source)
        at java.base/java.lang.System.load(Unknown Source)
        at ai.djl.pytorch.jni.LibUtils.loadNativeLibrary(LibUtils.java:368)
        at ai.djl.pytorch.jni.LibUtils.loadLibTorch(LibUtils.java:174)
        at ai.djl.pytorch.jni.LibUtils.loadLibrary(LibUtils.java:78)
        at ai.djl.pytorch.engine.PtEngine.newInstance(PtEngine.java:54)
        at ai.djl.pytorch.engine.PtEngineProvider.getEngine(PtEngineProvider.java:40)
        at ai.djl.engine.Engine.getEngine(Engine.java:186)
        at qupath.ext.wsinfer.ui.PytorchManager.lambda$getEngineOnline$1(PytorchManager.java:90)
        at qupath.ext.wsinfer.ui.PytorchManager.callWithTempProperty(PytorchManager.java:124)
        at qupath.ext.wsinfer.ui.PytorchManager.callOnline(PytorchManager.java:116)
        at qupath.ext.wsinfer.ui.PytorchManager.getEngineOnline(PytorchManager.java:90)
        at qupath.ext.wsinfer.ui.WSInferController$WSInferTask.call(WSInferController.java:392)
        at qupath.ext.wsinfer.ui.WSInferController$WSInferTask.call(WSInferController.java:369)
        at javafx.concurrent.Task$TaskCallable.call(Task.java:1426)
        at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.base/java.lang.Thread.run(Unknown Source)
WARN: Simple repository pointing to a non-archive file.
ERROR: : Failed to load PyTorch native library

Failed to download PyTorch native library

I've followed the installation steps for this extension, and when it comes to installing PyTorch, it fails with:

11:27:31.954	[wsinfer1]	ERROR	qupath.ext.wsinfer.ui.PytorchManager	Failed to download PyTorch native library	ai.djl.engine.EngineException: Failed to download PyTorch native library
	at ai.djl.pytorch.jni.LibUtils.downloadPyTorch(LibUtils.java:481)
	at ai.djl.pytorch.jni.LibUtils.findNativeLibrary(LibUtils.java:292)
	at ai.djl.pytorch.jni.LibUtils.getLibTorch(LibUtils.java:92)
	at ai.djl.pytorch.jni.LibUtils.loadLibrary(LibUtils.java:80)
	at ai.djl.pytorch.engine.PtEngine.newInstance(PtEngine.java:53)
	at ai.djl.pytorch.engine.PtEngineProvider.getEngine(PtEngineProvider.java:40)
	at ai.djl.engine.Engine.getEngine(Engine.java:190)
	at qupath.ext.wsinfer.ui.PytorchManager.lambda$getEngineOnline$1(PytorchManager.java:91)
	at qupath.ext.wsinfer.ui.PytorchManager.callWithTempProperty(PytorchManager.java:125)
	at qupath.ext.wsinfer.ui.PytorchManager.callOnline(PytorchManager.java:117)
	at qupath.ext.wsinfer.ui.PytorchManager.getEngineOnline(PytorchManager.java:91)
	at qupath.ext.wsinfer.ui.WSInferController$WSInferTask.call(WSInferController.java:548)
	at qupath.ext.wsinfer.ui.WSInferController$WSInferTask.call(WSInferController.java:509)
	at javafx.concurrent.Task$TaskCallable.call(Task.java:1426)
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.nio.file.AccessDeniedException: C:\Users\oburri\.djl.ai\pytorch\tmp6876596744470295850 -> C:\Users\oburri\.djl.ai\pytorch\2.0.1-cpu-win-x86_64
	at java.base/sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
	at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
	at java.base/sun.nio.fs.WindowsFileCopy.move(Unknown Source)
	at java.base/sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source)
	at java.base/java.nio.file.Files.move(Unknown Source)
	at ai.djl.util.Utils.moveQuietly(Utils.java:132)
	at ai.djl.pytorch.jni.LibUtils.downloadPyTorch(LibUtils.java:478)
	... 17 more

I tried changing the permissions on that folder (C:\Users\oburri\.djl.ai) to something fully permissive, but to no avail. Any hints on how I can tackle this?

Thanks

Oli

Batch inference issues when ROI close to image edge

When the ROI for inference is close to (or touches) the image edge, I get an error (see the attached screenshot).

I can reproduce it on both the OpenSlide and Bio-Formats backends. With the new version of the WSInfer extension (v0.3.0), if I set a batch size of 1, I don't get the error, but whenever inference encounters a tile on the edge it is markedly slower. I can see the hiccups in the progress bar: at first once every few batches (because only some batches end with a bottom-edge tile), and then for each batch (since on the last column, each batch has an edge tile).

With batch size >1, if after the error I look at the detections, I can see detections up to and excluding the first batch containing an edge tile.

An example video (attached in the original issue) shows the behavior with batch size 4 and then 1.
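One common workaround for this kind of batch-shape mismatch (a sketch, under the assumption that edge tiles come out smaller than the requested tile size; not necessarily how the extension resolves it) is to zero-pad undersized edge tiles up to the full tile size before batching, so every batch keeps a uniform shape:

```java
import java.util.Arrays;

// Sketch: pad an undersized edge tile (row-major pixel buffer) to the full
// tile size with zeros, so batched tensors keep a uniform shape.
public class PadEdgeTile {

    static int[] padToSize(int[] pixels, int w, int h, int targetW, int targetH) {
        int[] out = new int[targetW * targetH]; // zero-filled by default
        for (int y = 0; y < h; y++) {
            // copy one source row into the top-left corner of the padded tile
            System.arraycopy(pixels, y * w, out, y * targetW, w);
        }
        return out;
    }

    public static void main(String[] args) {
        int[] edgeTile = new int[3 * 2];   // a 3x2 tile clipped at the image edge
        Arrays.fill(edgeTile, 255);
        int[] padded = padToSize(edgeTile, 3, 2, 4, 4); // pad up to 4x4
        System.out.println(padded.length);               // 16
        System.out.println(padded[0] + " " + padded[3]); // 255 0
    }
}
```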

Check SHA256 model file vs huggingface version

Thought during a Zoom meeting; more sensible than mindlessly re-downloading.

Change it so that the download button is enabled if the file doesn't exist or if the checksum doesn't match, and remove the auto-download behaviour.
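A minimal sketch of that check, using only the Java standard library (the class and method names here are illustrative, not the extension's actual code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

// Sketch of the proposed behaviour: enable the download only when the file is
// missing or its SHA-256 doesn't match the checksum published for the model.
public class ChecksumCheck {

    static String sha256Hex(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return HexFormat.of().formatHex(md.digest(Files.readAllBytes(file)));
    }

    /** True if the local file is absent or doesn't match the expected digest. */
    static boolean needsDownload(Path file, String expectedSha256) throws Exception {
        return !Files.exists(file) || !sha256Hex(file).equalsIgnoreCase(expectedSha256);
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("model", ".pt");
        Files.write(f, "weights".getBytes());
        String good = sha256Hex(f);
        System.out.println(needsDownload(f, good));       // false: up to date
        System.out.println(needsDownload(f, "deadbeef")); // true: mismatch
    }
}
```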

Handling out-of-bounds coordinate requests

WSInfer seems to handle incorrectly-sized tiles okay-ish; I noticed because I originally had trimToRoi set rather than trimToROI, meaning the tiler plugin was using the default. This results in some smaller tiles, as they're trimmed to the tile boundary.

Based on that, I'd tend to submit smaller tiles when x + width >= server.getWidth() || y + height >= server.getHeight(). When x < 0 || y < 0 (and the tile doesn't also extend past the right or bottom edge), I guess we should do similarly.

// TODO: Handle out-of-bounds coordinates!
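The clamping described above could look something like this (a sketch with illustrative names; not the extension's actual API):

```java
// Sketch of clamping a requested tile to the image bounds; the tiler would
// then submit the smaller clipped tile instead of an out-of-bounds request.
public class TileClamp {

    record Tile(int x, int y, int width, int height) {}

    /** Clip a tile request so it lies entirely within a width x height image. */
    static Tile clampToImage(Tile t, int serverWidth, int serverHeight) {
        int x = Math.max(0, t.x());
        int y = Math.max(0, t.y());
        int x2 = Math.min(serverWidth, t.x() + t.width());
        int y2 = Math.min(serverHeight, t.y() + t.height());
        return new Tile(x, y, Math.max(0, x2 - x), Math.max(0, y2 - y));
    }

    public static void main(String[] args) {
        // A 224x224 tile hanging off the bottom-right of a 1000x1000 image
        Tile clipped = clampToImage(new Tile(900, 950, 224, 224), 1000, 1000);
        System.out.println(clipped.width() + "x" + clipped.height()); // 100x50
    }
}
```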

strided tiles (RFC)

Simply a request for comments.

In my experience I often find strided inference (i.e. overlapping tiles) to be more informative than inference on a grid of non-overlapping tiles, which is what WSInfer and its QuPath extension currently do. This might not be too hard to implement in the WSInfer window.

I don't know if this requires changes to the WSInfer backend (I think not); however, I have no idea how QuPath handles overlapping detections.
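Generating a strided grid is mostly arithmetic; here is a sketch along one axis (illustrative code, where stride == tileSize reproduces the current non-overlapping behaviour):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of generating overlapping tile origins with a stride smaller than
// the tile size; stride == tileSize gives the current non-overlapping grid.
public class StridedTiles {

    /** Tile start/end pairs along one axis, fully inside [0, imageSize). */
    static List<int[]> tileOrigins(int imageSize, int tileSize, int stride) {
        List<int[]> origins = new ArrayList<>();
        for (int x = 0; x + tileSize <= imageSize; x += stride) {
            origins.add(new int[]{x, x + tileSize});
        }
        return origins;
    }

    public static void main(String[] args) {
        // 50% overlap: 224-pixel tiles every 112 pixels along a 1000-pixel axis
        List<int[]> tiles = tileOrigins(1000, 224, 112);
        System.out.println(tiles.size());      // 7
        System.out.println(tiles.get(1)[0]);   // second tile starts at 112
    }
}
```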

Support local models

I suspect people will want to run models that they have locally, without needing them to be on the zoo.

@kaczmarj is there any pattern for doing this with WSInfer generally that we should follow?

Ensure strings are externalized

There are quite a lot of strings directly in WSInferController* when they should be in a properties file.

* I think most/all are thanks to me... mostly through dialog messages.
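The standard Java pattern for this is ResourceBundle. Here is a self-contained sketch using an in-memory ListResourceBundle (the keys and messages are made up; a real extension would ship a .properties file on the classpath instead):

```java
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

// Sketch: look dialog strings up in a ResourceBundle by key instead of
// hard-coding them in the controller. An in-memory bundle keeps this
// example self-contained; normally the bundle comes from a .properties file.
public class Strings extends ListResourceBundle {

    @Override
    protected Object[][] getContents() {
        return new Object[][]{
            {"download.complete", "Model downloaded successfully"}, // hypothetical keys
            {"download.failed", "Model download failed"},
        };
    }

    public static void main(String[] args) {
        ResourceBundle bundle = ResourceBundle.getBundle("Strings", Locale.getDefault());
        // Controller code asks for a key rather than embedding the message:
        System.out.println(bundle.getString("download.complete"));
    }
}
```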
