
hovernet_inference's People

Contributors

jgamper, m-shaban, simongraham


hovernet_inference's Issues

Time taken to process a RoI

Hello,
I am running inference on an RoI of size 4548×7520. It takes around 20 minutes to finish processing a single RoI. Is this normal behavior? I can see that most of the time is spent in the process_instance function in process_utils.py; the main reason is that pred_id_list is so large.

Unable to run usage.ipynb

Hello!
I tried to run usage.ipynb, but it seems that loading the model causes errors.
In hover/model/graph.py, line 97, ModelDesc cannot be found. Could you please tell me how to fix this?

Bug in code- over-segmenting nuclei

There is a bug in the code where the results over-segment the nuclei when processing a WSI in 'wsi' mode. This doesn't happen when using 'roi' mode.

Labels

def class_colour(class_value):  # signature reconstructed; the original quote began mid-docstring
    """
    class_value: integer denoting the class of object
    """
    if class_value == 0:
        return 0, 0, 0  # black (background)
    elif class_value == 1:
        return 255, 0, 0  # red
    elif class_value == 2:
        return 0, 255, 0  # green
    elif class_value == 3:
        return 0, 0, 255  # blue
    elif class_value == 4:
        return 255, 255, 0  # yellow
    elif class_value == 5:
        return 255, 165, 0  # orange
    elif class_value == 6:
        pass  # the original snippet is cut off here; the colour for class 6 is not shown
@simongraham, I've just been thinking about it: which digits correspond to which classes? The original PanNuke training weights used the labels:

0: Neoplastic cells,
1: Inflammatory,
2: Connective/Soft tissue cells,
3: Dead Cells,
4: Epithelial,
6: Background

How does that compare with the digits in the visualisation? Clearly background is 6 -> 0.

Interpret the hover-net result .npz file

@simongraham Hi. I tried HoVer-Net inference on some patient WSIs. I can run the code successfully and get a .npz file as the result, but I don't know how to interpret it or how to overlay the predictions on the original WSI images.

For example, I used one WSI for testing, which has a level-0 dimension of 43774 × 129274 (height × width). I used the suggested command to run HoVer-Net:

python run.py --gpu='0' --mode='wsi' --model='hovernet.npz' --input_dir='wsi_dir' --output_dir='output' --return_masks

The resulting .npz file has 3 arrays. The mask array has shape (118445,) and each element of the array is a matrix with shape (11, 7). There are also 118,445 centroids and predicted types. I'm confused by these arrays and don't know how to transform them back to the original level-0 dimensions.

I found few instructions about this. It may be an easy job for you, but please forgive me, as I'm clinical staff rather than a computer scientist.
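A minimal way to inspect such a file with NumPy (a sketch; the key names below are hypothetical, so check pred.files for the names run.py actually stores):

import numpy as np

pred = np.load('output/slide_name.npz', allow_pickle=True)
print(pred.files)                    # list the actual array names first

# Hypothetical keys, for illustration only:
masks = pred['mask']                 # (N,) object array, one entry per nucleus
centroids = pred['centroid']         # per-nucleus centre coordinates
types = pred['type']                 # (N,) predicted class per nucleus
print(masks.shape, masks[0].shape)   # e.g. (118445,) and (11, 7)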

Pannuke weights?

I am interested in using the PanNuke weights for a project I am currently working on, but they seem to have disappeared from the linked Google Drive folder. Is there another way I can access these weights? Thank you!

Output continuous probabilities & Transfer learning

@simongraham Hi. Thanks for helping with displaying the hover-net nuclei segmentation result.

We evaluated HoVer-Net's performance on TCGA and on our own dataset. Nuclei segmentation and classification perform well on the TCGA dataset, as the HoVer-Net training data includes some TCGA data. Nuclei segmentation is acceptable on our own dataset, but the classification of nuclei type is not satisfactory. I guess the reason is image quality as well as color normalization.

So now I have two new issues.

  1. Can I get continuous probabilities rather than just a nuclei type from HoVer-Net? Then I could pick cancer nuclei with high confidence for downstream analysis.

  2. If I want to do transfer learning based on HoVer-Net inference without changing the nuclei segmentation model, I think it would be enough to update only some parameters of the classification output layer. Is there a way to do that?

Thanks.
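Regarding (1), note that run.py's own usage string (quoted in the "Weight size mismatch" issue below) lists a --return_probs flag, which presumably emits the per-class probabilities, e.g.:

python run.py --gpu='0' --mode='wsi' --model='hovernet.npz' --input_dir='wsi_dir' --output_dir='output' --return_probs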

Verbose option?

Currently, when running the code, it prints out a whole load of output that is of no concern to the user. Is there a verbose option to avoid printing all of this?

amma:0, hv/u2/dense/blk/2/preact_bna/bn/variance/EMA:0, hv/u3/dense/blk/6/preact_bna/bn/beta:0, hv/u3/dense/blk/5/conv1/bn/beta:0, tp/u2/dense/blk/3/conv2/W:0, group1/block0/conv2/bn/beta:0, [... several hundred further variable names omitted ...], tp/u3/dense/blk/0/conv2/W:0
[0406 23:07:07 @sessinit.py:220] Restoring from dict ...
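One way to quieten most of this (an untested sketch, not a built-in option of the repo; it assumes tensorpack registers its logger under the name 'tensorpack'):

import logging
import os

# must be set before TensorFlow is imported
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

import tensorflow as tf

tf.logging.set_verbosity(tf.logging.ERROR)                # TF1 Python-side logs
logging.getLogger('tensorpack').setLevel(logging.ERROR)   # tensorpack logs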

FYI on `__pycache__`

@simongraham @vqdang, just wanted to point out (you may already be aware) that general practice is to not include __pycache__ files in a repository (https://github.com/simongraham/hover_net_inf/tree/master/src/__pycache__); these are machine-specific and will be recreated on another machine anyway.

There was no exclusion for *.pyc files, since there was no .gitignore file in the first place. I have added these.

Generally, if the __pycache__ files are already tracked, you can delete them and remove them from git tracking using git rm -r --cached */__pycache__/*.

I have fixed that on the test_requirements branch.
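A minimal .gitignore covering this would be:

__pycache__/
*.pyc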

Question about instances.npy files

Hi,
sorry for my question, maybe the answer is obvious.
I used HoVer-Net on slide images and I am working with the instances.npy files. I have a NumPy ndarray (512×512) and expected values only between 0 and 5 (the predicted class for each pixel), but I am very surprised to see other values such as 17, 147, 300, etc., which differ for each patch (I mean patch 1 has ~10 "unexpected" values, patch 2 has ~10 others, etc.).
Please, can you explain what I have missed?
Thank you for your help
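A quick way to inspect what the array actually contains (a sketch; the guess in the comment is hedged, not confirmed by the repo docs):

import numpy as np

inst = np.load('instances.npy')
vals = np.unique(inst)
# If this prints roughly one distinct integer per nucleus, the file likely
# stores per-nucleus instance IDs (an instance map) rather than class labels.
print(len(vals), vals[:20])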

Weight size mismatch

Hi,

Thank you for your very impressive work.
However, I can't run inference; I probably made a mistake but can't find where.
Below is an example of my command line and the result I get:
"
python run.py --gpu='0' --mode= tile --model=C:/Users/rocha/HVN_infer/models/pannuke.npz --input_dir= C:/Users/rocha/HVN_depart --output_dir=C:/Users/rocha/HVN_arrivee
2021-05-17 21:12:24.939438: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:tensorflow:From C:\Users\rocha\Anaconda3\envs\HVN_infer\lib\site-packages\tensorpack\tfutils\common.py:151: The name tf.VERSION is deprecated. Please use tf.version.VERSION instead.

WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:

WARNING:tensorflow:From C:\Users\rocha\Anaconda3\envs\HVN_infer\lib\site-packages\tensorpack\callbacks\graph.py:81: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.

WARNING:tensorflow:From C:\Users\rocha\Anaconda3\envs\HVN_infer\lib\site-packages\tensorpack\callbacks\hooks.py:13: The name tf.train.SessionRunHook is deprecated. Please use tf.estimator.SessionRunHook instead.

WARNING:tensorflow:From C:\Users\rocha\Anaconda3\envs\HVN_infer\lib\site-packages\tensorpack\tfutils\optimizer.py:16: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.

WARNING:tensorflow:From C:\Users\rocha\Anaconda3\envs\HVN_infer\lib\site-packages\tensorpack\tfutils\sesscreate.py:20: The name tf.train.SessionCreator is deprecated. Please use tf.compat.v1.train.SessionCreator instead.
Usage:
run.py [--gpu=] [--mode=] [--model=] [--input_dir=] [--output_dir=] [--cache_dir=] [--batch_size=] [--inf_tile_shape=] [--proc_tile_shape=] [--postproc_workers=] [--return_probs]
run.py (-h | --help)
run.py --version
"
Is this an installation problem or a command-line error?
Thanks for your help

Phil
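The usage string being printed suggests docopt rejected the arguments: the stray spaces after --mode= and --input_dir= split those flags from their values. A corrected invocation (a guess, reusing the paths from the command above) would be:

python run.py --gpu='0' --mode='tile' --model='C:/Users/rocha/HVN_infer/models/pannuke.npz' --input_dir='C:/Users/rocha/HVN_depart' --output_dir='C:/Users/rocha/HVN_arrivee'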

CPU compatibility?

Hi.
I have no access to a GPU and just want to run inference on a small ROI, so I've used tensorflow instead of tensorflow-gpu. However, I'm having an issue with the data format in the conv layers: the CPU can only handle NHWC, and the data format in the network seems to be NCHW, yet my mini-batches and sub-patches are in NHWC format. Is this model compatible with a CPU? I'm getting the following error:

tensorflow.python.framework.errors_impl.UnimplementedError: Generic conv implementation only supports NHWC tensor format for now.
[[{{node conv0/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](truediv, conv0/W/read)]]

Thanks

A request for code

Hello! It is my great honor to read your paper "One model is all you need: Multi-task learning enables simultaneous histology image segmentation and classification", and I have great interest in this research. Could you please send the code to me for research, or release it on GitHub? I tried to contact you through the email address in the paper, but the message couldn't be delivered because simon.graham wasn't found at warwick.ac.uk.
Wish you all the best in your life and work.

Normalization reference

Hi
I am trying to use this model to run inference on my own images. Could you please provide the reference image for normalizing my data? Thank you very much.

Expected image resolution

Hello, I had a small question regarding the expected input for the model.

What is the expected resolution of the images, in terms of micrometers per pixel?

Kindest wishes,
Sebastian

setuptools error: ImportError: cannot import name 'Feature'

Since March 8, 2020, Feature support has been removed from the setuptools package after a long period of deprecation; see #1979 for details. I suggest pinning the last setuptools version that supports Feature, namely 45.3.0 (see PyPI), in the requirements.txt file.
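The pin would be a single line in requirements.txt:

setuptools==45.3.0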

JSON files contain nuclei from all processed slides

Hi,
I'm using HoVer-Net again on a set of WSIs. The processing seems to proceed as expected, with reasonable time and memory requirements, so that's good. But I noticed the output has changed from the previously used .mat format to a .json dict. These nuclei_dict.json files seem not to reset between slides, so successive instances are appended to the instances from previous slides and the files grow with each slide processed. Besides this being unexpected behavior, one consequence is a surface-level difficulty in extracting the predicted nuclei corresponding to any individual slide, as this would require (I think?) reading each previously processed slide to track the correct offset.

I suspect that something like a

self.wsi_inst_info = {}

inside the for loop of InferWSI.process_all_files() would do the trick... I'm going to test that and report back in a few minutes.

Reporting back: yes, inserting that dictionary reset line at the top of this loop (hovernet_inference/run.py, lines 818 to 819 at commit b9c4fe6):

for filename in self.file_list:
    filename = os.path.basename(filename)

seems to have done it.
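A sketch of the patched loop, combining the two fragments quoted above (the rest of the method body is elided):

def process_all_files(self):
    for filename in self.file_list:
        # reset the per-slide dictionary so nuclei_dict.json does not
        # accumulate instances across slides
        self.wsi_inst_info = {}
        filename = os.path.basename(filename)
        # ... per-slide processing continues as before ...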

Images in usage.ipynb

I could not find the images used in usage.ipynb in the MoNuSAC dataset from the link you provided. Are the images used in the notebook renamed, or are they from another source?

ValueError"The given initializer function expects the following args ['self', 'shape', 'dtype', 'partition_info']" about run command

Hi, I am trying to run the updated hovernet_inference using the command
python run.py --gpu='0' --mode='tile' --model='hovernet.npz' --input_dir='input_dir' --output_dir', but I get the error ValueError: You can only pass an initializer function that expects no arguments to its callable when the shape is not fully defined. The given initializer function expects the following args ['self', 'shape', 'dtype', 'partition_info'].

Can you help me figure out what is wrong?

Multi GPU

Is there support for data-parallel inference on multiple ROIs at once when specifying more than one GPU? When we specify multiple GPUs, it seems that only the first is utilized.

How to convert from NCHW to NHWC data format

Hi @simongraham,

Thanks for writing such a useful inference script. I tried to run run.py on a single 1000×1000 .tif image and encountered an error relating to data format. As I'm relatively new here, could you please advise how this could be solved, and whether the suggestion below could be of any help? Thank you very much for your time.

Command:

 python run.py --gpu='0' --mode='tile' --model='monusac.npz' --input_dir='tile_dir' --output_dir='output' --inf_tile_shape=1000 --proc_tile_shape=1000

Output:

{'--batch_size': '25',
 '--cache_dir': 'cache/',
 '--gpu': '0',
 '--help': False,
 '--inf_tile_shape': '1000',
 '--input_dir': 'tile_dir',
 '--mode': 'tile',
 '--model': 'monusac.npz',
 '--output_dir': 'output',
 '--postproc_workers': '10',
 '--proc_tile_shape': '1000',
 '--return_probs': False,
 '--version': False}
Loading Model...
2023-02-05 20:00:45.526199: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Processing Images:   0%|                                  | 0/1 [00:00<?, ?it/s]shape:  (1000, 1000, 3)
Traceback (most recent call last):
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
    return fn(*args)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.UnimplementedError: Generic conv implementation only supports NHWC tensor format for now.
         [[{{node conv0/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](truediv, conv0/W/read)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "run.py", line 860, in <module>
    infer.process_all_files()
  File "run.py", line 241, in process_all_files
    pred_map = self.__gen_prediction(img, self.predictor)
  File "run.py", line 164, in __gen_prediction
    batch_output = predictor(mini_batch)[0]
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorpack\predict\base.py", line 39, in __call__
    output = self._do_call(dp)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorpack\predict\base.py", line 131, in _do_call
    return self._callable(*dp)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\client\session.py", line 1204, in _generic_run
    return self.run(fetches, feed_dict=feed_dict, **kwargs)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
    run_metadata_ptr)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
    run_metadata)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnimplementedError: Generic conv implementation only supports NHWC tensor format for now.
         [[node conv0/Conv2D (defined at C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorpack\models\conv2d.py:68)  = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](truediv, conv0/W/read)]]

Caused by op 'conv0/Conv2D', defined at:
  File "run.py", line 859, in <module>
    infer.load_model()
  File "run.py", line 208, in load_model
    self.predictor = OfflinePredictor(pred_config)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorpack\predict\base.py", line 148, in __init__
    config.tower_func(*input.get_input_tensors())
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorpack\tfutils\tower.py", line 284, in __call__
    output = self._tower_fn(*args)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorpack\graph_builder\model_desc.py", line 162, in build_graph
    return self._build_graph(args)
  File "C:\Users\jlwq0\Documents\hovernet_inference\hover\model\graph.py", line 128, in _build_graph
    d = encoder(i)
  File "C:\Users\jlwq0\Documents\hovernet_inference\hover\model\graph.py", line 57, in encoder
    d1 = Conv2D('conv0',  i, 64, 7, padding='same', strides=1, activation=BNReLU)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorpack\models\registry.py", line 124, in wrapped_func
    outputs = func(*args, **actual_args)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorpack\models\tflayer.py", line 66, in decorated_func
    return func(inputs, **ret)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorpack\models\conv2d.py", line 68, in Conv2D
    ret = layer.apply(inputs, scope=tf.get_variable_scope())
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 817, in apply
    return self.__call__(inputs, *args, **kwargs)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\layers\base.py", line 374, in __call__
    outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 757, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\keras\layers\convolutional.py", line 194, in call
    outputs = self._convolution_op(inputs, self.kernel)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 868, in __call__
    return self.conv_op(inp, filter)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 520, in __call__
    return self.call(inp, filter)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 204, in __call__
    name=self.name)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 1043, in conv2d
    data_format=data_format, dilations=dilations, name=name)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
    op_def=op_def)
  File "C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
    self._traceback = tf_stack.extract_stack()

UnimplementedError (see above for traceback): Generic conv implementation only supports NHWC tensor format for now.
         [[node conv0/Conv2D (defined at C:\Users\jlwq0\anaconda3\envs\hovernet_simon\lib\site-packages\tensorpack\models\conv2d.py:68)  = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](truediv, conv0/W/read)]]

Suggestion:
tensorflow/tensorflow#15364 (comment)
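For context, the generic TF1 way to move tensors between the two layouts (a sketch of the transpose itself, not a repo-specific fix; running the whole model on CPU would additionally require building it with data_format='channels_last'):

import tensorflow as tf

# hypothetical NCHW input: (batch, channels, height, width)
x_nchw = tf.placeholder(tf.float32, [None, 3, 1000, 1000])

# NCHW -> NHWC: (batch, height, width, channels)
x_nhwc = tf.transpose(x_nchw, perm=[0, 2, 3, 1])

# and back again: NHWC -> NCHW
x_back = tf.transpose(x_nhwc, perm=[0, 3, 1, 2])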

Improve README

Add instructions on environment setup/Docker, and add images/GIFs.

Inference network crashes with post-processing

Hi,

I was trying your model out on my own dataset, unfortunately, I ran into a couple of issues. I hope you guys can point me in the right direction:

  • Python crashes with a broken pipe as soon as the network starts post-processing.
  • The network is very slow: it took 12 hours before I got to the post-processing step. Is there a way to increase the processing speed? (I saw the GPU being utilized only once every X minutes, but the writing/processing of invalid patches takes a long time.) Is it required to write all non-valid patches to memory?
  • The mem files are extremely large: one file (for one slide) uses more than 400 GB. I'm not sure why.
  • With the new versions, MRXS is not supported by default. Is there a specific reason why MRXS was removed?

Thanks in advance,
Kind regards.

Inference does not run

https://github.com/simongraham/hover_net_inf/blob/5fe9266bd0506a029c391f873d42a1752a305275/src/config.py#L51

As per the test_requirements branch, the inference code fails. It seems to fail to load the model; the full error is below.

(hover) jevjev@jevjev:~/Dropbox/Tia/Hover-net-inference/src$ python infer.py --gpu=0 --mode="roi"
/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
2020-04-06 19:29:56.634572: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2020-04-06 19:29:56.811605: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.6705
pciBusID: 0000:17:00.0
totalMemory: 10.92GiB freeMemory: 10.76GiB
2020-04-06 19:29:56.811640: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2020-04-06 19:29:57.032537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-06 19:29:57.032577: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2020-04-06 19:29:57.032583: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2020-04-06 19:29:57.032762: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/device:GPU:0 with 10405 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:17:00.0, compute capability: 6.1)
[0406 19:29:57 @sessinit.py:294] Loading dictionary from /home/jevjev/hovernet.npz ...
Traceback (most recent call last):
  File "infer.py", line 495, in <module>
    infer.run() 
  File "infer.py", line 109, in run
    output_names = self.eval_inf_output_tensor_names)
  File "/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorpack/predict/config.py", line 79, in __init__
    self.input_signature = model.get_input_signature()
  File "/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorpack/utils/argtools.py", line 192, in wrapper
    value = func(*args, **kwargs)
  File "/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorpack/train/model_desc.py", line 37, in get_input_signature
    inputs = self.inputs()
  File "/home/jevjev/anaconda3/envs/hover/lib/python3.6/site-packages/tensorpack/train/model_desc.py", line 67, in inputs
    raise NotImplementedError()
NotImplementedError

The error seems to be coming from tensorpack, although, as you will see in the requirements file, I've got the latest version specified.
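For reference, tensorpack raises this NotImplementedError when a ModelDesc subclass does not override inputs(); newer tensorpack versions replaced the older _get_inputs() hook with it. A minimal sketch of the expected override (the input shape is illustrative, not the repo's actual value):

import tensorflow as tf
from tensorpack import ModelDesc

class InferenceModel(ModelDesc):
    def inputs(self):
        # declare the placeholder signature the prediction graph expects
        return [tf.TensorSpec([None, 270, 270, 3], tf.float32, 'images')]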

Inference on large images of size more than 20000

Hi,
I am trying to run prediction on WSI images that are in PNG format; they have WSI-scale dimensions, e.g. 20000 × 30000. I am using tile mode for inference. Is tile mode for patches and wsi mode for complete slides? The GPU crashes after 30 minutes on Google Colab.

Can you please let me know the correct way to predict on WSI-sized PNG images? I am predicting with 5000 as the inference tile size and 2048 as the processing tile size.

Sample image : https://drive.google.com/file/d/1-Rsu2XXZn-uiym4dbSThHa9K-u5UPcQu/view?usp=sharing

[A screenshot showing where the run gets stuck was attached to the issue.]
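For reference, a tile-mode invocation with explicit tile shapes (flags as listed in run.py's usage string quoted earlier; paths are placeholders):

python run.py --gpu='0' --mode='tile' --model='hovernet.npz' --input_dir='png_dir' --output_dir='output' --inf_tile_shape=5000 --proc_tile_shape=2048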
