adalbertocq / histomorphological-phenotype-learning


Corresponding code of 'Quiros A.C.+, Coudray N.+, Yeaton A., Yang X., Chiriboga L., Karimkhan A., Narula N., Pass H., Moreira A.L., Le Quesne J.*, Tsirigos A.*, and Yuan K.* Mapping the landscape of histomorphological cancer phenotypes using self-supervised learning on unlabeled, unannotated pathology slides. 2024'

Python 54.44% Jupyter Notebook 45.56%
clustering histopathology self-supervised-learning unsupervised-learning

histomorphological-phenotype-learning's People

Contributors: adalbertocq

histomorphological-phenotype-learning's Issues

Tile vector representations question

Hello @AdalbertoCq,
My intention is to find tile vector representations for a single svs file.
If I understand correctly, the real_hdf5 is the h5 file we get as output from the DeepPath preprocessing, and the checkpoint is the weights file we get from step 1 (I used your provided weights here).
When I try to run the script run_representationspathology_projection.py, it gives the following error:

/models/selfsupervised/BarlowTwins.py", line 85, in __init__
self.num_samples = data.training.images.shape[0]
AttributeError: 'NoneType' object has no attribute 'images'

If I'm reading the code correctly, this error essentially happens because the dataset argument is set incorrectly.

Can you explain more about the dataset argument of run_representationspathology_projection.py? Which dataset does it refer to? And more importantly, why do I need another dataset when vectorizing the tiled images, given that I already have the image h5 file (real_hdf5) and a pre-trained model (the checkpoint .ckt file)?

Thank you in advance
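For context, the traceback arises because `data.training` is still `None` at the point `BarlowTwins.__init__` reads `images.shape[0]`. A minimal sketch of the failure mode and a defensive check; the `Data`/`Split` classes here are hypothetical stand-ins, not the repo's actual implementation:

```python
class Split:
    """Stand-in for a dataset split holding an image array."""
    def __init__(self, images):
        self.images = images

class Data:
    """Hypothetical stand-in: training stays None when the named dataset
    cannot be found on disk, mirroring the reported traceback."""
    def __init__(self, training=None):
        self.training = training

def num_samples(data):
    # Fail early with an actionable message instead of the opaque
    # "'NoneType' object has no attribute 'images'" AttributeError.
    if data.training is None:
        raise ValueError("Dataset not found: check that the dataset argument "
                         "points at an existing h5 dataset folder.")
    return data.training.images.shape[0]
```

With this guard, `num_samples(Data())` raises a `ValueError` naming the dataset argument rather than an `AttributeError` deep inside the model constructor.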

Simpler trained model for inference?

Hi!
Thank you for your wonderful work!

I'm trying to load your trained networks, but I'm having some difficulties.
I managed to load the graph correctly, but I can't seem to find which tensors correspond to the inputs/outputs... there are gradient and Adam tensors and operations everywhere.

Do you think it would be possible to provide simpler weights/models for those of us only looking to use the model for inference?
Or, alternatively, a small script/notebook that shows how to load and use the model (not inside your particular pipeline, but on its own)?

Thank you!
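Until the authors respond, one generic way to hunt for the input/output tensors in a TF1-style checkpoint is to import the meta graph and filter out the optimizer bookkeeping. A rough sketch using the TensorFlow v1 compatibility API; the name filters below are guesses, not the repo's actual naming scheme:

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

# Substrings of node names to skip; these are assumptions about how the
# optimizer/saver nodes are named, adjust after inspecting your graph.
SKIP = ("Adam", "gradients", "save", "beta1_power", "beta2_power")

def list_io_candidates(meta_path):
    """Load a TF1 meta graph and return (likely inputs, likely outputs):
    inputs  = Placeholder ops,
    outputs = ops whose output tensors are consumed by nothing else,
    both filtered to drop optimizer/saver bookkeeping nodes."""
    graph = tf1.Graph()
    with graph.as_default():
        tf1.train.import_meta_graph(meta_path)
    ops = graph.get_operations()
    consumed = {t.name for op in ops for t in op.inputs}
    inputs = [op.name for op in ops
              if op.type == "Placeholder" and not op.name.startswith(SKIP)]
    outputs = [op.name for op in ops
               if op.outputs
               and all(t.name not in consumed for t in op.outputs)
               and not any(s in op.name for s in SKIP)]
    return inputs, outputs
```

Placeholders are usually the inputs; among the remaining "sink" ops, the one with a meaningful representation shape is the candidate to fetch with `sess.run`.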

External cohort Pipeline Problem.

Hello @AdalbertoCq,
I am trying to run through the pipeline for mapping an external cohort to existing clusters. For simplicity I am using just one WSI as the external cohort.
On step 4, including metadata in the h5, I noticed a csv file containing "luad", "os_event_ind", and "os_event_data" columns. I cannot create my own csv file; where does this data come from? I searched around and found nothing.

This might be a silly question :)

Thank you
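For readers stuck on the same step, a minimal sketch of what such a metadata CSV might look like. The column meanings are assumptions, not confirmed by the authors: `luad` as a binary subtype label, `os_event_ind` as an overall-survival event indicator, `os_event_data` as the follow-up time; the `slides` column name and all values are made up:

```python
import csv

# Hypothetical rows; column meanings are assumptions:
# luad = 1 if the slide is LUAD, os_event_ind = 1 if the overall-survival
# event (death) was observed, os_event_data = follow-up time.
rows = [
    {"slides": "slide_001", "luad": 1, "os_event_ind": 0, "os_event_data": 24.0},
    {"slides": "slide_002", "luad": 1, "os_event_ind": 1, "os_event_data": 11.5},
]

with open("external_cohort_metadata.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["slides", "luad", "os_event_ind", "os_event_data"])
    writer.writeheader()
    writer.writerows(rows)
```

The survival columns only matter for the downstream Cox analysis; for pure cluster mapping, dummy values for an external cohort may suffice, but that is a guess worth confirming with the authors.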

Some questions about statistics

Hi, @AdalbertoCq ,

Thanks for sharing a wonderful work!

I ran into some statistical questions about the Cox proportional hazards regression while reading your work.

You mention in the article that you apply a centered log-ratio transformation, after multiplicative replacement, to the WSI vector representations to ensure independence between the covariates. But I noticed that after these transformations the sum of the features in each row is almost exactly 0. Isn't this a hidden linear relationship between the covariates, and could it affect the Cox fit, for example by driving a covariate's estimate toward infinity?

I would be grateful if you could answer these questions.

Thanks.
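The sum-to-zero property the question describes can be checked directly. A minimal numpy sketch; the multiplicative replacement and CLR below follow the standard compositional-data definitions, not necessarily the paper's exact implementation:

```python
import numpy as np

def multiplicative_replacement(x, delta=1e-5):
    """Replace zero parts with a small delta, shrink the nonzero parts
    to compensate, and renormalize each row to sum to 1."""
    x = np.asarray(x, dtype=float)
    zeros = x == 0
    n_zeros = zeros.sum(axis=1, keepdims=True)
    x = np.where(zeros, delta, x * (1 - delta * n_zeros))
    return x / x.sum(axis=1, keepdims=True)

def clr(x):
    """Centered log-ratio: log of each part over the row geometric mean."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

comp = np.array([[0.5, 0.3, 0.2, 0.0],
                 [0.1, 0.4, 0.4, 0.1]])
z = clr(multiplicative_replacement(comp))
print(np.allclose(z.sum(axis=1), 0.0))  # True: each CLR row sums to 0
```

The zero row sums follow directly from the CLR definition (subtracting the row mean of the logs), so the transformed features do lie on a hyperplane; whether that degeneracy matters in practice depends on how the Cox model handles the resulting collinearity.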

Magnification pre-trained models

Hi, very nice work!

I want to extract the feature vectors for some of my H&E data, on tiles directly at 20x magnification.
However, I have seen that the pre-trained models have been trained on a 5x downscaled version of the images.

Have you ever tested the feature encoding on images with no downscaling?
Is it going to cause problems?

Thank you very much in advance
