noahapthorpe / convnetcelldetection
Automatic cell detection in microscopy data using convolutional networks
I am working on a calcium-imaging project very similar to the one you presented in your paper at NIPS last year. First of all, I appreciate your work and your effort in open-sourcing and documenting the code so well. I was able to get it running on AWS and on my own machine very easily.
I'm confused about a few things in the implementation, and I hope you might be able to clear things up. It mostly has to do with dimensionality and structure. I'll go ahead and say that I've never used ZNN before this, but I have been looking at the docs for .spec and .znn files.
Specifically, I'm confused about the following:
The paper says "Each output pixel depended on a 37 x 37 x T pixel field of view in the input." and "The ConvNet was applied in a 37 x 37 x T window, sliding in two dimensions over the input image stack to produce an output pixel for every location of the window fully contained within the image bounds."
To me this sounds like the input layer should be 37 x 37 x T, but I can't find this defined anywhere in the .spec or .znn files. Is this indeed the input window size? Is it determined implicitly by ZNN?
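For context on my reading of the field of view: with stride-1, unpadded convolutions, the receptive field grows as 1 plus the sum of (kernel size - 1) over the layers, so e.g. four hypothetical 10 x 10 layers would yield exactly 37 x 37. A quick sketch (the kernel sizes here are my own guess, not taken from the .znn files):

```python
def field_of_view(kernel_sizes):
    """Receptive field of a stack of stride-1, unpadded conv layers."""
    fov = 1
    for k in kernel_sizes:
        fov += k - 1  # each unpadded stride-1 conv widens the FoV by k - 1
    return fov

# Hypothetical architecture chosen only so the FoV comes out to 37:
print(field_of_view([10, 10, 10, 10]))  # 37
```

If that is the mechanism, the 37 x 37 window would indeed be implicit in the layer definitions rather than declared anywhere as an input size.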
The paper says "The (2+1)D network was trained with softmax loss and output patches of size 120 x 120."
In data/example/main_config.cfg, you define:
patch_size = 1,120,120, and
forward_outsz = 1,220,220.
The 120 dimension is mentioned in the paper, but I don't see anything about the 220 dimension. What is the difference between the two? How does the 120 x 120 output map back to the 37 x 37 input window? And does this relate in some way to there being two 120 x 120 images?
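To make my confusion concrete: if each output pixel sees a 37 x 37 window and the convolution is valid (unpadded), then an H x W output should require an input of (H + 36) x (W + 36). A minimal check of that arithmetic (this relation between patch_size and forward_outsz is my assumption, not something I found in the ZNN docs):

```python
FOV = 37  # field of view stated in the paper

def required_input_size(out_size, fov=FOV):
    # With valid convolution, the input must exceed the output
    # by fov - 1 pixels in each spatial dimension.
    return out_size + fov - 1

print(required_input_size(120))  # 156 -- input needed for 120 x 120 training patches
print(required_input_size(220))  # 256 -- input needed for 220 x 220 forward output
```

If that holds, patch_size would govern the training output and forward_outsz the (larger, purely for efficiency) output tile used at inference time, but I'd appreciate confirmation.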
The paper says "This yields two 2D grayscale images as output, which together represent the softmax probability of each pixel being inside an ROI centroid."
Can you explain how those two images are combined to compute the probability?
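My current guess is that the two grayscale images are the two logit channels of a per-pixel two-class softmax, combined roughly as below (just my interpretation of the paper's description, not code from the repo):

```python
import numpy as np

def softmax_probability(inside_logits, outside_logits):
    """Per-pixel 2-class softmax: P(pixel is inside an ROI centroid)."""
    # Subtract the per-pixel max before exponentiating for numerical stability.
    m = np.maximum(inside_logits, outside_logits)
    e_in = np.exp(inside_logits - m)
    e_out = np.exp(outside_logits - m)
    return e_in / (e_in + e_out)

p = softmax_probability(np.array([[2.0]]), np.array([[0.0]]))
print(p)  # ~0.88 for a logit gap of 2
```

Is this the right picture, or are the two output images combined some other way before thresholding?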
Thanks in advance for your help and effort in publishing this work!
Hello, I'm interested in your repository, but how can I test it on a single image? Also, could you provide an example image dataset and a pre-trained model?
Thank you very much!
Doesn't work with oval ImageJ ROIs; not sure about other ROI shapes. I'm working on a fix for this.