
convnetcelldetection's People

Contributors

alexanderriordan, noahapthorpe

convnetcelldetection's Issues

Questions about input/output dimensions

I am working on a calcium-imaging project very similar to the one you presented in your paper at NIPS last year. First of all, I appreciate your work and your effort in open-sourcing and documenting the code so well. I was able to get it running on AWS and on my own machine very easily.

I'm confused about a few things in the implementation, and I hope you might be able to clear things up. It mostly has to do with dimensionality and structure. I'll go ahead and say that I've never used ZNN before this, but I have been looking at the docs for .spec and .znn files.

Specifically, I'm confused about the following:

  1. Dimensions of the input.

The paper says "Each output pixel depended on a 37 x 37 x T pixel field of view in the input.." and "The ConvNet was applied in a 37 x 37 x T window, sliding in two dimensions over the input image stack to produce an output pixel for every location of the window fully connected within the image bounds."

To me this sounds like the input layer should be 37 x 37 x T, but I can't find this defined anywhere in the .spec or .znn files. Is this indeed the input window size? Is it determined implicitly by ZNN?
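
For what it's worth, my working assumption is the usual valid-convolution arithmetic: with stride-1 layers the field of view is 1 plus the sum of (kernel - 1) over all layers, so a 37 x 37 window could fall out of the layer kernel sizes implicitly rather than being set anywhere. The kernel sizes below are placeholders I made up, not the ones in your .znn file:

```python
# Rough field-of-view arithmetic for a stack of stride-1 "valid" convolutions.
# The kernel sizes are hypothetical, chosen only to show how 37 could arise.
kernels = [5, 5, 5, 5, 5, 5, 5, 5, 5]  # nine made-up 5x5 layers

fov = 1 + sum(k - 1 for k in kernels)  # each layer adds (k - 1) pixels of context
print(fov)  # 37
```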

  2. Dimensions of the output as they relate to the input.

The paper says "The (2+1)D network was trained with softmax loss and output patches of size 120 x 120."

In data/example/main_config.cfg, you define patch_size = 1,120,120 and forward_outsz = 1,220,220.

The 120 dimension is mentioned in the paper, but I don't see anything about the 220 dimension. What is the difference between those two? How does 120 x 120 map back to the 37 x 37 input window? Does this relate in some way to there being two 120 x 120 images?
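
My current guess, which I'd love to have confirmed, is that patch_size is the output patch used during training, while forward_outsz is the output block produced by each forward pass at inference, and that in both cases the input window is simply output + 37 - 1 in each spatial dimension because the convolutions are valid. A quick sanity check of that arithmetic (the helper below is just mine, for illustration):

```python
# Valid-convolution bookkeeping: to produce an N x N output tile, the network
# would need to see an (N + fov - 1) x (N + fov - 1) input window.
fov = 37  # field of view quoted in the paper

def input_size(output_size, fov=fov):
    return output_size + fov - 1

print(input_size(120))  # 156 -> input window for the 120 x 120 training patch
print(input_size(220))  # 256 -> input window for the 220 x 220 forward output
```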

  3. Why are there two 2D grayscale images given as output?

The paper says "This yields two 2D grayscale images as output, which together represent the softmax probability of each pixel being inside an ROI centroid.".

Can you explain how those two images are combined to compute the probability?
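
My guess is that the two images are the two channels of a per-pixel two-class softmax ("outside ROI" vs. "inside ROI"), so they sum to 1 at every pixel and the probability map is just the second channel; from raw logits it would be the usual softmax ratio. Roughly, in NumPy, with made-up logit maps:

```python
import numpy as np

# Two made-up raw output maps (logits): channel 0 = "outside ROI", channel 1 = "inside ROI".
z0 = np.random.randn(120, 120)
z1 = np.random.randn(120, 120)

# Per-pixel two-class softmax; p_inside is the ROI probability map.
p_inside = np.exp(z1) / (np.exp(z0) + np.exp(z1))

# If the two grayscale images are already softmax outputs, they should sum to 1
# per pixel, and the "inside ROI" image alone would be that probability map.
```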

Thanks in advance for your help and effort in publishing this work!

How do I run a test?

Hello, I'm interested in your repository, but how can I test it on a single image? Could you also provide some example image datasets and a pre-trained model?
Thank you very much!
