casywang / cuda-convnet
Automatically exported from code.google.com/p/cuda-convnet
It seems that the layer-with-weights class lacks a destructor. Why is that?
Won't it cause memory leaks (for example, the biases are never deleted)?
Original issue reported on code.google.com by [email protected]
on 22 Jan 2013 at 8:02
What steps will reproduce the problem?
1. Run ./build.sh; the build fails with a linker error.
What is the expected output? What do you see instead?
When I compile using ./build.sh, the build breaks with this error:
obj/x86_64/release/src/util.cu.o: In function `pyDictGetMatrix(_object*, char const*)':
tmpxft_0000033f_00000000-3_util.cudafe1.cpp:(.text+0x27a): undefined reference to `Matrix::Matrix(PyArrayObject const*)'
obj/x86_64/release/src/util.cu.o: In function `getMatrixV(_object*)':
tmpxft_0000033f_00000000-3_util.cudafe1.cpp:(.text+0x3fe): undefined reference to `Matrix::Matrix(PyArrayObject const*)'
What version of the product are you using? On what operating system?
CentOS 6.3
gcc 4.4.6-4
CUDA 5.0
Please provide any additional information below.
Original issue reported on code.google.com by [email protected]
on 5 Apr 2013 at 3:30
The network trains fine without any contrast normalization layer (all types),
but once I add a contrast normalization layer, the net gets NaN values after
several iterations. I tried different values for the size, scale, and pow
parameters, and tried placing the layer both before and after the pooling
layer.
Original issue reported on code.google.com by [email protected]
on 8 May 2013 at 7:44
Since CUDA 5 came out, I have had no success compiling the cuda-convnet code.
Are there any plans to make it compatible with CUDA 5?
Original issue reported on code.google.com by [email protected]
on 16 Jan 2013 at 1:01
Hi Alex,
Could you confirm that this slightly strange-looking line from filter_act.cu is
correct?
assert(paddingStart <= 0 && paddingStart + (numModules-1)*moduleStride + filterSize >= imgSize);
In particular, why are you multiplying numModules (which is the square of
numModulesX) by the stride and adding filterSize (which is not the square, but
the side length)?
If you're sure, I trust you, but if you could additionally lend some intuition
for why, I'd appreciate it.
There is a comment in the code but I still don't get it:
// These routines don't handle the case when only part of the image is visited in the convolution
Thanks,
- James
Original issue reported on code.google.com by [email protected]
on 10 Jan 2012 at 10:35
Add support for local non-convolutional layers.
Original issue reported on code.google.com by [email protected]
on 30 Jun 2011 at 12:33
What steps will reproduce the problem?
1. use a big image like 512 x 512
2. put lots of filters (like 64)
3. have lots of color channels (again 64?)
What is the expected output? What do you see instead?
I expect a big filtered image, but instead it crashes.
The blocks are defined such that blocks.y > (2^16) so CUDA refuses to launch
the kernel.
I'm not sure I understand how to set the number of modules when doing a normal
convolution, but it seems that an outer loop is required. The trouble with an
outer loop is that the data is arranged in such a way that it is impossible to
apply just a fraction of the filters, or to process just some of each image.
The data arrangement makes it natural to process just some of the image
channels... but the color channels don't come into the blocking structure.
Basically... can I use this kernel to perform big convolutions?
Original issue reported on code.google.com by [email protected]
on 7 Mar 2012 at 6:55
fprop(NVMatrixV&) does not remember forward activities, so backprop will fail.
Currently backprop relies on forward activities being provided by layers below.
Original issue reported on code.google.com by [email protected]
on 9 Jul 2011 at 6:51