vlfeat / matconvnet
MatConvNet: CNNs for MATLAB
License: Other
Hi,
I launched a training run with my own images, but I get an error raised in vl_nnsoftmaxloss.
The error is raised after this:
2.12 s (27.2 images/s) err 98.2 err5 93.9
training: epoch 01: processing batch 32 of 4403 ...
(sometimes raised before or after).
Here is the error:
Error in vl_nnsoftmaxloss (line 62)
t = Xmax + log(sum(ex,3)) - reshape(X(c_), [sz(1:2) 1 sz(4)]) ;
Error in vl_simplenn (line 163)
res(i+1).x = vl_nnsoftmaxloss(res(i).x, l.class) ;
Error in cnn_train (line 134)
res = vl_simplenn(net, im, one, res, ...
Error in cnn_test (line 80)
[net,info] = cnn_train(net, imdb, fn, opts.train, 'conserveMemory', true) ;
The error is raised by the indexing expression X(c_).
X is of size 5x5x912x64, hence X has 1,459,200 elements. c_ has 1600 elements. When I monitored the elements of c_, the maximum was always close to 1,459,200 but below it, until the error was raised; at that point max(c_) = 1,460,550. How is that possible?
Is there a way to avoid this case?
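For what it's worth, vl_nnsoftmaxloss builds linear indices into X from the class labels, so any label outside 1..size(X,3) produces exactly this kind of out-of-range index. A minimal sanity check, assuming labels are stored as in the examples' imdb structure (names are illustrative):

```matlab
% Hedged sanity check: every label must lie in 1..numClasses, otherwise
% the linear index c_ computed inside vl_nnsoftmaxloss runs past the end
% of X. Here 912 is the number of classes from the post above.
numClasses = 912 ;
labels = imdb.images.labels ;
assert(all(labels(:) >= 1 & labels(:) <= numClasses), ...
  'found a label outside 1..%d', numClasses) ;
```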
Thank you for your help,
Luc
In cnn_cifar.m:115 and cnn_mnist.m:70, the mean of the data should be taken over the training examples only. Currently, it is taken over the whole set, i.e. training, validation, and testing.
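A minimal sketch of the fix, assuming the demos' convention that imdb.images.set == 1 marks training examples:

```matlab
% Compute the mean over training images only, then subtract it everywhere.
trainIdx = imdb.images.set == 1 ;
dataMean = mean(imdb.images.data(:,:,:,trainIdx), 4) ;
imdb.images.data = bsxfun(@minus, imdb.images.data, dataMean) ;
```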
Hi,
I have tried to compile the matconvnet with latest cuda 6.5, but it shows the error message:
g++: error: /usr/local/MATLAB/R2014b/bin/glnxa64/libcudart.so.6.5: No such file or directory
I found the place where we have this is in a xml file:
/matconvnet/matlab/src/config/mex_CUDA_glnxa64.sh
/matconvnet/matlab/src/config/mex_CUDA_glnxa64.xml
So I assume we need to update these files first.
Then I would need to ln -s the corresponding lib files into that MATLAB path. But I don't think this is the best solution; is there another way to solve it?
Hi!
In the MATLAB command window, after I typed cnn_mnist, an error happens; the output is shown below:
cnn_mnist
resuming by loading epoch 24
training: epoch 25: processing batch 1 of 600 ...Error using vl_nnconv
DATA and FILTERS are not both CPU or GPU arrays.
Error in vl_simplenn (line 153)
res(i+1).x = vl_nnconv(res(i).x, l.filters, l.biases, 'pad', l.pad, 'stride',
l.stride) ;
Error in cnn_train (line 138)
res = vl_simplenn(net, im, one, res, ...
Error in cnn_mnist (line 75)
[net,info] = cnn_train(net, imdb, @getBatch, ...
I successfully compiled MatConvNet on Linux using the command 'make ENABLE_GPU=y ENABLE_IMREADJPEG=y ARCH=glnxa64 MATLABROOT=/usr/local/MATLAB/R2014a CUDAROOT=/usr/local/cuda'. My GPU device information is shown below:
CUDADevice with properties:
Name: 'Tesla C1060'
Index: 1
ComputeCapability: '1.3'
SupportsDouble: 1
DriverVersion: 6.5000
ToolkitVersion: 5.5000
MaxThreadsPerBlock: 512
MaxShmemPerBlock: 16384
MaxThreadBlockSize: [512 512 64]
MaxGridSize: [65535 65535 1]
SIMDWidth: 32
TotalMemory: 4.2948e+09
FreeMemory: 3.9581e+09
MultiprocessorCount: 30
ClockRateKHz: 1296000
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 0
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
Can anyone help me deal with this problem? Thank you so so much!
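One possible cause, offered as an assumption rather than a confirmed diagnosis: the resumed checkpoint (epoch 24) was saved with the weights as plain CPU arrays, so the filters stay on the CPU while the data batch is a gpuArray. Moving the loaded network to the GPU before training would look roughly like:

```matlab
% Hedged sketch: push a loaded network's parameters onto the GPU so that
% both DATA and FILTERS are gpuArrays when vl_nnconv runs.
for l = 1:numel(net.layers)
  if isfield(net.layers{l}, 'filters')
    net.layers{l}.filters = gpuArray(net.layers{l}.filters) ;
    net.layers{l}.biases  = gpuArray(net.layers{l}.biases) ;
  end
end
```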
For extremely imbalanced datasets (i.e. needle-in-a-haystack problems), how can I incorporate per-example weights? E.g., in the objective I want to weight positive examples, which are very rare, higher than negative examples. What is the best way of doing this?
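One common approach, sketched here with hypothetical names (positiveClass and positiveWeight are not MatConvNet parameters): scale the loss derivative of each example by a class-dependent weight during the backward pass.

```matlab
% Hedged sketch: per-example weights for an imbalanced binary problem.
w = ones(1, 1, 1, numel(labels), 'single') ;
w(labels == positiveClass) = positiveWeight ;   % e.g. 10 for rare positives
% forward pass unchanged; backward pass scales the derivative per example
dzdx = vl_nnsoftmaxloss(x, labels, dzdy) ;
dzdx = bsxfun(@times, dzdx, w) ;
```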
I saw that MatConvNet allows me to choose errorType as 'binary'. Is that the only parameter that I need to change if I want to do binary classification (my labels are 1 or 2)?
I'm having trouble building from the command line with the Makefile on OS X 10.9, using Xcode 6.1 and Matlab R2014b (see below). So I decided to try to follow the Makefile and manually build the non-GPU version using calls to "mex" from within Matlab. In doing so, I realized (unless I'm misunderstanding) that the Makefile seems to reference vl_nnconv.cpp, vl_nnpool.cpp, and vl_nnnormalize.cpp. But I don't see any of those files in the repo, in matlab/src. I only see their .cu versions, but again, I'm hoping to build the non-CUDA version initially. Is that an error?
For the record, this is the error I get using the Makefile at the command line.
> make ARCH=maci64 MATLABROOT=/Applications/MATLAB_R2014b.app
/Applications/MATLAB_R2014b.app/bin/mex -c -largeArrayDims -lmwblas "matlab/src/bits/im2col.cpp"
Building with 'Xcode Clang++'.
xcodebuild: error: SDK "macosx1:10.9" cannot be located.
xcrun: error: unable to find utility "clang++", not a developer tool or in PATH
make: *** [matlab/src/bits/im2col.o] Error 255
Update: I was able to build using the Xcode project file with the no-GPU option.
Does anybody have an experience of error when executing the:
wait(gpuDevice)
When the code waits after finishing the back-propagation of my self-defined convolution layer, it gives the error:
The CUDA Error: unspecified launch failure
Error in parallel.gpu.CUDADevice/wait
Error in back_prop:
wait(gpuDevice);
I feel this should not be a memory problem. Besides, when I define only one filter (HxWxDx1) in the self-defined convolution layer, it's fine. But when I define two, which makes the filters HxWxDx2, the error appears.
Does anyone have a clue about this?
Thanks!
nvcc fatal : nvcc cannot find a supported version of Microsoft Visual Studio. Only the versions 2010, 2012, and 2013 are supported .
But I have VS2010 and VS2012 installed on my computer.
Hi, I hit an error when passing my image through the net
on GPU (Tesla K20c / Ubuntu 12.04 / MATLAB 2014a).
here's the error
im2col: CUDA kernel error (unspecified launch failure)
Error using gpuArray/max
An unexpected error occurred during CUDA execution. The CUDA error was:
unspecified launch failure
Error in vl_nnrelu (line 17)
y = max(x, single(0)) ;
Error in vl_simplenn (line 165)
res(i+1).x = vl_nnrelu(res(i).x) ;
...
Both the net and the input image were moved to the GPU. Thanks for the help!
Can MatConvNet concatenate two parallel layer outputs, like the CONCAT layer in Caffe? Suppose I have two images as input, to be convolved at the same time, followed by a fully connected layer comparing the two images. Can I build such a network with MatConvNet? Thanks.
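As far as I can tell the simplenn wrapper has no built-in concat layer in this version, but since MatConvNet tensors are HxWxCxN, a custom layer can do it with cat along the channel dimension; a minimal sketch:

```matlab
% Hedged sketch of a concat layer: forward stacks channels, backward
% splits the derivative back into the two branches.
y = cat(3, x1, x2) ;                      % forward
dzdx1 = dzdy(:, :, 1:size(x1,3), :) ;     % backward, branch 1
dzdx2 = dzdy(:, :, size(x1,3)+1:end, :) ; % backward, branch 2
```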
Hi!
I cannot get rid of an error when I was compiling the GPU version as instructed on the home page.
(I first compiled the CPU version, all OK. Then I ran 'make clean' and 'distclean' so that I could make again.) I was running on a MacBook Pro Retina with a freshly installed OS X 10.9.5 and MATLAB 2014b.
What I did was:
make ENABLE_GPU=y ARCH=maci64 MATLABROOT=/Applications/MATLAB_R2014b.app CUDAROOT=/Developer/NVIDIA/CUDA-6.5
then the error was:
...
echo matlab/mex/vl_nnconv.mexmaci64
matlab/mex/vl_nnconv.mexmaci64
MW_NVCC_PATH='/Developer/NVIDIA/CUDA-6.5/bin/nvcc' /Applications/MATLAB_R2014b.app/bin/mex \
-output "matlab/mex/vl_nnconv.mexmaci64" \
"matlab/src/vl_nnconv.cu" matlab/src/bits/im2col.o matlab/src/bits/pooling.o matlab/src/bits/normalize.o matlab/src/bits/subsample.o matlab/src/bits/im2col_gpu.o matlab/src/bits/pooling_gpu.o matlab/src/bits/normalize_gpu.o matlab/src/bits/subsample_gpu.o \
-DENABLE_GPU -f matlab/src/config/mex_CUDA_maci64.xml -largeArrayDims -lmwblas -L/Developer/NVIDIA/CUDA-6.5/lib -lcublas -lcudart \
2> >( sed 's/^\(.*\)(\([0-9][0-9]*\)): \([ew].*\)/\1:\2: \3/g' >&2 )
No supported compiler or SDK was found. For options, visit http://www.mathworks.com/support/compilers/R2014b/maci64.html.
make: *** [matlab/mex/vl_nnconv.mexmaci64] Error 255
rm matlab/src/bits/pooling_gpu.o matlab/src/bits/im2col.o
...
I did install Xcode, the Xcode command-line tools, and gfortran.
Does anyone have any idea that could help? Big thanks in advance.
My computer is set up with MATLAB 2012b, gcc 4.4.6 and CUDA 5.5. When I modified the Makefile as instructed and ran make, an error appeared: /usr/local/MATLAB/R2012b/bin/glnxa64/libcudart.so.5.0 not found. Then I modified mex_CUDA_glnxa64.sh, changing libcudart.so.5.0 to libcudart.so.5.5, and made a soft link at /usr/local/MATLAB/R2012b/bin/glnxa64/libcudart.so.5.5. Then another error said: "matlab/src/vl_nnconv.cu": file not recognized: file format not recognized. My nvcc is set up correctly, and I successfully compiled the Caffe project. Can anybody help me solve this problem? Thanks a lot.
Hi,
I compiled MatConvNet with Xcode 5.1.1, MATLAB R2014a and CUDA 5.5 (or 6.5, I tried both) and it said "Built Successfully". Some mex files appeared:
vl_imreadjpeg.mexmaci64
vl_nnconv.mexmaci64
vl_nnnormalize.mexmaci64
vl_nnpool.mexmaci64.
But when I run your example (after vl_setupnn) I get this error at the MATLAB prompt:
"Attempt to execute SCRIPT vl_nnconv as a function:
/path_to_matconv/matconvnet-master/matlab/vl_nnconv.m
Error in vl_test_nnlayers (line 99)
y = vl_nnconv(x,[],b,'verbose') ;".
I am not sure where the problem can be.
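A quick way to check what MATLAB is actually resolving (vl_nnconv.m contains only help text, so the compiled MEX file must shadow it on the path):

```matlab
% Hedged check: list every vl_nnconv on the path; the mexmaci64 file
% should come first. vl_setupnn is supposed to add matlab/mex already.
which -all vl_nnconv
addpath(fullfile(fileparts(which('vl_setupnn')), 'mex')) ;
```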
Thank you,
Luc
Hi,
I compiled MatConvNet successfully on Linux using the command 'make ENABLE_GPU=y ENABLE_IMREADJPEG=y ARCH=glnxa64 MATLABROOT=/eecs/local/pkg/matlab-r2014a CUDAROOT=/eecs/local/pkg/cuda-6.5.14'. However, when I installed MatConvNet in MATLAB and tried to run vl_test_nnlayers, an error happened; the output is shown below:
test number 1
testing vl_nnsoftamxloss multiple images convolutional
test number 2
testing vl_nnloss multiple images convolutional
testing vl_nnloss multiple images
testing vl_nnloss
test number 3
testing vl_nnsoftmax
test number 4
testing vl_nnconv with fully connected identity filters
Attempt to execute SCRIPT vl_nnconv as a function:
/eecs/research/asr/hengyue/matconvnet-1.0-beta7/matconvnet-1.0-beta7/matlab/vl_nnconv.m
Error in vl_test_nnlayers (line 110)
y = vl_nnconv(x,[],b,'verbose') ;
It seems there is some problem with the script vl_nnconv.m. Does anyone have a hint on how to deal with this? Thank you :-)
What does this mean?
imdb.images.set = [ones(1,numel(y1)) 3*ones(1,numel(y2))] ;
imdb.meta.sets = {'train', 'val', 'test'} ;
I just wanted to build a fully connected layer, and I found that the output was not a 1x1xK matrix; the first two dimensions were not 1. The layer was the same as the fully connected layer in the imagenet example. Why did that happen?
Has anybody made a speed comparison with cuda-convnet2 or Caffe? I am trying to reimplement a paper, and using MatConvNet is really wonderful. However, I have run 5000 epochs on a GTX 550 GPU, and 1 million back-propagations cost me roughly 2 days. The author of the paper ran 8*10^8 back-propagations in roughly 3 days on a GTX 770 GPU with cuda-convnet2. Is there anything wrong? Maybe there is something wrong with my code, but I would still like to see a speed comparison with cuda-convnet2 or Caffe. Thanks!
After resetting the momentum, in lines 105~114 of cnn_train.m,
the layer "ly" copied from "net.layers" is not written back to "net.layers".
While running the mnist or cifar test on Win7 x64 MATLAB R2012a I get:
Error using zeros
Leading inputs must be numeric.
Error in cnn_train (line 36)
net.layers{i}.filtersMomentum = zeros('like',net.layers{i}.filters) ;
Error in cnn_mnist (line 75)
[net,info] = cnn_train(net, imdb, @GetBatch, ...
Error in run (line 57)
evalin('caller', [s ';']);
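A plausible explanation: the zeros(..., 'like', x) syntax was introduced in MATLAB R2013a, so it is unavailable on R2012a. A hedged workaround for that line:

```matlab
% Replacement for zeros('like', ...) on pre-R2013a MATLAB: match the
% size and class of the filters explicitly instead.
f = net.layers{i}.filters ;
net.layers{i}.filtersMomentum = zeros(size(f), class(f)) ;
```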
Hello
I am having trouble with the Makefile for Windows. I am using the latest version of MatConvNet, 1.0-beta7.
Thanks for the help
Hi,
I am doing binary classification of grayscale images of size 256x256, their structure being mostly small objects (cancer mammograms) on an empty background. For the first layer I chose a convolutional layer with 32 filters of size 4x4, but I get an error:
Error using vl_nnconv The number of filter groups does not divide the total number of filters.
Error in vl_simplenn (line 153)
res(i+1).x = vl_nnconv(res(i).x, l.filters, l.biases, 'pad', l.pad, 'stride', l.stride) ;
Maybe vl_nnconv has some implementation details I am missing. Any help will be kindly appreciated. Thanks.
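For what it's worth, vl_nnconv infers the number of filter groups from size(data,3)/size(filters,3), and size(filters,4) must then be divisible by that group count. For single-channel 256x256 inputs the first filter bank should therefore have depth 1; a hedged sketch of the initialization:

```matlab
% Hedged sketch: 32 filters of 4x4 over a 1-channel (grayscale) input,
% i.e. filters of size 4x4x1x32 so no spurious groups are inferred.
net.layers{1}.filters = 0.01 * randn(4, 4, 1, 32, 'single') ;
net.layers{1}.biases  = zeros(1, 32, 'single') ;
```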
Hi,
I followed the installation and compilation instructions on matconvnet to the best of my knowledge. I was then able to run vl_setupnn and vl_test_nnlayers successfully.
However, upon running cnn_imagenet_minimal.m, I get the following error :
Segmentation violation detected at Wed Nov 19 16:28:45 2014
Abnormal termination:
Segmentation violation
Stack Trace (from fault):
[ 0] 0x00007fea84d068ca /export/rel60_shadow/glue.umd.edu/software/matlab/2014a/Linux/bin/glnxa64/../../sys/os/glnxa64/libstdc++.so.6+00489674
[ 1] 0x00007fe9a810bc3e /usr/local/cuda/lib64/libcudart.so.5.0+00101438
[ 2] 0x00007fe9a80f9c0f /usr/local/cuda/lib64/libcudart.so.5.0+00027663 __cudaRegisterFatBinary+00000031
[ 3] 0x00007fe93ce5cef1 /export/rel60_shadow/glue.umd.edu/software/matlab/2014a/Linux/bin/glnxa64/libmwmagma.so+00204529
[ 4] 0x00007fe93cf927d6 /export/rel60_shadow/glue.umd.edu/software/matlab/2014a/Linux/bin/glnxa64/libmwmagma.so+01472470
[ 5] 0x00007fe93ce50ff3 /export/rel60_shadow/glue.umd.edu/software/matlab/2014a/Linux/bin/glnxa64/libmwmagma.so+00155635
This error was detected while a MEX-file was running. If the MEX-file
is not an official MathWorks function, please examine its source code
for errors. Please consult the External Interfaces Guide for information
on debugging MEX-files.
Any suggestions or help?
hi
Thanks a lot for offering this great toolbox. I am trying to run cnn_mnist but I get this error:
learning rate changed (0.000000 --> 0.001000): resetting momentum
training: epoch 01: processing batch 1 of 600 ...Error using .*
Array dimensions must match for binary array op.
Error in vl_nnsoftmax (line 30)
Y = Y .* bsxfun(@minus, dzdY, sum(dzdY .* Y, 3)) ;
Error in vl_simplenn (line 211)
res(i).dzdx = vl_nnsoftmax(res(i).x, res(i+1).dzdx) ;
Error in cnn_train (line 140)
res = vl_simplenn(net, im, one, res, ...
Error in cnn_mnist (line 75)
[net, info] = cnn_train(net, imdb, @GetBatch, ...
I would really appreciate your help in solving this error, because I am trying to use the library to classify my own data.
Hi,
I am trying to use MatConvNet's vl_nnconv to speed up convolutions; however, as I was testing, I noticed differences between the output of MATLAB's built-in convn function and vl_nnconv.
Please consider the following simple code:
A = single(magic(10));
B = single(magic(5));
c1 = convn(A, B, 'valid');
c2 = vl_nnconv(A, B, []);
assert(isequal(c1, c2))
Here are the results of this code on my local machine:
c1
c1 =
12000 13875 13000 14600 17250 17550
16600 15500 12300 14675 15850 18075
17625 14225 14250 14800 17025 18250
16050 13975 15100 18600 18275 16825
16300 15750 18950 19950 16975 15050
18225 19000 21075 19475 15625 12675
c2
c2 =
15950 12775 10400 13350 17200 20150
13300 14400 15650 15225 17950 17675
14225 18925 18250 17050 16125 15550
17750 18525 18050 15200 14225 15025
19450 20000 18750 15800 14875 14850
19475 20000 21175 18225 15575 15275
I am not sure if I am using vl_nnconv correctly, so please let me know if I am not. If I am using the function correctly, do you know why there is a difference between the outputs of these two functions?
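One likely explanation (worth verifying against the manual): vl_nnconv computes correlation, i.e. it does not flip the filter, while convn does. Flipping B should reconcile the two:

```matlab
% Hedged check: rot90(B, 2) flips the kernel in both dimensions, turning
% vl_nnconv's correlation into convn's convolution.
A = single(magic(10)) ;
B = single(magic(5)) ;
c1 = convn(A, B, 'valid') ;
c2 = vl_nnconv(A, rot90(B, 2), []) ;
% c1 and c2 should now agree
```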
Thank you for your time and help!
First of all, this library is really awesome. Kudos to you guys!
One weird issue I just found about numerical reproducibility: the GPU-mode backward pass in certain networks (more specifically, caffe-ref/-alex and vgg-f/-m) produces slightly different gradients (at least in the bottom-most input layers) every time it runs, but is fine in other networks (e.g. vgg-s and verydeep-16/-19). The GPU-mode forward pass, as well as the CPU-mode forward and backward passes, on the other hand, is fine in all networks. Any idea what may cause this? I'm using GTX Titan and Matlab R2014b with matconvnet compiled against CUDA 6.5 here.
Thanks a lot!
I did a binary classification task: I changed opts.errorType from "multiclass" to "binary" in cnn_train.m. Accordingly, when the training process updates the error via the function:
info = updateError(opts, info, net, res, speed)
the code enters the section:
case 'binary'
error = bsxfun(@times, predictions, labels) < 0 ;
info.error(end) = info.error(end) + sum(error(:))/n ;
Rather than evaluating the error as in the multiclass section, the binary error computation counts the number of predictions that differ in sign from the ground-truth labels. I then get a large error number (10,000) in each epoch (even though it decreases a little). I am not sure whether this is a bug or whether these are the expected error values for binary classification.
Hi guys,
I am compiling the GPU version and got the following error:
/home/gg/matlab2013/bin/mex: 2: matlab/src/config/mex_CUDA_maci64.xml: Syntax error: newline unexpected
make: *** [matlab/mex/vl_nnconv.mexa64] Error 2
Anyone can help?
(This issue is similar to the #34 one)
I think I am missing something in the response to #34; could you please elaborate?
Specifically, when I change the last layer from softmaxloss to softmax in e.g. the cnn_cifar example, I get an
"Error using .* Array dimensions must match for binary array op."
in
"Error in vl_nnsoftmax (line 30)
Y = Y .* bsxfun(@minus, dzdY, sum(dzdY .* Y, 3)) ;"
Hi,
How would it be possible to implement a multi task neural network using MatConvNet?
Basically, how can I split the output of the final shared layer and send it to multiple loss layers, and combine the errors for backpropagation accordingly, using MatConvNet?
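A minimal sketch of the idea, with illustrative names (labels1, labels2, lambda1, lambda2 are assumptions, not MatConvNet API): run the shared trunk once, evaluate each loss head on its output, and sum the weighted derivatives flowing back into the shared feature map.

```matlab
% Hedged sketch of a two-head loss on a shared feature map.
shared = res(end).x ;                              % last shared output
dzdx1 = vl_nnsoftmaxloss(shared, labels1, one) ;   % head 1 backward
dzdx2 = vl_nnsoftmaxloss(shared, labels2, one) ;   % head 2 backward
dzdxShared = lambda1 * dzdx1 + lambda2 * dzdx2 ;   % combined gradient
% dzdxShared is then back-propagated through the shared layers.
```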
Hi,
I just downloaded the package and compiled it successfully (w/o GPU).
However, when I run "vl_test_nnlayers" I get the error below (in case 8).
If I delete case 8 and rerun vl_test_nnlayers, it works without error.
The problem in case 8 is that some differences are not below tau in:
assert(all(abs(a(:)-b(:)) < tau)) ;
How can I solve this problem? Or can I go without testing case 8?
Error using vl_testsim (line 27)
Assertion failed.
Error in vl_testder (line 23)
vl_testsim(dzdx, dzdx_, tau);
Error in vl_test_nnlayers (line 381)
vl_testder(@(x) vl_nnnoffset(x,param), x, dzdy, dzdx, 1e-3*range) ;
When I tried to compile the toolbox, all was well until I got the following error (MATLAB R2013a, GeForce GT 610, Ubuntu 12.04, CUDA 6.5):
== Output from make ==
echo matlab/mex/vl_nnconv.mexa64
matlab/mex/vl_nnconv.mexa64
MW_NVCC_PATH='/usr/local/cuda-6.5/bin/nvcc' /usr/local/MATLAB/R2013a/bin/mex
-output "matlab/mex/vl_nnconv.mexa64"
"matlab/src/vl_nnconv.cu" matlab/src/bits/im2col.o matlab/src/bits/pooling.o matlab/src/bits/normalize.o matlab/src/bits/subsample.o matlab/src/bits/im2col_gpu.o matlab/src/bits/pooling_gpu.o matlab/src/bits/normalize_gpu.o matlab/src/bits/subsample_gpu.o
-DENABLE_GPU -f matlab/src/config/mex_CUDA_glnxa64.sh -largeArrayDims -lmwblas -L/usr/local/cuda-6.5/lib64 -lcublas -lcudart
2> >( sed 's/^\(.*\)(\([0-9][0-9]*\)): \([ew].*\)/\1:\2: \3/g' >&2 )
gcc-4.4: No such file or directory
mex: compile of ' "matlab/src/vl_nnconv.cu"' failed.
make: *** [matlab/mex/vl_nnconv.mexa64] Error 1
== End ==
Makefile options:
ENABLE_GPU=1
ENABLE_IMREADJPEG=1
DEBUG=1
ARCH=glnxa64
CUDAROOT=/usr/local/cuda-6.5
MATLABROOT=/usr/local/MATLAB/R2013a
Why is there a reference to gcc-4.4 even though gcc -v shows my system version is 4.8.2? Also, during compilation mex displayed the following warning (not sure how relevant it is):
Warning: You are using gcc version "4.8.2". The version
currently supported with MEX is "4.4.x".
For a list of currently supported compilers see:
http://www.mathworks.com/support/compilers/current_release/
Hi,
Does anyone have an idea of how to optimize the vl_nnloss layer? I've built a network for semantic segmentation (output of size 128x128 with a batch size of 100). But the current version can only process 25 images per second (on a K40c), which is very slow. In fact, vl_nnloss is the most time-consuming part (90+%), since one has to compare the predicted per-pixel map with the ground truth. I am trying to optimize the current version.
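One way to avoid a per-pixel loop, sketched under the assumption that X is HxWxCxN and the ground-truth label map c is HxWx1xN: build the linear indices of the labelled class at every pixel in one vectorized call.

```matlab
% Hedged sketch of a vectorized per-pixel softmax-loss lookup.
[H, W, C, N] = size(X) ;
[i, j, ~, n] = ndgrid(1:H, 1:W, 1, 1:N) ;
idx = sub2ind([H W C N], i, j, reshape(c, H, W, 1, N), n) ;
t = log(sum(exp(X), 3)) - reshape(X(idx), [H W 1 N]) ;  % loss per pixel
```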
hi,
I successfully compiled MatConvNet and installed the NVIDIA driver (340) and the CUDA 5.5 toolkit.
However, when I run cnn_mnist.m with useGPU = true,
I get the following error messages:
col2im: CUDA kernel error: invalid device function
im2col: CUDA kernel error: invalid device function
details:
Matlab R2014a,
GTX 750 ti,
ubuntu 12.04.
CUDA 5.5 toolkit
matconvnet, compiled with GPU enabled
I understand the validation set is used to tune some of the parameters during training; which ones? How can I make use of the validation set in MatConvNet? To me the validation set seems to be the same as the test set.
Thanks!
If my number of possible labels is only 2 (binary classification), MatConvNet gives me an error. I believe this is because I don't have at least 5 possible labels. How can I solve this?
Index exceeds matrix dimensions.
Error in cnn_train>updateError (line 271)
info.topFiveError(end) = info.topFiveError(end) + ...
Error in cnn_train (line 167)
info.train = updateError(opts, info.train, net, res, batch_time) ;
Error in stroke_nn_medicalData_011415 (line 76)
[net, info] = cnn_train(net, medical_record_imdb, @GetBatch, ...
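A hedged guess at a workaround (I have not verified the exact variable names inside cnn_train>updateError): clamp the rank used for the top-five error to the number of classes, so a two-class problem never indexes past the prediction array.

```matlab
% Hedged sketch: use the smaller of 5 and the class count as the rank
% for the "top five" error (numClasses = 2 for binary labels).
k = min(5, numClasses) ;
```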
Hi,
After running cnn_mnist.m, I have a few net-epoch-n.mat models.
To predict one image on MNIST, I followed the example at http://www.vlfeat.org/matconvnet/pretrained.
When calling
net=load('./data/mnist-baseline/net-epoch-5.mat');
res=vl_simplenn(net.net, im); % im is just one mnist image whose size is 28*28
I got the error:
Error using vl_nnsoftmaxloss (line 42) Assertion failed.
Error in vl_simplenn (line 164)
res(i+1).x = vl_nnsoftmaxloss(res(i).x, l.class) ;
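A common fix for this assertion, sketched here: the trained network ends in a softmaxloss layer, which expects labels that are not supplied at prediction time. Replacing (or removing) the last layer before calling vl_simplenn avoids it:

```matlab
% Hedged sketch: swap the loss layer for plain softmax before inference.
net = load('./data/mnist-baseline/net-epoch-5.mat') ;
net = net.net ;
net.layers{end}.type = 'softmax' ;   % or: net.layers(end) = [] ;
res = vl_simplenn(net, im) ;
scores = squeeze(res(end).x) ;       % class scores for the input image
```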
When I interrupt the example code, say cnn_mnist with opts.train.useGpu = false, and change it to GPU mode by setting opts.train.useGpu = true, it gives errors like:
resuming by loading epoch 59
training: epoch 60: processing batch 1 of 600 ...Error using vl_nnconv
DATA and FILTERS are not both CPU or GPU arrays.
Also, if I complete a run of cnn_mnist in GPU mode, it will skip the training when run again in CPU mode.
Any idea how I can switch between CPU and GPU freely and start from the beginning as I wish?
Thanks in advance.
I compiled the code with the GPU option enabled and there were no error messages; however, when I try to run vl_test_nnlayers, it gives me this:
vl_test_nnlayers(true)
test number 1
testing vl_nnsoftamxloss multiple images convolutional
test number 2
testing vl_nnloss multiple images convolutional
testing vl_nnloss multiple images
testing vl_nnloss
test number 3
testing vl_nnsoftmax
test number 4
testing vl_nnconv with fully connected identity filters
Invalid MEX-file
'~/matconvnet-master/matlab/mex/vl_nnconv.mexa64':
libcudart.so.6.5: cannot open shared object file: No such
file or directory
I am using CUDA 6.5 and gcc 4.4.
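A hedged workaround: the MEX file cannot locate libcudart.so.6.5 at load time, so starting MATLAB with the CUDA library directory on the loader path (the exact path below is an assumption) often helps:

```matlab
% From a shell, before launching MATLAB:
%   LD_LIBRARY_PATH=/usr/local/cuda-6.5/lib64:$LD_LIBRARY_PATH matlab
% Inside MATLAB one can at least confirm which library is missing:
!ldd ~/matconvnet-master/matlab/mex/vl_nnconv.mexa64 | grep cudart
```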
Thanks!
Hi!
I'm checking the C++ implementation of the local response normalization here:
https://github.com/vlfeat/matconvnet/blob/master/matlab/src/bits/normalize.cpp
Based on Hinton's paper http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf we have that the response-normalized activity is given by the following formula (notation adapted for readability):
which totally fits with the implementation mentioned above. To get the back-propagation formulas we have that if
then
If I'm not wrong, this maps to the C++ implementation as
which should lead to the formula
however, the implementation is (lines 276-280)
Note the change zat(q) -> zat(t). Is there anything wrong there that I didn't notice?
Thank you!
Urko
On Win7 x64 MATLAB R2012a I get the error:
Warning: Escape sequence 'm' is not valid. See 'help sprintf' for valid
escape sequences.
In cnn_train at 94
In cnn_mnist at 75
In run at 57
modelPath = fullfile(opts.expDir, 'net-epoch-%d.mat') ;
because fullfile reverses the slashes, and then
filename = sprintf(modelPath, epoch);
doesn't work.
Also, it would be more correct to use filesep:
opts.expDir = ['data' filesep 'exp'] ;
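A hedged sketch of a workaround that keeps Windows separators intact: escape the backslashes before sprintf expands the format string.

```matlab
% fullfile yields backslashes on Windows; doubling them makes the string
% safe as an sprintf format ('\\' in a format prints a single '\').
modelPath = fullfile(opts.expDir, 'net-epoch-%d.mat') ;
filename = sprintf(strrep(modelPath, '\', '\\'), epoch) ;
```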
Hi there,
Just want to make sure whether it should be
if (data.geom.height + (padTop+padBottom) < poolHeight || data.geom.width + (padLeft+padRight) < poolWidth) {
...
}
instead of
if (data.geom.height < poolHeight || data.geom.width < poolWidth) {
...
}
at Line 243 in matconvnet/matlab/src/vl_nnpool.cu .
Let me know if I'm correct.
Hi,
I was wondering what changes would I need to make to incorporate an Euclidean loss function in place of the standard log loss function.
Would simply passing the outputs of the softmax layer and calculating the difference suffice?
If I use this as the final layer, do I need to compute the derivative?
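A minimal sketch of what such a layer could look like (vl_nneuclidean is a hypothetical name, not part of MatConvNet); it would replace the softmaxloss layer rather than sit after a softmax, and yes, the backward mode must return the derivative:

```matlab
function y = vl_nneuclidean(x, t, dzdy)
% X and T are HxWxCxN predictions and targets (layout is an assumption).
d = x - t ;
if nargin <= 2
  y = 0.5 * sum(d(:).^2) ;   % forward: summed squared error
else
  y = dzdy * d ;             % backward: derivative w.r.t. x
end
```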
Hi
I can compile MatConvNet successfully on Mac OS X with the instruction:
make ENABLE_GPU=y ARCH=maci64 MATLABROOT=/Applications/MATLAB_R2014a.app CUDAROOT=/Developer/NVIDIA/CUDA-6.5
But I fail at the last step, when I try to compile vl_imreadjpeg with
make ENABLE_IMREADJPEG=y
which gives a fatal error:
fatal error: 'jpeglib.h' file not found
#include <jpeglib.h>
This error happens at line 17 of the file vl_imreadjpeg.c. My platform is OS X 10.9 + Xcode 6.1.1 (Clang) + CUDA 6.5 + Matlab 2014a.
I checked my installed libraries, and Homebrew tells me that "jpeg-8d" is installed.
Thank you for your help!
I got an error message of function vl_nnconv() when I run the vl_test_nnlayers.m or cnn_mnist (or other examples) under the GPU mode on my iMac (OS X 10.9 + CUDA 6.5 + GeForce GTX 775M + Matlab 2014a). The error message is:
Error using vl_nnconv An input is not a numeric array (or GPU support not compiled).
The same code is running correctly under the CPU mode. In addition, I also installed the Matconvnet toolbox on another machine with Ubuntu 14.04 + CUDA 6.5 + GeForce GTX 770 + Matlab 2014a, where the same code runs well under either GPU or CPU mode.
In detail, the problem happens at line 153 of vl_simplenn(), which calls:
res(i+1).x = vl_nnconv(res(i).x, l.filters, l.biases, 'pad', l.pad, 'stride', l.stride) ;
I checked all the inputs to this function: res(i).x, l.filters and l.biases are all of gpuArray type.
Then I tracked through the MatConvNet code. The problem happens in the function void packed_data_init_with_array(PackedData * map, mxArray const* array) in nnhelper.h, which is called by vl_nnconv.cu. This function regards all of the inputs (res(i).x, l.filters and l.biases) as non-gpuArray, which is determined by line 120 of nnhelper.h:
if (!mxIsNumeric(array)) {
mexErrMsgTxt("An input is not a numeric array (or GPU support not compiled).") ;
}
The question is: why does mxIsNumeric() not recognize the input array as a gpuArray?
I tried to search for a solution to this problem online, but found nothing. Could anyone help me with this? Is it caused by an incompatibility between OS X and CUDA? Thanks!
https://github.com/vlfeat/matconvnet/blob/master/matconvnet-manual.pdf is not built, I see the .tex file but I'm lazy
Hi all,
When the code is doing back prop through the conv layer, it reports an error that says "unexpected error during CUDA execution: CUDA_ERROR_LAUNCH_FAILED".
The matrix sizes involved in the back-propagation are as below:
vl_nnconv: mode gpu; backward
vl_nnconv: stride: [1 1], pad: [1 2 3 3], numGroups: 1, has bias: 1, has filters: 1, fully connected: 0
vl_nnconv: data: 63 x 84 x 512 x 1 [10.3 MB]
vl_nnconv: filters: 4 x 7 x 512 x 2 [0.1 MB]
vl_nnconv: biases: 1 x 2 x 1 x 1 [0.0 MB]
vl_nnconv: derOutput: 63 x 84 x 2 x 1 [0.0 MB]
vl_nnconv: derData: 63 x 84 x 512 x 1 [10.3 MB]
vl_nnconv: derFilters: 4 x 7 x 512 x 2 [0.1 MB]
vl_nnconv: derBiases: 1 x 2 x 1 x 1 [0.0 MB]
vl_nnconv: temp: 63 x 84 x 14336 x 1 [289.4 MB]
vl_nnconv: temp (cached): 63 x 84 x 14336 x 1 [289.4 MB]
vl_nnconv: allOnes: 63 x 84 x 1 x 1 [0.0 MB]
vl_nnconv: allOnes (cached): 375 x 500 x 1 x 1 [0.7 MB]
By attaching cuda-gdb with cuda-memcheck, we can detect the memory error:
Illegal access to address (@global)0xb063d7900 detected.
Program received signal CUDA_EXCEPTION_1, Lane Illegal Address.
[Switching focus to CUDA kernel 2, grid 167, block (72,0,0), thread (96,0,0), device 0, sm 7, warp 1, lane 0]
0x00007fff73e39b40 in ??<<<(74,20,1),(256,1,1)>>> ()
By stepping, I located the error line: line 130 in matlab/src/vl_nnconv.cu
cublasSgemm(...
To reproduce the problem, one can run:
data = gpuArray(rand(63, 84, 512, 1, 'single'));
filters = gpuArray(rand(4, 7, 512, 2, 'single'));
biases = gpuArray(rand(1, 2, 1, 1, 'single'));
derOutput = gpuArray(rand(63, 84, 2, 1, 'single'));
[derData derFilters derBiases] = vl_nnconv(data, filters, biases, derOutput, 'pad', [1 2 3 3], 'stride', [1 1]);
I checked the memory usage and I don't think it exceeds the limit. Here is my GPU info:
CUDADevice with properties:
Name: 'Tesla K40c'
Index: 1
ComputeCapability: '3.5'
SupportsDouble: 1
DriverVersion: 6.5000
ToolkitVersion: 5.5000
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 1.2079e+10
FreeMemory: 1.1946e+10
MultiprocessorCount: 15
ClockRateKHz: 745000
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 0
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
I'm not very familiar with cuBLAS. My hunch is that some memory limitation might be violated while executing cublasSgemm.
May I have some suggestions from you guys? Thanks!
Peiyun
Run the script "bugscript.m" once, and everything is fine. Run it a second time, and the whole MATLAB session crashes.
---------------- bugscript.m -------------------------------------------------------------
clear all; close all; clc; % if you comment this line out, it will not crash
% setup
run('../3rd party/matconvnet/matlab/vl_setupnn');
% cnn parameters
h = 2; w = 2;
net = load('../data/cnn/pretrained/imagenet-vgg-f.mat');
featureExtractor = @(im) cnnfeat_wrap( im, net );
% load image
im = repmat( imread('cameraman.tif'), [1 1 3] );
% extract features
feat = featureExtractor(im);
---------------- bugscript.m -------------------------------------------------------------
where the wrapper function is
---------------- cnnfeat_wrap.m -------------------------------------------------------------
function [ feat ] = cnnfeat_wrap( im, net )
lidx = 16;
res = vl_simplenn(net, single(im)-120, [], [], 'disableDropout', true);
feat = res(lidx).x;
end
---------------- cnnfeat_wrap.m -------------------------------------------------------------
Other details:
Macbook Pro 15 retina, late 2013
Mac OSX 10.9.5 (13F34)
XCode Version 5.1.1 (5B1008)
Matlab 2014a (8.3.0.532) maci64
CUDA 5.5
matconvnet, compiled with GPU enabled