
mc-cnn's People

Contributors

jzbontar


mc-cnn's Issues

A question about the network.

The siamese network is two shared-weight sub-networks (one for the left patch and one for the right patch). What puzzles me is the relationship between the positive and negative samples: do they share weights? If they do, what do the gradients of -1 for the positive sample and +1 for the negative sample represent? Looking forward to your reply!
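Not an answer from the author, but for what it's worth: as I understand the paper, the fast architecture is trained with a hinge loss over a (positive, negative) pair of similarity scores, loss = max(0, margin + s_neg - s_pos), and both scores come from the same shared-weight sub-network. A minimal Python sketch (my own function name, not the repository's Torch code) showing where the -1/+1 gradients come from:

```python
def hinge_loss_and_grads(s_pos, s_neg, margin=0.2):
    """Hinge loss over one positive and one negative matching score.

    loss = max(0, margin + s_neg - s_pos); when the margin is violated,
    d(loss)/d(s_pos) = -1 (push the positive score up) and
    d(loss)/d(s_neg) = +1 (push the negative score down).
    """
    loss = max(0.0, margin + s_neg - s_pos)
    if loss > 0.0:
        return loss, -1.0, +1.0
    return loss, 0.0, 0.0  # margin satisfied: no gradient flows

loss, g_pos, g_neg = hinge_loss_and_grads(s_pos=0.6, s_neg=0.5)
```

Since the positive and the negative example pass through the same weights, both gradients are backpropagated into the same parameters; the signs only say which score should move up and which down.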

How can I run this code?

Hi Jure,
Thank you for sharing this code.
However, I don't know how to use it. Could you tell me what software I need to run this code? If you have written instructions and posted them online, could you send me the link? Thank you so much!

How to set the disparity range from both ends, and how to use 'ad' and 'census' to generate a disparity map directly, without the constructed network?

Hi everyone,

Does anyone know how to set the disparity range from both ends? E.g., if we want to search from 100 to 300, i.e. the range (100, 300), to find a suitable disparity for each pixel, how can we pass this range to the code? Currently only 'disp_max' is used, which makes the search range (0, disp_max)!

Also, how can we use just 'ad' and 'census' to generate a disparity map directly, without the neural network, as the author did in his paper when comparing the constructed network's results with conventional dense matching algorithms?

Any help is appreciated!
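As far as I can tell the code only exposes -disp_max, but conceptually a two-ended search is just winner-take-all over a cost volume built for disparities d_min..d_max instead of 0..disp_max. A NumPy sketch of the raw 'ad' and 'census' costs with an explicit range (my own function names, not the repository's implementation):

```python
import numpy as np

def ad_cost(left, right, d):
    """Absolute-difference (AD) cost at disparity d; pixels with no
    match at this disparity get an infinite cost."""
    cost = np.full(left.shape, np.inf)
    w = left.shape[1]
    cost[:, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return cost

def census_transform(img):
    """3x3 census transform: one bit per neighbour, set when the
    neighbour is darker than the centre (borders wrap here, a
    simplification)."""
    code = np.zeros(img.shape, dtype=np.uint8)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            code = (code << 1) | (neighbour < img).astype(np.uint8)
    return code

def census_cost(left, right, d):
    """Census cost: Hamming distance between the 8-bit census codes."""
    cl, cr = census_transform(left), census_transform(right)
    w = left.shape[1]
    xor = cl[:, d:] ^ cr[:, :w - d]
    hamming = np.unpackbits(xor[..., None], axis=-1).sum(axis=-1)
    cost = np.full(left.shape, np.inf)
    cost[:, d:] = hamming
    return cost

def wta_disparity(left, right, d_min, d_max, cost_fn=ad_cost):
    """Winner-take-all over an explicit disparity range [d_min, d_max],
    e.g. (100, 300), rather than the fixed (0, disp_max)."""
    volume = np.stack([cost_fn(left, right, d)
                       for d in range(d_min, d_max + 1)])
    return d_min + np.argmin(volume, axis=0)
```

This is the raw per-pixel cost only; the paper additionally applies cross-based cost aggregation and semiglobal matching on top of it, which this sketch omits.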

luajit: c++ exception

While I'm running the command ./preprocess_kitti.lua, it outputs:
dataset 2012
1
luajit: c++ exception
What's wrong with it?

Tested error rate differs from jzbontar's results

@jzbontar Hello, I ran into some problems when trying to reproduce your work. When I compute the error rate as described in the README, for example by typing $ ./main.lua kitti fast -a test_te -net_fname net/net_kitti_fast_-a_train_all.t7, I get a much higher error rate than the one in the paper, about 11%. I don't know what the problem is; every error rate I measure is higher than the results in the paper. I also tested the networks you trained, with the same outcome. In addition, my test time is shorter than yours, as if I had skipped some steps, but I can't find the problem. Do you have any idea what might be wrong, or is there something I need to take into account when testing? Has anyone else run into the same problem? Looking forward to your reply, thanks!

A question about your work

I'm confused about one point. In your paper, three implementation details keep the running time manageable. You train the network on 9x9 patches, but at test time you compute the output for all pixels in a single forward pass by propagating full-resolution images. I guess there must be some data processing involved in that step; can you give me a detailed description of this part? Hoping for an early reply.
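Not the author, but as I understand it no special data processing is required: the sub-network is fully convolutional, so filters trained on 9x9 patches can be slid over the full-resolution image, producing one output per pixel in a single pass. A toy NumPy sketch (my own, not the repository's Torch code) showing that the full-image pass agrees with per-patch evaluation:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d_valid(x, k):
    """'Valid' 2-D cross-correlation of image x with kernel k."""
    windows = sliding_window_view(x, k.shape)  # (H-kh+1, W-kw+1, kh, kw)
    return np.einsum('ijkl,kl->ij', windows, k)

def net(x, kernels):
    """Four 3x3 conv layers with ReLU: a 9x9 receptive field, so a 9x9
    patch yields a single score and a full image yields a score map."""
    for k in kernels:
        x = np.maximum(conv2d_valid(x, k), 0.0)
    return x

rng = np.random.default_rng(0)
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]
image = rng.standard_normal((20, 30))

full = net(image, kernels)                           # one pass: a (12, 22) map
patch_score = net(image[3:12, 5:14], kernels)[0, 0]  # one 9x9 patch
```

Here full[3, 5] equals patch_score, which is why one full-resolution forward pass replaces evaluating the network once per pixel.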

Error when changing the trained network

Hi, it's amazing work, thank you. I just ran your trained networks, but I got a problem, as in this image: ![2018-01-10 18-49-49](https://user-images.githubusercontent.com/29095675/34769022-870dbee0-f637-11e7-9c6f-9f2152d94749.png). When I use the trained KITTI 2012 fast network, it works well, but when I change to KITTI 2015 slow, it fails with:
cudnnFindConvolutionForwardAlgorithm failed: 2 convDesc=[mode : CUDNN_CROSS_CORRELATION datatype : CUDNN_DATA_FLOAT] hash=-dimA2,112,375,1242 -filtA112,112,3,3 2,112,375,1242 -padA1,1 -convStrideA1,1 CUDNN_DATA_FLOAT
I don't know how to fix this; I'd appreciate your answer.

Results worse than those displayed on the Middlebury website

Hi,
Thanks for your code. After running the slow network to predict on the trainingH datasets, I got much worse results than those displayed on the Middlebury website. I wonder where I went wrong; can you help me out?
The command is as follows:
./main.lua mb slow -a predict -net_fname net/net_mb_slow_-a_train_all.t7 -left im0.png -right im1.png -disp_max 190
And one result for comparison: the error rate (nonocc) for Vintage from running the code is 34%, while the result for the Vintage image on Middlebury (MC-CNN-acrt) is 24.8%.
Am I missing something important? Thanks a lot!

About the GPU memory needed to run the data set

Thank you for sharing this code.
I have a question about the NVIDIA GPU memory needed to process the Middlebury data set. Can a GPU with 11 GB of memory (like the GeForce GTX 1080 Ti) run the Middlebury data set? Will this capacity lead to an out-of-memory exception, or just a slower runtime?

Thanks so much!

Stereo_predict function issue

Hi, I ran into an issue with the stereo_predict function, at this line:
local outlier = torch.CudaTensor():resizeAs(disp[2]):zero()
It happens when I use the KITTI 2015 dataset.

The output is this

bad argument #1 to 'resizeAs' (torch.CudaTensor expected, got torch.CudaLongTensor)

Does anyone know what the problem is? Thank you in advance!

Read 0 blocks instead of 1

I'm trying to run this command:

./main.lua kitti fast -a predict -net_fname net/net_kitti_fast_-a_train_all.t7 -left samples/input/kittiL.png -right samples/input/kittiR.png -disp_max 70

And, I'm getting this error message:

kitti fast -a predict -net_fname net/net_kitti_fast_-a_train_all.t7 -left samples/input/kittiL.png -right samples/input/kittiR.png -disp_max 70
Found Environment variable CUDNN_PATH = /usr/local/cuda/lib64/libcudnn.so.5
luajit: /home/june/torch/install/share/lua/5.1/torch/File.lua:259: read error: read 0 blocks instead of 1 at /home/june/torch/pkg/torch/lib/TH/THDiskFile.c:352
stack traceback:
[C]: in function 'readInt'
/home/june/torch/install/share/lua/5.1/torch/File.lua:259: in function 'readObject'
/home/june/torch/install/share/lua/5.1/torch/File.lua:409: in function 'load'
./main.lua:898: in main chunk
[C]: at 0x00405d50

I already tried reinstalling Torch.

Setup query regarding png++ install (unrelated to this repository)

I'm trying to install png++ with no success. I followed the instructions and ran 'make'.

This is my output from the first make command:

shreyas@shreyas:~/png++-0.2.9$ make
make -C example
make[1]: Entering directory `/home/shreyas/png++-0.2.9/example'
Makefile:45: pixel_generator.dep: No such file or directory
g++ -M -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g pixel_generator.cpp -o- | \
	sed 's/\(pixel_generator\)\.o[ :]*/\1.o pixel_generator.dep : /g' > pixel_generator.dep
make[1]: Leaving directory `/home/shreyas/png++-0.2.9/example'
make[1]: Entering directory `/home/shreyas/png++-0.2.9/example'
g++ -c -o pixel_generator.o pixel_generator.cpp -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g
g++ -o pixel_generator pixel_generator.o -L/usr/local/lib -L/home/shreyas/anaconda2/lib -lpng16 -g
make[1]: Leaving directory `/home/shreyas/png++-0.2.9/example'

And when I try to run make test, this is my output:

shreyas@shreyas:~/png++-0.2.9$ make test
make test -C test
make[1]: Entering directory `/home/shreyas/png++-0.2.9/test'
Makefile:64: convert_color_space.dep: No such file or directory
Makefile:64: generate_gray_packed.dep: No such file or directory
Makefile:64: read_write_gray_packed.dep: No such file or directory
Makefile:64: generate_palette.dep: No such file or directory
Makefile:64: write_gray_16.dep: No such file or directory
Makefile:64: read_write_param.dep: No such file or directory
Makefile:64: dump.dep: No such file or directory
g++ -M -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g dump.cpp -o- | \
	sed 's/\(dump\)\.o[ :]*/\1.o dump.dep : /g' > dump.dep
g++ -M -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g read_write_param.cpp -o- | \
	sed 's/\(read_write_param\)\.o[ :]*/\1.o read_write_param.dep : /g' > read_write_param.dep
g++ -M -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g write_gray_16.cpp -o- | \
	sed 's/\(write_gray_16\)\.o[ :]*/\1.o write_gray_16.dep : /g' > write_gray_16.dep
g++ -M -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g generate_palette.cpp -o- | \
	sed 's/\(generate_palette\)\.o[ :]*/\1.o generate_palette.dep : /g' > generate_palette.dep
g++ -M -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g read_write_gray_packed.cpp -o- | \
	sed 's/\(read_write_gray_packed\)\.o[ :]*/\1.o read_write_gray_packed.dep : /g' > read_write_gray_packed.dep
g++ -M -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g generate_gray_packed.cpp -o- | \
	sed 's/\(generate_gray_packed\)\.o[ :]*/\1.o generate_gray_packed.dep : /g' > generate_gray_packed.dep
g++ -M -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g convert_color_space.cpp -o- | \
	sed 's/\(convert_color_space\)\.o[ :]*/\1.o convert_color_space.dep : /g' > convert_color_space.dep
make[1]: Leaving directory `/home/shreyas/png++-0.2.9/test'
make[1]: Entering directory `/home/shreyas/png++-0.2.9/test'
g++ -c -o convert_color_space.o convert_color_space.cpp -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g
g++ -o convert_color_space convert_color_space.o -L/usr/local/lib -L/home/shreyas/anaconda2/lib -lpng16 -g
g++ -c -o generate_gray_packed.o generate_gray_packed.cpp -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g
g++ -o generate_gray_packed generate_gray_packed.o -L/usr/local/lib -L/home/shreyas/anaconda2/lib -lpng16 -g
g++ -c -o read_write_gray_packed.o read_write_gray_packed.cpp -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g
g++ -o read_write_gray_packed read_write_gray_packed.o -L/usr/local/lib -L/home/shreyas/anaconda2/lib -lpng16 -g
g++ -c -o generate_palette.o generate_palette.cpp -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g
g++ -o generate_palette generate_palette.o -L/usr/local/lib -L/home/shreyas/anaconda2/lib -lpng16 -g
g++ -c -o write_gray_16.o write_gray_16.cpp -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g
g++ -o write_gray_16 write_gray_16.o -L/usr/local/lib -L/home/shreyas/anaconda2/lib -lpng16 -g
g++ -c -o read_write_param.o read_write_param.cpp -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g
g++ -o read_write_param read_write_param.o -L/usr/local/lib -L/home/shreyas/anaconda2/lib -lpng16 -g
g++ -c -o dump.o dump.cpp -Wall -I.. -I/usr/local/include -I/home/shreyas/anaconda2/include/libpng16 -g
g++ -o dump dump.o -L/usr/local/lib -L/home/shreyas/anaconda2/lib -lpng16 -g
./test.sh
PNG++ FAILS TESTS (154 OF 1566 FAILED)
review test.log for clues

make[1]: *** [test] Error 1
make[1]: Leaving directory `/home/shreyas/png++-0.2.9/test'
make: *** [test] Error 2

I'm hoping someone here knows how to fix this issue. I'm a bit lost. Any help is greatly appreciated.

FYI, this is what my test.log file looks like (154 errors):

convert_color_space: expected 8-bit data but found 16-bit; recompile with PNG_READ_16_TO_8_SUPPORTED
(the line above repeats throughout the log; the remaining failures are diffs like these)
out/pngsuite/basi2c16.png.GRAY.16.out cmp/pngsuite/basi2c16.png.GRAY.16.out differ: byte 60, line 3
out/pngsuite/basi2c16.png.GA.16.out cmp/pngsuite/basi2c16.png.GA.16.out differ: byte 53, line 3
out/pngsuite/basi6a16.png.GRAY.16.out cmp/pngsuite/basi6a16.png.GRAY.16.out differ: byte 53, line 3
out/pngsuite/basi6a16.png.GA.16.out cmp/pngsuite/basi6a16.png.GA.16.out differ: byte 53, line 3
out/pngsuite/basn2c16.png.GRAY.16.out cmp/pngsuite/basn2c16.png.GRAY.16.out differ: byte 53, line 3
out/pngsuite/basn2c16.png.GA.16.out cmp/pngsuite/basn2c16.png.GA.16.out differ: byte 52, line 3
out/pngsuite/basn6a16.png.GRAY.16.out cmp/pngsuite/basn6a16.png.GRAY.16.out differ: byte 53, line 3
out/pngsuite/basn6a16.png.GA.16.out cmp/pngsuite/basn6a16.png.GA.16.out differ: byte 53, line 3
out/pngsuite/bgan6a16.png.GRAY.16.out cmp/pngsuite/bgan6a16.png.GRAY.16.out differ: byte 53, line 3
out/pngsuite/bgan6a16.png.GA.16.out cmp/pngsuite/bgan6a16.png.GA.16.out differ: byte 53, line 3
out/pngsuite/bgyn6a16.png.GRAY.16.out cmp/pngsuite/bgyn6a16.png.GRAY.16.out differ: byte 53, line 3
out/pngsuite/bgyn6a16.png.GA.16.out cmp/pngsuite/bgyn6a16.png.GA.16.out differ: byte 53, line 3
out/pngsuite/s09i3p02.png.RGB.8.out cmp/pngsuite/s09i3p02.png.RGB.8.out differ: byte 58, line 3
out/pngsuite/tbbn2c16.png.GRAY.16.out cmp/pngsuite/tbbn2c16.png.GRAY.16.out differ: byte 53, line 3
out/pngsuite/tbbn2c16.png.GA.16.out cmp/pngsuite/tbbn2c16.png.GA.16.out differ: byte 52, line 3
out/pngsuite/tbgn2c16.png.GRAY.16.out cmp/pngsuite/tbgn2c16.png.GRAY.16.out differ: byte 53, line 3
out/pngsuite/tbgn2c16.png.GA.16.out cmp/pngsuite/tbgn2c16.png.GA.16.out differ: byte 52, line 3

Thank you very much in advance!

Regards,
Shreyas

Questions about net_mb_slow_-a_train_all.t7

I used net_mb_slow_-a_train_all.t7 instead of net_kitti_fast_-a_train_all.t7, but the disparity map it produces is poor; I can't reach the expected quality. Why? Can anyone help? Thanks!

Cannot reproduce Middlebury evaluation results?

I'm using the trained networks. It seems that running

"./main.lua mb fast -a predict -net_fname net/net_mb_fast_-a_train_all.t7 -left trainingH/Adirondack/im0.png -right trainingH/Adirondack/im1.png -disp_max 145"

produces a much worse result than the one on the official Middlebury table, while running

"./main.lua mb slow -a predict -net_fname net/net_mb_slow_-a_train_all.t7 -left trainingH/Adirondack/im0.png -right trainingH/Adirondack/im1.png -disp_max 145"

does not produce a reasonable result at all.
Is it because the trained networks are not the final version, or something else?

Thx

min() function with vol:cuda()

After updating cuDNN (v4 to v6), I found that the result of min() changed a lot along the x range (parameter: -sm_terminate cnn).

Running main.lua for training gives reshape function exception

I am able to run the test set correctly, but training gives me the error below. The images are in the correct folder. What might be the problem?

./main.lua kitti slow -a train_tr
kitti slow -a train_tr
luajit: ./main.lua:378: inconsistent tensor size, expected tensor [389 x 1 x 350 x 1242] and src [] to have the same number of elements, but got 169098300 and 0 elements respectively at /home/rohan140290/torch/pkg/torch/lib/TH/generic/THTensorCopy.c:86
stack traceback:
[C]: in function 'reshape'
./main.lua:378: in function 'fromfile'
./main.lua:428: in main chunk
[C]: at 0x00405d50

It seems like Torch is not able to parse the file correctly; this line seems to have issues:
x = torch.FloatTensor(torch.FloatStorage(fname))

I instead tried this (allocating the storage from the element count and reading the file in binary mode; note the explicit initialization of s):

local s = 1
for i = 1, #dim do
   s = s * dim[i]
end

local x = torch.FloatTensor(torch.FloatStorage(s))
torch.DiskFile(fname, 'r'):binary():readFloat(x:storage())

This works for me.

About the training time for accurate network on middlebury

Hi, @jzbontar ,
I wonder how long it took you to train the accurate network on Middlebury for 14 epochs, because training the fast network on KITTI is much faster. Training the accurate network on the Middlebury dataset takes me about 2 weeks on a single Titan X with 12 GB of memory; I wonder if I did something wrong?

Trying to resize storage that is not resizable

Hi everyone,

I'm trying to run a script from https://github.com/jzbontar/mc-cnn (to convert left.bin and right.bin into viewable .png files), which produces the following error:

$ ...luajit samples/bin2png.lua
Writing left.png
luajit: ...ocal/torch_update/install/share/lua/5.1/torch/Tensor.lua:462: Trying to resize storage that is not resizable at /usr/local/torch_update/pkg/torch/lib/TH/generic/THStorage.c:183
stack traceback:
[C]: in function 'set'
...ocal/torch_update/install/share/lua/5.1/torch/Tensor.lua:462: in function 'view'
samples/bin2png.lua:9: in main chunk
[C]: at 0x00406670

Does anyone know what is wrong? I found someone who said:

"This happens because of this commit: torch/torch7#389. I think that the author of the mc-cnn code should update his normalize script to take into account this change in Torch."

If that is the reason, can I update the normalize script (Normalize2.lua?) myself to make the code work? How?

I am new to machine learning and GPU programming! I use Ubuntu 16.04 and a server (on the intranet) with a GeForce GTX TITAN installed. I appreciate any hints from you! ^_^

error loading module 'libcv' from file './libcv.so':

Hi Jure,

Thanks so much for sharing this code. I am trying to run it following your instructions. However, I get an error when I run the command

./main.lua kitti fast -a predict -net_fname net/net_kitti_fast_-a_train_all.t7 -left samples/input/kittiL.png -right samples/input/kittiR.png -disp_max 70

error loading module 'libcv' from file './libcv.so':
libcudart.so.7.5: cannot open shared object file: No such file or directory
stack traceback:
[C]: in function 'error'
/home/fyl/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
/home/fyl/MC-CNN/main.lua:5: in main chunk
[C]: in function 'dofile'
.../fyl/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00406050

My CUDA version is 8.0, so I ran the command ldd libcv.so.
Here is the output:
libcudart.so.7.5 => not found
libnppc.so.7.5 => not found
libnppi.so.7.5 => not found
libnpps.so.7.5 => not found
libcufft.so.7.5 => not found
libcudart.so.7.5 => not found
libnppc.so.7.5 => not found
libnppi.so.7.5 => not found
libnpps.so.7.5 => not found
libcufft.so.7.5 => not found

Do you have any suggestions about it? Thanks!

Yiliu

Question about training on combined datasets

Hi everyone,
I want to train on several datasets, say KITTI 2012 & KITTI 2015, or KITTI 2012 & Middlebury.

The first idea was to combine the datasets in data.kitti/unzip/training/... and data.kitti2015/unzip/training/..., but I'm not quite sure this is right because of the different binary files like metadata.bin and dispnoc.bin.

Then I saw the option "-at" in main.lua, with an if-case opt.at == 1 then..., where the tensors of KITTI 2012 & 2015 are concatenated.

So the questions are:

  • Is the network trained on KITTI 2012/2015 when I simply add the "-at 1" option to the command?
  • Is this also possible for the Middlebury dataset?

Many thanks in advance!

Regards,
Marc

questions about using kitti slow models!

Hi, when running your code with the KITTI fast model everything is OK. However, I encounter the following problem when running the code with the KITTI slow model:

yang@yang-All-Series:~/mc-cnn$ ./main.lua kitti slow -a predict -net_fname net/net_kitti_slow_-a_train_all.t7 -left samples/pics/out_00_01.png -right samples/pics/out_00_00.png -disp_max 7
kitti slow -a predict -net_fname net/net_kitti_slow_-a_train_all.t7 -left samples/pics/out_00_01.png -right samples/pics/out_00_00.png -disp_max 7 
THCudaCheck FAIL file=/home/yang/torch/extra/cutorch/lib/THC/generic/THCStorage.c line=147 error=77 : an illegal memory access was encountered
luajit: cuda runtime error (77) : an illegal memory access was encountered at /home/yang/torch/extra/cutorch/lib/THC/generic/THCStorage.c:147

My GPU is a Titan X, so I think it should run fine.
Hoping for your reply.

Question about disparity image.

Hello.
In KITTI 2012, data_stereo_flow\training\disp_noc\000000_10.png, I see that the maximum raw pixel value is 17873. Is that the real disparity, or does it need some transformation?
Thanks a lot.
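For anyone hitting the same question: the KITTI development kit documents the disparity maps as 16-bit PNGs where the true disparity is the raw value divided by 256, and a raw value of 0 marks a pixel with no ground truth. So 17873 corresponds to roughly 69.8 px. A minimal sketch of the decode, assuming the raw uint16 values have already been read out of the PNG (decode_kitti_disparity is a hypothetical helper, not part of the repo):

```python
def decode_kitti_disparity(raw):
    """Convert raw uint16 values from a KITTI disp_noc PNG into
    disparities in pixels. 0 marks invalid pixels (returned as None)."""
    return [None if v == 0 else v / 256.0 for v in raw]
```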

a puzzle about my modified network

Hi, I recently modified the fast network, replacing the first two layers with a single 5*5 convolutional kernel. During training the loss descended just like the original fast network's, and the values were almost the same, but in the testing phase the error was quite high and I think the disparities were totally wrong. I really cannot understand this result. Is the 5*5 kernel too large for the 9*9 patch, are the parameters unsuitable for the network, or is something wrong with my training? I hope you can give me some advice. Looking forward to your reply.

Accurate network about nn.Linear and nn.SpatialConvolution1_fw

Hi, @jzbontar
I notice that you use nn.Linear in the training network (slow) and replace it with nn.SpatialConvolution1_fw (which I guess is the same as a 1 x 1 conv) at test time. Why not just use cudnn.SpatialConvolution with a 1 x 1 kernel in both training and testing? Would it affect the performance, or just accelerate training?
Thanks a lot!
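For intuition on why the swap is possible at all: a fully-connected layer applied independently at every spatial position is exactly a 1 x 1 convolution, so the two are interchangeable in accuracy and differ only in how the computation is laid out. The sketch below (plain Python rather than Torch, with hypothetical helper names) shows the equivalence on a tiny feature map:

```python
def linear_at(W, b, x):
    """Fully-connected layer: y[i] = sum_j W[i][j] * x[j] + b[i]."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def conv1x1(W, b, fmap):
    """A 1x1 convolution applies the same linear map at every pixel.
    fmap is indexed [row][col][channel]."""
    return [[linear_at(W, b, px) for px in row] for row in fmap]
```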

problem about function sgm2 in adcensus.cu

Near line 579, the code reads as follows:


for (int i = 256; i > 0; i /= 2) {
    if (d < i && d + i < size3 && output_min[d + i] < output_min[d]) {
        output_min[d] = output_min[d + i];
    }
    __syncthreads();
}

Should the initial value of i be the maximum disparity here?
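For what it's worth, the reduction looks correct for any size3 up to 512: the d + i < size3 guard simply turns the early iterations into no-ops, so starting i at 256 acts as a fixed upper bound rather than something that must equal the max disparity. Here is a serial Python model of the kernel's min-reduction; within one step, readers touch indices >= i and writers touch indices < i, so a plain loop reproduces the barrier-synchronized parallel result:

```python
def block_min(vals):
    """Serial model of the sgm2 tree reduction: at each step, lanes
    d < i fold vals[d + i] into vals[d]; out-of-range folds are
    skipped by the guard, exactly as in the CUDA code."""
    a = list(vals)
    i = 256
    while i > 0:
        for d in range(min(i, len(a))):
            if d + i < len(a) and a[d + i] < a[d]:
                a[d] = a[d + i]
        i //= 2
    return a[0]
```

If size3 could ever exceed 512, the starting value of i would indeed need to grow to a power of two of at least size3 / 2.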

Error when loading module

Hi Jure,

Thanks so much for sharing this code. I am trying to run the code following your instructions. However, I got an error when I ran the command

./main.lua kitti fast -a predict -net_fname net/net_kitti_fast_-a_train_all.t7 -left samples/input/kittiL.png -right samples/input/kittiR.png -disp_max 70

The error is

luajit: error loading module 'libadcensus' from file './libadcensus.so':
./libadcensus.so: undefined symbol: png_set_longjmp_fn
stack traceback:
[C]: at 0x0047aff0
[C]: in function 'require'
./main.lua:327: in main chunk
[C]: at 0x00406670

Do you have any suggestions about it? Thanks!

Yao

StereoJoin_forward2() missing

Hi Jure,

I was wondering about the fast Middlebury-trained network: it seems to use the nn.StereoJoin layer, which calls adcensus.StereoJoin_forward2() in the forward pass, but there doesn't seem to be a definition of that function in the .cu file.

An error about cuDNN:CUDNN_STATUS_ALLOC_FAILED

Hi, Jure,
I am Hoo.
Thank you for sharing this code.
I am new to CNNs and stereo vision, and I have read your paper on MC-CNN. However, when I run your code on my computer, an error occurs, as shown in the figure:
screenshot from 2016-11-08 17-55-09
My computer's GPU is an NVIDIA GTX 650 with 1 GB of memory; the CUDA version is 8.0 and the cuDNN version is 5.0. If I can get your code to run, I will buy a better GPU to learn CNNs.
I searched for this error on Google; the GPU memory is possibly too small to run the code. How can I change your code to decrease its GPU memory usage, or do you have other suggestions?
Thank you very much.
Please forgive my bad English.
Hoo

about adcensus.cu

I am writing a post-processing function named cbca_post in adcensus.cu, but it always gives the error "attempt to call field 'cbca_post' (a nil value)". Even when I just rename the existing cbca to cbca_pre, without modifying its contents, the same problem appears. What should I do if I want to add code to adcensus.cu? Really looking forward to your reply~

SpatialLogSoftMax & What about Multi-GPU use?

I was compiling the code and found that SpatialLogSoftMax has since been added to Torch itself. This configuration doesn't build now because the class names clash. I looked at the code and got it to run, but just a heads-up about this issue. I am also curious: does your code support training on multiple GPUs?

what is the preprocessing?

Can I ask you about your work?

I don't understand the preprocessing for the KITTI dataset.

There are several outputs, like x0.bin, x1.bin, metadata, nnz_tr, nnz_te...

It looks like x0.bin is the image pixel data and metadata is the image information, but I can't understand what tr, te, nnz_tr and nnz_te are.

nnz_tr = torch.FloatTensor(23e6, 4)?

What does 23e6 mean?
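My reading of preprocess.lua (hedged — please verify against the script itself): tr and te are the training/testing image splits, and nnz_tr/nnz_te collect the non-zero ("nnz") ground-truth disparity pixels of each split, one row per labeled pixel as (image index, row, column, disparity). 23e6 is just scientific notation for 23 million: a generous preallocation that is trimmed afterwards to the rows actually filled. A toy Python model of that pattern (collect_nnz is a hypothetical name):

```python
PREALLOC = int(23e6)  # 23e6 == 23,000,000 rows reserved up front

def collect_nnz(disparity_maps):
    """Gather (image, row, col, disp) for every pixel with ground
    truth (disp > 0), mimicking how nnz_tr/nnz_te are assembled.
    The Torch code fills a preallocated (23e6, 4) tensor instead
    and trims it to the number of rows actually used."""
    rows = []
    for img, dmap in enumerate(disparity_maps):
        for r, line in enumerate(dmap):
            for c, d in enumerate(line):
                if d > 0:
                    rows.append((img, r, c, d))
    return rows
```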

got problem when running './main.lua kitti fast -a predict -net_fname net/net_kitti_fast_-a_train_all.t7 -left samples/input/kittiL.png -right samples/input/kittiR.png -disp_max 70'

Hello all,

I am trying mc-cnn, but I ran into a problem when running './main.lua kitti fast -a predict -net_fname net/net_kitti_fast_-a_train_all.t7 -left samples/input/kittiL.png -right samples/input/kittiR.png -disp_max 70' to use the pretrained network. The problem is (copied from the terminal):
luajit: ./main.lua:324: module 'cunn' not found:
no field package.preload['cunn']
no file './cunn.lua'
no file '/usr/share/luajit-2.1.0-beta1/cunn.lua'
no file '/usr/local/share/lua/5.1/cunn.lua'
no file '/usr/local/share/lua/5.1/cunn/init.lua'
no file '/usr/share/lua/5.1/cunn.lua'
no file '/usr/share/lua/5.1/cunn/init.lua'
no file './cunn.so'
no file '/usr/local/lib/lua/5.1/cunn.so'
no file '/usr/lib/lua/5.1/cunn.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'require'
./main.lua:324: in main chunk
[C]: at 0x00406670
Does anyone know what is wrong? I am new to machine learning and GPU programming! I use Ubuntu 16.04 and a server (on the intranet) with a GeForce GTX TITAN installed.

Invalid device function

Hi, I use Torch7, OpenCV 3, png++ (0.2.9) and libpng 1.6 on Ubuntu 16.04.

I get following error when I run
./main.lua kitti fast -a predict -net_fname net/net_kitti_fast_-a_train_all.t7 -left samples/input/kittiL.png -right samples/input/kittiR.png -disp_max 70

kitti fast -a predict -net_fname net/net_kitti_fast_-a_train_all.t7 -left samples/input/kittiL.png -right samples/input/kittiR.png -disp_max 70
luajit: /home/ubuntu/mc-cnn/Normalize2.lua:11: invalid device function
stack traceback:
[C]: in function 'Normalize_forward'
/home/ubuntu/mc-cnn/Normalize2.lua:11: in function 'updateOutput'
./main.lua:911: in function 'forward_free'
./main.lua:945: in function 'stereo_predict'
./main.lua:1101: in main chunk
[C]: at 0x00405d50

What could be the issue?

Problem loading model - 'Failed to load function from bytecode:' Error while loading model file

Hi,

So I'm currently trying to load a network model via Torch on an NVIDIA TX1. When I try to load the model

net = torch.load('modelfile.t7','ascii')

I get the following error:

bytecode-error

The model loads fine on my Ubuntu 14.04 desktop, so I tried loading the same model there, converting it to binary, and then loading the converted file

net = torch.load('modelfile.bin')

But I still get a similar error:

binary_error_model

I've noticed that a few people have had the same error in the past, but most seem to have gotten past it by using an 'ascii' version of the model, since that is platform independent (?). I have had no luck with that. The other individuals who faced this problem were on 32-bit systems, but my NVIDIA TX1 is running Ubuntu 16.04 (64-bit).

For anyone willing to recreate these results:

I installed JetPack (JetPack-L4T-2.3.1-linux-x64.run) and verified that my installations of CUDA 8.0 and OpenCV are functional.

For Torch, I used dusty-nv's installation script
https://github.com/dusty-nv/jetson-reinforcement
The installation script in particular is https://github.com/dusty-nv/jetson-reinforcement/blob/master/CMakePreBuild.sh
It all looks pretty straightforward.

The model file in specific is https://s3.amazonaws.com/mc-cnn/net_kitti_fast_-a_train_all.t7

Any tips on how to fix this problem are gladly appreciated. If anyone has ideas on how I can tweak the model on my desktop machine to make it work here, I'd love to hear them.

Thanks in advance,

Shreyas

a strange error while running the code

Hello dear Jure,

I am new to CNNs. After reading your paper, I am trying to run the code following the instructions on your web page. However, I got an error when I ran the command "./main.lua kitti fast -a predict -net_fname net/net_kitti_fast_-a_train_all.t7 -left samples/input/kittiL.png -right samples/input/kittiR.png -disp_max 70".
The output is:
$ ./main.lua kitti fast -a predict -net_fname net/net_kitti_fast_-a_train_all.t7 -left samples/input/kittiL.png -right samples/input/kittiR.png -disp_max 70 -sm_terminate cnn
kitti fast -a predict -net_fname net/net_kitti_fast_-a_train_all.t7 -left samples/input/kittiL.png -right samples/input/kittiR.png -disp_max 70 -sm_terminate cnn
Writing right.bin, 1 x 70 x 370 x 1226
Writing left.bin, 1 x 70 x 370 x 1226
luajit: ./main.lua:1052: bad argument #1 to 'resizeAs' (torch.CudaTensor expected, got torch.CudaLongTensor)
stack traceback:
[C]: in function 'resizeAs'
./main.lua:1052: in function 'stereo_predict'
./main.lua:1098: in main chunk
[C]: at 0x004064f0

As the output shows, it can write right.bin and left.bin, but it couldn't write disp.bin...
Looking forward to your reply soon.

Thank you very much.

Question on how to set dataset size for training

Dear Mr. Zbontar,

after going through main.lua, I can't seem to find the argument to pass in order to set the size of the dataset on which the network is trained. I do know that with "./main.lua kitti fast -a train_tr" I train the network on a subset of the KITTI dataset, but I would like to run a test just as in Table 9, p. 26 of your paper, with, say, just 20% of the entire image set.
Any help is much appreciated.

Regards,
Raphael.

update torch problem occured

After updating my Torch installation with ./update.sh, I ran the command below and got the following error:
th ./main.lua mb slow -a predict -net_fname net/net_mb_slow_-a_train_all.t7 -left ../data/md/2005_2006/Wood2/view1.png -right ../data/md/2005_2006/Wood2/view5.png -disp_max 70

cudnnFindConvolutionForwardAlgorithm failed: 4 convDesc=[mode : CUDNN_CROSS_CORRELATION datatype : CUDNN_DATA_FLOAT] hash=-dimA2,1,1110,1306 -filtA112,1,3,3 2,112,1110,1306 -padA1,1 -convStrideA1,1 CUDNN_DATA_FLOAT
/home/fzehua/torch/install/bin/luajit: /home/fzehua/torch/install/share/lua/5.1/cudnn/find.lua:483: cudnnFindConvolutionForwardAlgorithm failed, sizes: convDesc=[mode : CUDNN_CROSS_CORRELATION datatype : CUDNN_DATA_FLOAT] hash=-dimA2,1,1110,1306 -filtA112,1,3,3 2,112,1110,1306 -padA1,1 -convStrideA1,1 CUDNN_DATA_FLOAT
stack traceback:
[C]: in function 'error'
/home/fzehua/torch/install/share/lua/5.1/cudnn/find.lua:483: in function 'forwardAlgorithm'
...torch/install/share/lua/5.1/cudnn/SpatialConvolution.lua:190: in function 'updateOutput'
./main.lua:911: in function 'forward_free'
./main.lua:962: in function 'stereo_predict'
./main.lua:1101: in main chunk
[C]: in function 'dofile'
...ehua/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00406670

Compiling on Jetson TX1

Hello,

I'm unsure as to the cause of this issue (who the culprit is), but when building on the NVIDIA Jetson TX1 I get the following error:

ubuntu@tegra-ubuntu:/var/git/mc-cnn$ make
nvcc -arch sm_35 -O3 -DNDEBUG --compiler-options '-fPIC' -o libadcensus.so --shared adcensus.cu -I/home/ubuntu/torch/install/include/THC -I/home/ubuntu/torch/install/include/TH -I/home/ubuntu/torch/install/include -I/usr/include/lua5.2 -L/home/ubuntu/torch/install/lib -Xlinker -rpath,/home/ubuntu/torch/install/lib -lluaT -lTHC -lTH -lpng
SpatialLogSoftMax.cu(20): error: identifier "THInf" is undefined

1 error detected in the compilation of "/tmp/tmpxft_00000445_00000000-9_adcensus.cpp1.ii".
make: *** [libadcensus.so] Error 2

After many rounds of checking and recompiling my Torch installation, I finally caved and patched the offender as follows:

diff --git a/SpatialLogSoftMax.cu b/SpatialLogSoftMax.cu
index 5ea35d8..f430cff 100644
--- a/SpatialLogSoftMax.cu
+++ b/SpatialLogSoftMax.cu
@@ -16,7 +16,7 @@ __global__ void cunn_SpatialLogSoftMax_updateOutput_kernel
   if (idx < data_size) {
     int next_idx = idx + feature_size;
     float logsum = 0.0;
-    float max = -THInf;
+    float max = -FLT_MAX;
     // max
     for(int i = idx; i < next_idx; i += spatial_size) {
       if (max < input[i]) max = input[i];

This compiles.
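As a sanity check on the patch: -FLT_MAX (the most negative finite float) is a safe seed for a running maximum, because any finite input compares greater than it, which is all the old THInf initialization was providing. The kernel's max scan, modeled in Python with the analogous seed -sys.float_info.max:

```python
import sys

def scan_max(values):
    """Running-max scan seeded like the patched kernel: start from
    the most negative finite float so any real input replaces it."""
    m = -sys.float_info.max
    for v in values:
        if m < v:
            m = v
    return m
```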

a small question regarding adcensus.cu

I was looking for the documentation of adcensus.cu but cannot find any.
Later, I noticed your adcensus repo.

So is adcensus.cu part of your adcensus repo?
