ddtm / caffe
This project is forked from bvlc/caffe.
Caffe: a fast open framework for deep learning.
Home Page: http://caffe.berkeleyvision.org/
License: Other
I cloned the repository, and in the caffe folder I ran:
mkdir build
cd build
cmake ..
make
The following errors occurred:
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(81): error: argument of type "cudnnAddMode_t" is incompatible with parameter of type "const void *"
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Forward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=float]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(81): error: argument of type "const void *" is incompatible with parameter of type "cudnnTensorDescriptor_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Forward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=float]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(81): error: argument of type "const void *" is incompatible with parameter of type "cudnnTensorDescriptor_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Forward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=float]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(81): error: too many arguments in function call
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Forward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=float]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(127): error: argument of type "const void *" is incompatible with parameter of type "cudnnConvolutionBwdFilterAlgo_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Backward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=float]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(127): error: argument of type "float *" is incompatible with parameter of type "size_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Backward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=float]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(127): error: too few arguments in function call
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Backward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=float]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(142): error: argument of type "const void *" is incompatible with parameter of type "cudnnConvolutionBwdDataAlgo_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Backward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=float]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(142): error: argument of type "float *" is incompatible with parameter of type "size_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Backward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=float]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(142): error: too few arguments in function call
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Backward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=float]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(81): error: argument of type "cudnnAddMode_t" is incompatible with parameter of type "const void *"
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Forward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=double]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(81): error: argument of type "const void *" is incompatible with parameter of type "cudnnTensorDescriptor_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Forward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=double]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(81): error: argument of type "const void *" is incompatible with parameter of type "cudnnTensorDescriptor_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Forward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=double]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(81): error: too many arguments in function call
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Forward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=double]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(127): error: argument of type "const void *" is incompatible with parameter of type "cudnnConvolutionBwdFilterAlgo_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Backward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=double]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(127): error: argument of type "double *" is incompatible with parameter of type "size_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Backward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=double]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(127): error: too few arguments in function call
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Backward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=double]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(142): error: argument of type "const void *" is incompatible with parameter of type "cudnnConvolutionBwdDataAlgo_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Backward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=double]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(142): error: argument of type "double *" is incompatible with parameter of type "size_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Backward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=double]"
(159): here
/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(142): error: too few arguments in function call
detected during instantiation of "void caffe::CuDNNConvolutionLayer::Backward_gpu(const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vectorcaffe::Blob<Dtype *, std::allocatorcaffe::Blob<Dtype *>> &) [with Dtype=double]"
(159): here
20 errors detected in the compilation of "/tmp/tmpxft_0000b2a8_00000000-7_cudnn_conv_layer.cpp1.ii".
CMake Error at cuda_compile_generated_cudnn_conv_layer.cu.o.cmake:264 (message):
Error generating file
/home/myan/Documents/DANN/caffe/build/src/caffe/CMakeFiles/cuda_compile.dir/layers/./cuda_compile_generated_cudnn_conv_layer.cu.o
make[2]: *** [src/caffe/CMakeFiles/cuda_compile.dir/layers/./cuda_compile_generated_cudnn_conv_layer.cu.o] Error 1
make[1]: *** [src/caffe/CMakeFiles/caffe.dir/all] Error 2
make: *** [all] Error 2
However, I can build the latest official Caffe without errors.
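For what it's worth, these signature mismatches (e.g. cudnnAddMode_t vs. const void *) are characteristic of compiling this 2015-era fork against a newer cuDNN than it was written for; the cuDNN API changed across major versions. A minimal sketch to report which cuDNN header the build would see (the header path is an assumption; some newer cuDNN releases keep the version defines in cudnn_version.h instead):

```python
import os
import re

def cudnn_version(header="/usr/include/cudnn.h"):
    """Parse CUDNN_MAJOR/MINOR/PATCHLEVEL from a cuDNN header, or None."""
    if not os.path.exists(header):
        return None
    text = open(header).read()
    matches = {k: re.search(r"#define CUDNN_%s\s+(\d+)" % k, text)
               for k in ("MAJOR", "MINOR", "PATCHLEVEL")}
    if not all(matches.values()):
        return None
    return tuple(int(m.group(1)) for m in matches.values())

print(cudnn_version() or "cudnn.h not found; pass the header path explicitly")
```

If the version reported is newer than the one this fork targets, pointing the build at an older cuDNN (or disabling cuDNN) should resolve the compile errors.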
Where is the following script?
./examples/adaptation/experiments/amazon_to_webcam/scripts/train.sh
Hi, thanks for your nice work!
Can I ask how many epochs you used in the MNIST->MNIST-M experiments?
Also, did you apply Xavier initialization to the weights of the whole network?
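For reference, Xavier initialization in Caffe is specified per layer via a weight_filler block (standard Caffe prototxt syntax; the layer name below is hypothetical, and whether this repo applies it to every layer is exactly the question above):

```protobuf
layer {
  name: "fc_example"        # hypothetical layer name
  type: "InnerProduct"
  inner_product_param {
    num_output: 100
    weight_filler {
      type: "xavier"        # Xavier/Glorot initialization
    }
  }
}
```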
Can anybody confirm what accuracy they are getting using this code in all three experiments?
I followed the instructions in README.md to run the example.
I downloaded the example data from https://drive.google.com/file/d/0B4IapRTv9pJ1WGZVd1VDMmhwdlE/view?pli=1 and set CPU_ONLY := 1 in Makefile.config.
After executing ./examples/adaptation/scripts/prepare_experiments.sh <path>, I removed the GPU setting from ./examples/adaptation/experiments/amazon_to_webcam/scripts/train.sh and switched to solver_mode: CPU in examples/adaptation/experiments/amazon_to_webcam/protos/solver.prototxt.
When executing the training script, I receive an error message and the program aborts:
F0804 16:37:13.917794 21538 db.hpp:116] Check failed: mdb_status == 0 (2 vs. 0) No such file or directory
*** Check failure stack trace: ***
@ 0x7f2471d52778 (unknown)
@ 0x7f2471d526b2 (unknown)
@ 0x7f2471d520b4 (unknown)
@ 0x7f2471d55055 (unknown)
@ 0x7f24720b735e caffe::db::LMDB::Open()
@ 0x7f24720eed02 caffe::DataLayer<>::DataLayerSetUp()
@ 0x7f24720d4402 caffe::BaseDataLayer<>::LayerSetUp()
@ 0x7f24720d44f9 caffe::BasePrefetchingDataLayer<>::LayerSetUp()
@ 0x7f247216b240 caffe::Net<>::Init()
@ 0x7f247216c2f5 caffe::Net<>::Net()
@ 0x7f247208f82f caffe::Solver<>::InitTrainNet()
@ 0x7f24720909b0 caffe::Solver<>::Init()
@ 0x7f2472090b86 caffe::Solver<>::Solver()
@ 0x40d9b0 caffe::GetSolver<>()
@ 0x406e12 train()
@ 0x4047b7 main
@ 0x7f246e58cb45 (unknown)
@ 0x404dd0 (unknown)
@ (nil) (unknown)
Aborted
(running on Debian Jessie)
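The mdb_status == 0 (2 vs. 0) failure in caffe::db::LMDB::Open() almost always means the data layer's LMDB source path does not exist on disk. The field to check in train_val.prototxt looks like this (standard Caffe data-layer syntax; the layer name and path below are placeholders, not the fork's actual values):

```protobuf
layer {
  name: "source_data"       # placeholder name
  type: "Data"
  data_param {
    source: "examples/adaptation/datasets/.../train_lmdb"  # this directory must exist
    backend: LMDB
  }
}
```

If prepare_experiments.sh did not populate the datasets directory, the relative source paths in the prototxt will not resolve from the directory the script is run in.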
Hi,
I am unable to access the Office dataset. Can you please provide another link?
Thanks
Ashish
I think this is very good work. But when I tried to reproduce the results, I ran into some confusion: my results are very different from those in the paper.
The results I reproduced are listed below:
The paper reports results as:
The reproduced amazon_to_webcam result is higher than the one in the paper, while the other two go in the opposite direction.
Is the parameter setting in the script different from the one used in the paper? If so, how can I change it?
Can you provide the t-SNE code that you used for your paper, please?
Where can I get the trained model on the Office dataset (A->W)?
Hi,
Since there are 2 domain labels, shouldn't the layer connected to SigmoidCrossEntropyLoss have num_output: 2 instead of 1? Please see here: https://github.com/ddtm/caffe/blob/grl/examples/adaptation/protos/train_val.prototxt#L654
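On the num_output question: a single logit with a sigmoid already parameterizes a two-class (binary) distribution, so num_output: 1 into SigmoidCrossEntropyLoss is equivalent to two logits under a softmax. A small numerical check of that equivalence (stdlib only, not tied to this repo):

```python
import math

def sigmoid(z):
    # p(domain = 1) from a single logit z
    return 1.0 / (1.0 + math.exp(-z))

def softmax_two(z0, z1):
    # p(domain = 1) from two logits under a softmax
    m = max(z0, z1)
    e0, e1 = math.exp(z0 - m), math.exp(z1 - m)
    return e1 / (e0 + e1)

# One logit z with a sigmoid equals two logits (0, z) under a softmax,
# which is why a single output unit suffices for the binary domain label.
z = 1.3
assert abs(sigmoid(z) - softmax_two(0.0, z)) < 1e-12
```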
It seems that the reverse validation (rv) approach to hyper-parameter selection (as mentioned in the paper) has not been carried out in the repository. So I wanted to ask about some of the gory implementation details used for reporting in the paper:
2) Also, reverse validating a neural network over a range of hyperparameters and then retraining with the best ones takes time. Did you parallelize this procedure to report such results?
It seems the code doesn't work. I followed the instructions in the README, but the loss is unchanged (dc_loss hovers around 0.69, lp_loss around 3.36). What's wrong with that?
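For context on those numbers (my interpretation, not the authors'): a dc_loss pinned near 0.69 is exactly the chance-level binary cross-entropy, ln 2, meaning the domain classifier is effectively predicting p = 0.5 everywhere; similarly, a k-class label predictor stuck at chance sits near ln k (whether lp_loss ≈ 3.36 corresponds to chance depends on the class count of the experiment). A quick check:

```python
import math

# Chance-level cross-entropy: predicting the uniform distribution over
# k classes gives a loss of ln(k) regardless of the true labels.
def chance_ce(num_classes):
    return math.log(num_classes)

print(round(chance_ce(2), 4))   # binary domain classifier: ~0.6931
print(round(chance_ce(31), 4))  # e.g. a 31-class label predictor: ~3.434
```

A domain loss at ln 2 is expected once the features become domain-invariant; a label loss stuck near chance from the start, however, suggests the label predictor is not learning at all.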
Hello,
I am trying to reproduce the results from your ICML 2015 paper. I have installed caffe and pycaffe from your forked version of it, and ran the prepare_experiments scripts. However, when I try
./examples/adaptation/experiments/amazon_to_webcam/scripts/train.sh
I get the following error:
I0908 14:17:37.180552 2839 solver.cpp:91] Creating training net from net file: /mnt/workspace2/ganin_caffe/examples/adaptation/experiments/amazon_to_webcam/protos/train_val.prototxt
[libprotobuf ERROR google/protobuf/text_format.cc:274] Error parsing text-format caffe.NetParameter: 555:25: Message type "caffe.LayerParameter" has no field named "gradient_scaler_param".
F0908 14:17:37.180871 2839 upgrade_proto.cpp:79] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /mnt/workspace2/ganin_caffe/examples/adaptation/experiments/amazon_to_webcam/protos/train_val.prototxt
(I get the same error with regard to the cursor: SHUFFLING option in the data layers, but that one is less of a concern, as it is not related to your method per se.)
Would you see how to fix this? Thanks!
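One hedged diagnosis: "Message type caffe.LayerParameter has no field named gradient_scaler_param" usually means the caffe binary being run was built from stock BVLC Caffe, whose caffe.proto lacks this fork's GradientScaler extension (the same reasoning applies to cursor: SHUFFLING). A small sketch to see which binary a script would resolve (the build path is an assumption about a typical CMake layout):

```python
import os
import shutil

# Which `caffe` does the shell resolve? If it is a system-wide stock build,
# the fork's extra proto fields (gradient_scaler_param) will not parse.
on_path = shutil.which("caffe")
fork_binary = os.path.join("build", "tools", "caffe")  # typical CMake output dir
print("caffe on PATH:", on_path or "none")
print("fork binary present:", os.path.exists(fork_binary))
```

If the script picks up a stock binary, editing it to invoke the fork's own build (or putting that build first on PATH) should make the prototxt parse.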
After running prepare_experiments.sh, I get two folders: datasets and experiments.
But datasets is empty, and I am pretty sure I changed the path...
I am confused. Can anyone tell me what to do?
Hi,
I am trying to use your concept for adapting a semantic segmentor (FCN).
But I found that after the error rate of the domain classifier began to increase,
the source-domain test error rate began to increase too.
Finally, the source-domain test error rate increased to almost 100%. It seems the whole training process collapsed.
Have you encountered similar phenomena?
Thank you so much for your reply!!
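For anyone hitting this collapse: in the DANN paper the gradient-reversal coefficient λ is not held fixed but ramped from 0 toward 1 over training, precisely so the adversarial signal does not destabilize early training; a segmentation setup may need an even gentler schedule. The paper's schedule, as I read it (γ = 10; p is training progress in [0, 1]):

```python
import math

def grl_lambda(p, gamma=10.0):
    # DANN schedule: lambda ramps smoothly from 0 (start) toward 1 (end),
    # suppressing the domain-adversarial gradient early in training.
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0

for p in (0.0, 0.25, 0.5, 1.0):
    print(p, round(grl_lambda(p), 4))
```

If λ is effectively constant (or too large) in your FCN setup, the reversed gradient can overwhelm the task loss, which would match the source-error blow-up described above.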