caffe's Issues

Build fails on Ubuntu 14.04 (CUDA 4.0, CUDNN 7.5)

I cloned the repository and ran the following in the caffe folder:

mkdir build
cd build
cmake ..
make

The build fails with the following errors:

/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(81): error: argument of type "cudnnAddMode_t" is incompatible with parameter of type "const void *"
detected during instantiation of "void caffe::CuDNNConvolutionLayer<Dtype>::Forward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=float]"
(159): here

/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(81): error: argument of type "const void *" is incompatible with parameter of type "cudnnTensorDescriptor_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer<Dtype>::Forward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=float]"
(159): here

/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(81): error: argument of type "const void *" is incompatible with parameter of type "cudnnTensorDescriptor_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer<Dtype>::Forward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=float]"
(159): here

/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(81): error: too many arguments in function call
detected during instantiation of "void caffe::CuDNNConvolutionLayer<Dtype>::Forward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=float]"
(159): here

/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(127): error: argument of type "const void *" is incompatible with parameter of type "cudnnConvolutionBwdFilterAlgo_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer<Dtype>::Backward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=float]"
(159): here

/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(127): error: argument of type "float *" is incompatible with parameter of type "size_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer<Dtype>::Backward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=float]"
(159): here

/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(127): error: too few arguments in function call
detected during instantiation of "void caffe::CuDNNConvolutionLayer<Dtype>::Backward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=float]"
(159): here

/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(142): error: argument of type "const void *" is incompatible with parameter of type "cudnnConvolutionBwdDataAlgo_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer<Dtype>::Backward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=float]"
(159): here

/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(142): error: argument of type "float *" is incompatible with parameter of type "size_t"
detected during instantiation of "void caffe::CuDNNConvolutionLayer<Dtype>::Backward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=float]"
(159): here

/home/myan/Documents/DANN/caffe/src/caffe/layers/cudnn_conv_layer.cu(142): error: too few arguments in function call
detected during instantiation of "void caffe::CuDNNConvolutionLayer<Dtype>::Backward_gpu(const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &, const std::vector<__nv_bool, std::allocator<__nv_bool>> &, const std::vector<caffe::Blob<Dtype> *, std::allocator<caffe::Blob<Dtype> *>> &) [with Dtype=float]"
(159): here

(The same ten errors are then reported again for the [with Dtype=double] instantiations of Forward_gpu and Backward_gpu.)

20 errors detected in the compilation of "/tmp/tmpxft_0000b2a8_00000000-7_cudnn_conv_layer.cpp1.ii".
CMake Error at cuda_compile_generated_cudnn_conv_layer.cu.o.cmake:264 (message):
Error generating file
/home/myan/Documents/DANN/caffe/build/src/caffe/CMakeFiles/cuda_compile.dir/layers/./cuda_compile_generated_cudnn_conv_layer.cu.o

make[2]: *** [src/caffe/CMakeFiles/cuda_compile.dir/layers/./cuda_compile_generated_cudnn_conv_layer.cu.o] Error 1
make[1]: *** [src/caffe/CMakeFiles/caffe.dir/all] Error 2
make: *** [all] Error 2

However, I can build the latest official caffe without errors.
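These errors are characteristic of compiling code written against an older cuDNN API with a newer cuDNN release (the fork predates several cuDNN interface changes). Before building, it can help to confirm which cuDNN version the compiler will actually see. A small sketch, assuming cudnn.h defines the usual CUDNN_MAJOR/CUDNN_MINOR macros (very old releases may not), and with the header path supplied by the caller:

```python
import re

def cudnn_version(header_text):
    """Parse CUDNN_MAJOR/CUDNN_MINOR out of the text of cudnn.h."""
    vals = dict(re.findall(r"#define CUDNN_(MAJOR|MINOR)\s+(\d+)", header_text))
    return int(vals["MAJOR"]), int(vals["MINOR"])

# Example usage (path is an assumption; adjust to your install):
# major, minor = cudnn_version(open("/usr/local/cuda/include/cudnn.h").read())
```

If the version the build picks up is newer than the one the fork targets, rebuilding without cuDNN support (or against the matching cuDNN release) sidesteps the incompatibility.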

Cannot find the scripts

Where is the following script?

./examples/adaptation/experiments/amazon_to_webcam/scripts/train.sh

About mnist experiments

Hi, thanks for your nice work!
May I ask how many epochs you used in the MNIST->MNIST-M experiments, and whether you applied Xavier initialization to the whole network?

example failing in cpu mode

I followed the instructions in README.md to run the example.
I downloaded the example data from https://drive.google.com/file/d/0B4IapRTv9pJ1WGZVd1VDMmhwdlE/view?pli=1
and set CPU_ONLY := 1 in Makefile.config.
After executing ./examples/adaptation/scripts/prepare_experiments.sh <path>,
I removed the gpu setting from ./examples/adaptation/experiments/amazon_to_webcam/scripts/train.sh and switched to solver_mode: CPU in examples/adaptation/experiments/amazon_to_webcam/protos/solver.prototxt.
When I execute the training script, the program aborts with the following error:

F0804 16:37:13.917794 21538 db.hpp:116] Check failed: mdb_status == 0 (2 vs. 0) No such file or directory
*** Check failure stack trace: ***
@ 0x7f2471d52778 (unknown)
@ 0x7f2471d526b2 (unknown)
@ 0x7f2471d520b4 (unknown)
@ 0x7f2471d55055 (unknown)
@ 0x7f24720b735e caffe::db::LMDB::Open()
@ 0x7f24720eed02 caffe::DataLayer<>::DataLayerSetUp()
@ 0x7f24720d4402 caffe::BaseDataLayer<>::LayerSetUp()
@ 0x7f24720d44f9 caffe::BasePrefetchingDataLayer<>::LayerSetUp()
@ 0x7f247216b240 caffe::Net<>::Init()
@ 0x7f247216c2f5 caffe::Net<>::Net()
@ 0x7f247208f82f caffe::Solver<>::InitTrainNet()
@ 0x7f24720909b0 caffe::Solver<>::Init()
@ 0x7f2472090b86 caffe::Solver<>::Solver()
@ 0x40d9b0 caffe::GetSolver<>()
@ 0x406e12 train()
@ 0x4047b7 main
@ 0x7f246e58cb45 (unknown)
@ 0x404dd0 (unknown)
@ (nil) (unknown)
Aborted

(running on debian jessie)
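The stack trace shows caffe::db::LMDB::Open failing on a database directory that does not exist, so the dataset paths referenced by the train_val.prototxt data layers are worth checking before launching training. A minimal sketch of such a check, assuming the usual LMDB on-disk layout (a directory containing a data.mdb file):

```python
import os

def looks_like_lmdb(path):
    """Heuristic: an LMDB database directory contains a data.mdb file."""
    return os.path.isdir(path) and os.path.isfile(os.path.join(path, "data.mdb"))

# Example usage: run this on each `source:` path from train_val.prototxt
# before starting the solver, to fail fast with a readable message.
```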

Script yields different accuracy than reported in the paper

I think this is very good work, but when reproducing the results I ran into some confusion: my numbers differ noticeably from those in the paper.
The results I reproduced are listed below:
[image]
The paper reports:
[image]
The reproduced amazon_to_webcam result is higher than in the paper, while the other two go in the opposite direction.
Is the parameter setting in the script different from the one used in the paper? If so, how can I change it?

t-SNE Code

Could you please provide the t-SNE code you used for the paper?
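In the absence of the authors' script, a minimal sketch with scikit-learn's TSNE covers the common setup. The feature matrix, domain labels, perplexity, and PCA initialisation below are placeholder assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 64)).astype(np.float32)  # placeholder CNN features
domains = rng.integers(0, 2, size=100)                     # placeholder domain labels

# Embed the features into 2-D for visualisation.
emb = TSNE(n_components=2, perplexity=30.0, init="pca",
           random_state=0).fit_transform(feats)
# Scatter-plot `emb` coloured by `domains` to inspect source/target alignment.
```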

Trained model

Where can I get the trained model for the Office dataset (A->W)?

Hyper-parameter selection?

It seems that the reverse validation (RV) approach to hyper-parameter selection mentioned in the paper has not been included in the repository, so I wanted to ask about the implementation details behind the numbers reported in the paper:

1. For the RV procedure, do you retrain the network multiple times with different hyper-parameters, test on the validation splits, and pick the hyper-parameters with the best result? And once the best hyper-parameters are found, do you retrain on all the data and report accuracy on the test data? Correct me if you did not follow such a policy.

2. Reverse-validating a network over a range of hyper-parameters and then retraining with the best ones takes time. Did you parallelize this procedure to produce the reported results?
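The procedure in point 1 can be sketched at a high level. The `train` and `evaluate` callables below are stand-ins for the actual training and evaluation routines, and the data layout is an assumption; only the control flow (adapt forward, pseudo-label the target, adapt back, score on held-out source data) is intended:

```python
def reverse_validation_score(train, evaluate, source, target):
    """One reverse-validation run for a fixed hyper-parameter setting.
    `train(labeled=..., unlabeled=...)` returns a model (a callable);
    `evaluate(model, data)` returns a scalar score."""
    # 1. Adapt source -> target with the candidate hyper-parameters.
    forward_model = train(labeled=source["train"], unlabeled=target["train"])
    # 2. Pseudo-label the unlabeled target data with the forward model.
    pseudo_target = [(x, forward_model(x)) for x in target["train"]]
    # 3. Adapt back, target -> source, using the pseudo-labels.
    reverse_model = train(labeled=pseudo_target,
                          unlabeled=[x for x, _ in source["train"]])
    # 4. Score the reverse model on held-out labeled source data.
    return evaluate(reverse_model, source["valid"])
```

The hyper-parameter setting with the best such score is then kept, and runs for different settings are independent, so they parallelize trivially.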

the loss is unchanged

It seems the code doesn't work. I followed the instructions in the README, but the loss does not change (dc_loss stays around 0.69, lp_loss around 3.36). What is wrong?
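For context, a dc_loss stuck near 0.69 is ln 2, i.e. a binary domain classifier predicting at chance, which is the intended equilibrium of adversarial training; an lp_loss that never moves is the real symptom. One thing worth checking is the gradient-reversal coefficient schedule from the DANN paper, since a coefficient stuck at zero leaves the feature extractor untrained by the domain loss:

```python
import math

# dc_loss ~ 0.69 is ln(2): the domain classifier is at chance.
assert abs(math.log(2) - 0.69) < 0.01

def grl_lambda(p):
    """GRL coefficient schedule from the DANN paper;
    p in [0, 1] is training progress. Ramps from 0 up to ~1."""
    return 2.0 / (1.0 + math.exp(-10.0 * p)) - 1.0
```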

"caffe.LayerParameter" has no field named "gradient_scaler_param"

Hello,

I am trying to reproduce the results from your ICML 2015 paper. I have installed caffe and pycaffe from your forked version and run the prepare_experiments script. However, when I try
./examples/adaptation/experiments/amazon_to_webcam/scripts/train.sh

I get the following error:

I0908 14:17:37.180552 2839 solver.cpp:91] Creating training net from net file: /mnt/workspace2/ganin_caffe/examples/adaptation/experiments/amazon_to_webcam/protos/train_val.prototxt

[libprotobuf ERROR google/protobuf/text_format.cc:274] Error parsing text-format caffe.NetParameter: 555:25: Message type "caffe.LayerParameter" has no field named "gradient_scaler_param".

F0908 14:17:37.180871 2839 upgrade_proto.cpp:79] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /mnt/workspace2/ganin_caffe/examples/adaptation/experiments/amazon_to_webcam/protos/train_val.prototxt

(I get a similar error for the cursor: SHUFFLING option in the data layers, but that is less bothersome, as it is not related to your method per se.)

Do you know how to fix this? Thanks!
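For reference, the layer behind gradient_scaler_param implements gradient reversal: identity on the forward pass, a negated (and scaled) gradient on the backward pass. A minimal NumPy sketch of that behaviour (not the fork's C++ implementation, and the class and parameter names here are made up for illustration):

```python
import numpy as np

class GradientReversal:
    """Identity forward; reversed, scaled gradient backward."""
    def __init__(self, lambd=1.0):
        self.lambd = lambd  # scaling coefficient for the reversed gradient

    def forward(self, x):
        return x  # pass activations through unchanged

    def backward(self, grad):
        return -self.lambd * grad  # flip the gradient's sign and scale it
```

The parse error itself means the caffe binary being run was not built from the fork's caffe.proto, which defines the gradient_scaler_param field.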

prepare_experiments.sh

After running prepare_experiments.sh I get two folders, datasets and experiments.
But datasets is empty, and I am pretty sure I changed the path.
I am confused; can anyone tell me what to do?

Will the training process converge to a local minimum?

Hi,
I am trying to use your approach to adapt a semantic segmentation network (FCN).
But I found that once the error rate of the domain classifier began to increase, the source-domain test error rate began to increase as well.

Eventually the source-domain test error rate rose to almost 100%; the whole training process seemed to collapse.

Have you encountered similar behaviour?

Thank you so much for your reply!
