thuml / dan
Forked from zhuhan1236/transfer-caffe
Code release of "Learning Transferable Features with Deep Adaptation Networks" (ICML 2015)
License: Other
I ran the training command `./build/tools/caffe train -solver models/DAN/amazon_to_webcam/solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel` on my dual-GPU machine.
After 2000 iterations, training stopped, but I searched the whole working directory and found no saved model. What is supposed to happen once training finishes?
Last messages from the training machine:
I1020 16:41:28.788825 2502 net.cpp:693] Ignoring source layer source_data
I1020 16:41:28.788828 2502 net.cpp:693] Ignoring source layer target_label_silence
I1020 16:41:28.788832 2502 net.cpp:693] Ignoring source layer concat_data
I1020 16:41:28.788841 2502 net.cpp:693] Ignoring source layer slice_features_fc7
I1020 16:41:28.788843 2502 net.cpp:693] Ignoring source layer source_features_fc7_slice_features_fc7_0_split
I1020 16:41:28.788846 2502 net.cpp:693] Ignoring source layer target_features_fc7_slice_features_fc7_1_split
I1020 16:41:28.788851 2502 net.cpp:693] Ignoring source layer source_features_fc8_fc8_source_0_split
I1020 16:41:28.788853 2502 net.cpp:693] Ignoring source layer softmax_loss
I1020 16:41:28.788856 2502 net.cpp:693] Ignoring source layer fc8_target
I1020 16:41:28.788859 2502 net.cpp:693] Ignoring source layer mmd_loss_fc7
I1020 16:41:28.788862 2502 net.cpp:693] Ignoring source layer mmd_loss_fc8
I1020 16:41:30.357834 2502 blocking_queue.cpp:50] Data layer prefetch queue empty
I1020 16:41:30.467955 2502 solver.cpp:407] Test net output #0: lp_accuracy = 0.403774
I1020 16:41:30.467969 2502 solver.cpp:325] Optimization Done.
I1020 16:41:30.497936 2502 caffe.cpp:254] Optimization Done.
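If no .caffemodel file appeared anywhere after "Optimization Done.", one thing worth checking is the snapshot configuration in the solver prototxt: Caffe writes weights to disk according to these fields. A sketch of the relevant settings (the values here are illustrative, not this fork's defaults):

```
# Illustrative solver.prototxt snapshot fields.
snapshot: 2000                                      # write a snapshot every 2000 iterations
snapshot_prefix: "models/DAN/amazon_to_webcam/dan"  # where the .caffemodel/.solverstate files go
snapshot_after_train: true                          # also snapshot once when training ends (default: true)
```

With a prefix like the above, files such as dan_iter_2000.caffemodel land in models/DAN/amazon_to_webcam/ rather than the directory the command was launched from, which may be why a search of the working directory turns up nothing.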
Please use the caffe-users list for usage, installation, or modeling questions, or other requests for help.
Do not post such requests to Issues. Doing so interferes with the development of Caffe.
Please read the guidelines for contributing before submitting this issue.
If you are having difficulty building Caffe or training a model, please ask the caffe-users mailing list. If you are reporting a build error that seems to be due to a bug in Caffe, please attach your build configuration (either Makefile.config or CMakeCache.txt) and the output of the make (or cmake) command.
Operating system:
Compiler:
CUDA version (if applicable):
CUDNN version (if applicable):
BLAS:
Python or MATLAB version (for pycaffe and matcaffe respectively):
I am running the J-MMD loss layer with a different dataset, and fc7_mmd_loss and label_mmd_loss are printed as zero throughout training. I wonder what is going wrong there; any help debugging this would be appreciated. The softmax_loss behaves correctly, so I don't think there is anything wrong with the way I am providing the data.
When I run JAN with the adversarial layer, I get the following error. Could you please help me figure out the problem?
F1005 09:04:07.176527 32676 filler.hpp:312] Check failed: false Unknown filler name: identity
*** Check failure stack trace: ***
@ 0x7ff27ce5a5cd google::LogMessage::Fail()
@ 0x7ff27ce5c433 google::LogMessage::SendToLog()
@ 0x7ff27ce5a15b google::LogMessage::Flush()
@ 0x7ff27ce5ce1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7ff27d4b2569 caffe::GetFiller<>()
@ 0x7ff27d58efb3 caffe::InnerProductLayer<>::LayerSetUp()
@ 0x7ff27d63d3b2 caffe::Net<>::Init()
@ 0x7ff27d63ec41 caffe::Net<>::Net()
@ 0x7ff27d606f5a caffe::Solver<>::InitTrainNet()
@ 0x7ff27d608257 caffe::Solver<>::Init()
@ 0x7ff27d6085fa caffe::Solver<>::Solver()
@ 0x7ff27d628ae3 caffe::Creator_SGDSolver<>()
@ 0x40b152 train()
@ 0x407878 main
@ 0x7ff27b811830 __libc_start_main
@ 0x408289 _start
@ (nil) (unknown)
An error occurs when I run the cmake command; the details are in CMakeOutput.log and CMakeError.log, attached in the appendix below.
Operating system:ubuntu18.04
Compiler:gcc 7.5.0
CUDA version (if applicable):
CUDNN version (if applicable):
BLAS:
Python or MATLAB version (for pycaffe and matcaffe respectively):Python 3.6
For the JAN-A model on AlexNet, it says that the best weight for "fc7_mmd_loss" is -0.3 when using the adversarial variant. I wonder whether the same value applies when running JAN-A on ResNet-50?
Hi thuml,
Thanks for your great work. I am new to this field, and recently I tried to run the DAN model on my own task. However, I find that the MMD loss sometimes takes a negative value. Is that normal?
I even tried computing the MMD loss between two identical sets of features, and it was still negative. I think it should be 0. Am I right?
I ran your code (AlexNet, adversarial) and got 68.67% accuracy on the target data, but fc7_mmd_loss and label_mmd_loss are always 0. Could you help me solve this problem?
I1005 07:16:44.324632 17598 solver.cpp:407] Test net output #0: accuracy = 0.686792
I1005 07:16:44.494951 17598 solver.cpp:231] Iteration 2700, loss = 0.00387742
I1005 07:16:44.494974 17598 solver.cpp:247] Train net output #0: fc7_mmd_loss = 0 (* 0.3 = 0 loss)
I1005 07:16:44.494979 17598 solver.cpp:247] Train net output #1: label_mmd_loss = 0 (* 0.3 = 0 loss)
I1005 07:16:44.494998 17598 solver.cpp:247] Train net output #2: softmax_loss = 0.00387735 (* 1 = 0.00387735 loss)
I1005 07:16:44.495003 17598 sgd_solver.cpp:106] Iteration 2700, lr = 0.00112453