cory-m / dccm
Deep Comprehensive Correlation Mining
License: GNU General Public License v3.0
I have been trying to reproduce the results reported in the paper, using the same library versions you mention and the exp configs you provide, but the results I get on a GTX 1080 Ti are completely different. You also mention that training takes only 19 hours, yet it has taken me around 3 days to train on CIFAR100 for just 100 epochs, and it is still training.
Are these config files correct? Has anybody else been able to reproduce the results?
Below are the results I get:
STL10--->
[2020-04-12 17:56:34,340][main.py][line:272][INFO][rank:0] Epoch: [199/200] ARI against ground truth label: 0.182
[2020-04-12 17:56:34,353][main.py][line:273][INFO][rank:0] Epoch: [199/200] NMI against ground truth label: 0.296
[2020-04-12 17:56:34,372][main.py][line:274][INFO][rank:0] Epoch: [199/200] ACC against ground truth label: 0.368
CIFAR10 (training still not finished in 3 days)--->
[2020-04-13 03:57:40,966][main.py][line:272][INFO][rank:0] Epoch: [106/200] ARI against ground truth label: 0.305
[2020-04-13 03:57:40,968][main.py][line:273][INFO][rank:0] Epoch: [106/200] NMI against ground truth label: 0.407
[2020-04-13 03:57:40,968][main.py][line:274][INFO][rank:0] Epoch: [106/200] ACC against ground truth label: 0.463
CIFAR100 (training still not finished in 3 days)--->
[2020-04-13 03:48:53,305][main.py][line:272][INFO][rank:0] Epoch: [105/200] ARI against ground truth label: 0.169
[2020-04-13 03:48:53,307][main.py][line:273][INFO][rank:0] Epoch: [105/200] NMI against ground truth label: 0.282
[2020-04-13 03:48:53,308][main.py][line:274][INFO][rank:0] Epoch: [105/200] ACC against ground truth label: 0.308
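For readers comparing numbers: ARI and NMI here typically correspond to scikit-learn's `adjusted_rand_score` and `normalized_mutual_info_score`, while clustering ACC is accuracy under the best cluster-to-label matching (the Hungarian algorithm in practice). A self-contained sketch of ACC using brute-force matching, which is fine for a handful of clusters:

```python
from itertools import permutations
import numpy as np

def clustering_acc(y_true, y_pred):
    """Best accuracy over all cluster-id-to-label permutations.
    Hungarian matching is used in practice; brute force suffices
    when the number of clusters is small."""
    labels = sorted(set(y_true) | set(y_pred))
    best = 0.0
    for perm in permutations(labels):
        mapping = dict(zip(labels, perm))
        acc = np.mean([mapping[p] == t for t, p in zip(y_true, y_pred)])
        best = max(best, acc)
    return best

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]      # same clusters, permuted ids
print(clustering_acc(y_true, y_pred))  # -> 1.0: perfect up to permutation
```

Because ACC is invariant to how cluster ids are numbered, a clustering that merely relabels the ground-truth classes still scores 1.0.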
Hi, is there a file listing all the classes of Tiny-ImageNet used in your paper?
It seems that line 145 in main.py causes an index-out-of-range error in subsequent lines when training on the last batch of a dataset whose size is not divisible by large_bs.
Correcting the line as follows should fix the issue:
#index_loc = np.arange(args.large_bs)
# Use the actual batch size instead of the parameter to avoid errors on truncated batches
index_loc = np.arange(input_tensor.size(0))
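For anyone hitting the same error, a minimal NumPy reproduction of why a fixed-size index fails on a truncated last batch (the sizes here are illustrative, not the repo's defaults):

```python
import numpy as np

large_bs = 128
dataset = np.zeros((130, 3))   # 130 is not divisible by large_bs
last_batch = dataset[128:]     # truncated final batch: only 2 rows

bad_idx = np.arange(large_bs)  # 0..127, out of range for a 2-row batch
try:
    last_batch[bad_idx]
except IndexError:
    print("IndexError on the truncated batch")

good_idx = np.arange(last_batch.shape[0])   # use the actual batch size
assert last_batch[good_idx].shape == (2, 3)
```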
Can you please upload the other architectures and the config yml files for the other datasets, such as STL10 and TinyImageNet?
Also, in the supplementary material you mention that average pooling with stride 2 is used for CIFAR10/100, but the code uses stride 4. Which one is correct?
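For context on why the stride matters, the spatial size after pooling without padding follows the standard formula floor((H - k)/s) + 1. A small sketch; the 8x8 feature-map size below is an illustrative assumption, not the actual DCCM feature map:

```python
def pool_out(size, kernel, stride):
    # output spatial size of a pooling layer without padding
    return (size - kernel) // stride + 1

# e.g. on a hypothetical 8x8 feature map with a 2x2 window:
print(pool_out(8, 2, 2))  # -> 4 (stride 2 keeps a 4x4 map)
print(pool_out(8, 2, 4))  # -> 2 (stride 4 collapses it to 2x2)
```

So stride 2 vs stride 4 changes the feature-map area by a factor of four, which would noticeably change the input to any following layer.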
Dear author, your research results are great and enlightening, but after reading the code I would like to know where the mutual information appears in the code as a loss function.
Hello Sir,
For clustering, I think that targets (labels) should not be needed.
But I saw targets (labels) being used in your code.
Are targets (labels) actually required for training with your code?
Thanks,
Edward Cho.
Sorry to bother you, but I found a contradiction in your code.
I read your paper and noticed a difference between Eq. (14) and the actual code in dim_Model.py.
In dim_Model.py, your code is:
# local loss
Ej = -F.softplus(-self.local_D(Y_cat_M)).mean()
Em = -F.softplus(self.local_D(Y_cat_M_fake)).mean()
local_loss = -(Em + Ej)
but according to Eq. (14), it should be:
local_loss = -(Ej - Em)
Is something wrong in my analysis, or is this an error in your code?
Looking forward to your reply!
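For comparison, the Jensen-Shannon MI estimator in the original Deep InfoMax code is usually written so that both softplus terms enter the loss with a plus sign; whether the snippet above matches Eq. (14) then hinges on which sign convention Em carries. A minimal NumPy sketch of the usual form (function names are hypothetical; `softplus` mirrors `torch.nn.functional.softplus`):

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x)), mirroring F.softplus
    return np.maximum(x, 0) + np.log1p(np.exp(-np.abs(x)))

def jsd_local_loss(scores_joint, scores_fake):
    """JSD-style MI loss: push discriminator scores up on joint
    (matched) pairs and down on mismatched pairs."""
    Ej = -softplus(-scores_joint).mean()   # term to maximize
    Em = softplus(scores_fake).mean()      # term to minimize
    return Em - Ej

rng = np.random.default_rng(0)
sj, sf = rng.normal(size=8), rng.normal(size=8)
# the two algebraic forms coincide:
assert np.isclose(jsd_local_loss(sj, sf),
                  softplus(-sj).mean() + softplus(sf).mean())
```

Note the loss is always positive, since both softplus terms are.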
Thanks for your code.
I found it difficult to obtain the same data as your meta file. Could you provide your CIFAR10 data download link?
Thank you
Would you please add more details on how to download CIFAR10 in PNG format?
I want to use a model with more layers (e.g., VGG) for problems in my field, but your code only provides the cifar_c4_L2 model. If I want to customize my own model, what modifications should I make?
I noticed there are layers, c_layer and classifier entries in the cifar10.yaml file. Are these related to computing the mutual information between the shallow and deep layers?
And how should these values be modified in a custom model?
Excuse me, I have a question that I haven't been able to solve for a long time.
I want to use your code for a clustering task on the NUS-WIDE dataset.
I organized the images as you suggest in the 'data' folder and put nus_lable.txt in the 'meta' folder.
Then I added a 'resize' operation in your 'mc_dataset.py':
1. from skimage.transform import resize
2. img = resize(img, (32, 32))  # before line 30
Besides that, I only modified cifar10.yaml:
small_bs: 16
workers: 4
num_classes: 4
Other than the above, I didn't modify any other files (except for some notes I wrote).
I don't know how to solve this problem.
Is something wrong with my modifications?
Or does your main.py need to be revised?
I would appreciate it if you could help me solve this problem!
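One common pitfall with `skimage.transform.resize` is that it returns a float image in [0, 1] by default, while pipelines built on raw CIFAR-style arrays usually expect uint8 in [0, 255]; whether that is the issue here is only a guess. A self-contained sketch using a pure-NumPy nearest-neighbour resize (standing in for skimage so the example has no extra dependency) that keeps the uint8 dtype:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Minimal nearest-neighbour resize; stands in for skimage.transform.resize
    but preserves the input dtype and value range."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

img = np.random.randint(0, 256, (64, 48, 3), dtype=np.uint8)
small = resize_nearest(img, 32, 32)
assert small.shape == (32, 32, 3) and small.dtype == np.uint8
# skimage's resize(img, (32, 32)) would instead return float64 in [0, 1];
# converting back would look like: (out * 255).round().astype(np.uint8)
```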