alinlab / M2m
Code for the paper "M2m: Imbalanced Classification via Major-to-minor Translation" (CVPR 2020)
Home Page: http://arxiv.org/abs/2004.00431
Hi,
Very nice work. Could anyone guide me on how to train on CelebA or other image datasets? The provided scripts only cover the CIFAR datasets.
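For context, here is roughly what I tried for building an imbalanced version of a custom image-folder dataset (a rough sketch; the exponential subsampling below is my own guess, not something taken from this repo), but I'm not sure it matches the setup the paper expects:

```python
# Rough sketch: subsample an ImageFolder so that class sizes decay exponentially,
# giving an imbalance ratio of `ratio` between the largest and smallest class.
# (My own guess at the setup; not taken from the M2m code.)
import numpy as np
import torchvision
from torch.utils.data import Subset

def make_longtailed(root, ratio=100, transform=None):
    dataset = torchvision.datasets.ImageFolder(root, transform=transform)
    targets = np.array(dataset.targets)
    n_cls = len(dataset.classes)
    n_max = np.bincount(targets).max()
    # per-class sample counts decaying from n_max down to roughly n_max / ratio
    counts = [int(n_max * (1.0 / ratio) ** (c / (n_cls - 1))) for c in range(n_cls)]
    keep = []
    for c in range(n_cls):
        idx = np.where(targets == c)[0]
        keep.extend(idx[:counts[c]].tolist())
    return Subset(dataset, keep)
```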
Cheers
Hello All,
This is Sourav Das, currently working as a Project-Linked Person at Indian Statistical Institute, Kolkata.
I went through your CVPR 2020 paper, titled "M2m: Imbalanced Classification via Major-to-minor Translation" and would like to reproduce the result on CIFAR10 and CIFAR100 datasets.
I cloned the GitHub repo https://github.com/alinlab/M2m, but I don't know how to run it to reproduce the reported results.
Awaiting an early response from you.
Thanks & Regards,
Sourav Das
great work!
It seems that "--resume" should be added when training M2m (to load ckpt_g); just a friendly warning.
By the way, have you applied M2m to larger datasets, such as ImageNet?
great work!
The proposed method can easily be used for classification tasks. Could it also be used for detection tasks? If so, that would be fantastic.
Hi,
Thanks for your interesting work. I'd like to reproduce the experiments on CelebA-5, but the dataset needs to be built from the official CelebA release as described in your paper. Could you please share a copy of the processed dataset, or the script for processing it?
Best Regards,
Hongxin
I have recently learned Python and gone through some MOOCs on machine learning and deep learning. I have downloaded the code from this repository.
Could anyone kindly guide me on how to run the code and reproduce the results?
Hello,
Great work guys,
I have a small question regarding the baseline classifier g.
Which loss are you using to train it? Is it the vanilla CE loss? When analyzing per-class accuracy on the imbalanced training data using the baseline classifier g with the weights you provided, I found these already good accuracy values: [99.8200, 99.8332, 99.2205, 97.8644, 98.6047, 97.1576, 97.8448, 99.2806, 97.5904, 96.0000].
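For reference, this is roughly how I computed those per-class accuracies (a minimal sketch; the model and data loader here are placeholders):

```python
import torch

@torch.no_grad()
def per_class_accuracy(model, loader, n_cls, device='cuda'):
    """Accuracy of `model` on `loader`, broken down by ground-truth class."""
    correct = torch.zeros(n_cls)
    total = torch.zeros(n_cls)
    model.eval()
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        for c in range(n_cls):
            mask = labels == c
            correct[c] += (preds[mask] == c).sum()
            total[c] += mask.sum()
    return (100.0 * correct / total.clamp(min=1)).tolist()
```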
Does it make sense to train it with a more sophisticated loss, such as weighted CE, to let the model pick up features of the minority classes better? On my dataset, the baseline classifier trained with CE performs very badly, with accuracy near 5% on the minority samples of the training set. In other words, how well should the baseline classifier perform on the minority samples of the training data?
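Concretely, by weighted CE I mean something like the following (a minimal PyTorch sketch; the per-class counts and the inverse-frequency weighting are just an illustration, not taken from your code):

```python
import torch
import torch.nn as nn

# Hypothetical per-class sample counts of an imbalanced training set.
class_counts = torch.tensor([5000., 2500., 1250., 625., 312., 156., 78., 39., 20., 10.])
# Inverse-frequency weights, normalized so they average to 1.
weights = class_counts.sum() / (len(class_counts) * class_counts)
# Move `weights` to the same device as the logits before training.
criterion = nn.CrossEntropyLoss(weight=weights)
```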
It would be great if you could share the training procedure for reproducing the baseline classifier g (schedule, sampling method, loss, etc.).
I look forward to hearing from you.
Many many thanks
Best,
Hssan
Hi, great work!
I have trained cifar10_r10, cifar100_r10 and cifar100_r100. The bACC of all three ERM baselines comes out better, but only the cifar100_r100 M2m model achieves SOTA performance. Can you tell me what to do to improve the performance of my c10_r10 and c100_r10 models?
Hello! I hope this message finds you well. I am curious if you are willing to add a license for the M2m project. I have been following your work closely, and I'm impressed with the quality and usefulness of the code you have shared.
I believe that adding an open-source license to the repository would benefit the community by providing clarity on the terms of use. I understand that you have the right to choose the license for your project, and I fully respect your decision. If you haven't yet selected a license or would be open to considering suggestions, I would be happy to provide some recommendations based on the project's goals and the community's needs.
Thank you for your time and consideration. I truly appreciate your contributions to the open-source community and look forward to your response. Please let me know if you require any further information or if there's anything else I can do.
Warm regards,
Michelle Min (michelle1223)
Could you tell me how you use t-SNE? Thank you!
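I assume it is the usual recipe of running scikit-learn's TSNE on penultimate-layer features, something like the sketch below (extract_features is a placeholder for however the model exposes its features; the settings are my guess), but I'd like to confirm the exact procedure:

```python
import torch
from sklearn.manifold import TSNE

@torch.no_grad()
def tsne_embed(model, loader, device='cuda'):
    """Collect penultimate-layer features and project them to 2-D with t-SNE."""
    feats, labels = [], []
    model.eval()
    for images, targets in loader:
        # `extract_features` is a placeholder for the model's penultimate-layer output
        feats.append(model.extract_features(images.to(device)).cpu())
        labels.append(targets)
    feats = torch.cat(feats).numpy()
    emb = TSNE(n_components=2, perplexity=30, init='pca').fit_transform(feats)
    return emb, torch.cat(labels).numpy()
```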
At line 167 in commit 42d08a5, p_accept is supposed to be a per-class probability distribution, according to the definition of torch.multinomial. However, the implementation treats it as a per-sample probability distribution. This is not how the subsection "Practical implementation via re-sampling" of https://arxiv.org/pdf/2004.00431.pdf describes it being implemented. Could you shed some light on this? Thanks!
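To illustrate the difference I am pointing at, here is a small standalone example of how torch.multinomial interprets 1-D versus 2-D inputs (not the repo's code):

```python
import torch

# A 1-D input is a single probability distribution over classes:
p = torch.tensor([0.1, 0.2, 0.3, 0.4])
print(torch.multinomial(p, num_samples=2))   # two class indices drawn from one distribution

# A 2-D input is interpreted row-wise, i.e. one distribution per sample:
P = torch.tensor([[0.9, 0.1],
                  [0.1, 0.9]])
print(torch.multinomial(P, num_samples=1))   # one index drawn per row
```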