
Large Loss Matters in Weakly Supervised Multi-Label Classification (CVPR 2022) | Paper

Project page | Poster | Slide

Youngwook Kim1*, Jae Myung Kim2*, Zeynep Akata2,3,4, and Jungwoo Lee1,5

1 Seoul National University 2 University of Tübingen 3 Max Planck Institute for Intelligent Systems 4 Max Planck Institute for Informatics 5 HodooAI Lab

Primary contact: [email protected] (Homepage)

Abstract

The weakly supervised multi-label classification (WSML) task, which is to learn multi-label classification from partially observed labels per image, is becoming increasingly important due to its huge annotation cost. In this work, we first regard unobserved labels as negative labels, casting the WSML task into noisy multi-label classification. From this point of view, we empirically observe that the memorization effect, which was first discovered in the noisy multi-class setting, also occurs in the multi-label setting: the model first learns the representation of clean labels, and then starts memorizing noisy labels. Based on this finding, we propose novel methods for WSML which reject or correct large-loss samples to prevent the model from memorizing noisy labels. Without heavy and complex components, our proposed methods outperform previous state-of-the-art WSML methods in several partial-label settings, including the Pascal VOC 2012, MS COCO, NUSWIDE, CUB, and OpenImages V3 datasets. Various analyses also show that our methodology works well, validating that treating large losses properly matters in weakly supervised multi-label classification.
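To illustrate the idea, here is a minimal NumPy sketch of the rejection variant (LL-R): unobserved labels are assumed negative, and the largest-loss fraction of those assumed negatives is excluded from the loss so the model does not memorize them. All names, shapes, and the `rej_rate` parameter are illustrative and not the repository's actual API.

```python
import numpy as np

def ll_r_loss(probs, labels, observed, rej_rate):
    """Sketch of Large Loss Rejection (LL-R).

    probs    -- sigmoid outputs, shape (batch, num_labels)
    labels   -- partially observed targets, same shape
    observed -- boolean mask of which labels were annotated
    rej_rate -- fraction of unobserved-label losses to reject
    """
    eps = 1e-7
    # Unobserved labels are assumed negative (target 0).
    targets = np.where(observed, labels, 0.0)
    # Per-label binary cross-entropy.
    losses = -(targets * np.log(probs + eps)
               + (1.0 - targets) * np.log(1.0 - probs + eps))
    # Among assumed-negative labels, zero out the largest-loss
    # fraction so those (possibly noisy) labels are not memorized.
    weights = np.ones_like(losses)
    unobs_losses = losses[~observed]
    if unobs_losses.size and rej_rate > 0:
        k = int(np.ceil(rej_rate * unobs_losses.size))
        thresh = np.sort(unobs_losses)[-k]
        weights[(~observed) & (losses >= thresh)] = 0.0
    return (weights * losses).mean()
```

The correction variants (LL-Ct, LL-Cp) would instead flip or soften the targets of those large-loss entries rather than discarding them.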

News ( 2023/4/5 )

Please check out our follow-up paper, Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification, which is accepted in CVPR 2023.

Dataset Preparation

See the README.md file in the data directory for instructions on downloading and setting up the datasets.

(NEW) We added instructions for downloading metadata and images for the OpenImages V3 dataset. Please check them.

Model Training & Evaluation

You can train and evaluate the models by running

python main.py --dataset [dataset] \
               --mod_scheme [scheme] \
               --delta_rel [delta_rel] \
               --lr [learning_rate] \
               --optimizer [optimizer]

where [dataset] is one of {pascal, coco, nuswide, cub}, [scheme] is one of {LL-R, LL-Ct, LL-Cp}, and [delta_rel] is one of {0.1, 0.2, 0.3, 0.4, 0.5}.

Currently we only support the "End-to-end" training setting.
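For example, training LL-R on MS COCO might look like the following (the hyperparameter values and the optimizer name here are hypothetical, not recommended settings):

```shell
# Hypothetical example values; adjust to your setup.
python main.py --dataset coco \
               --mod_scheme LL-R \
               --delta_rel 0.2 \
               --lr 0.00001 \
               --optimizer adam
```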

Quantitative Results

(Result figures: artificially generated partial label datasets; real partial label dataset, OpenImages V3.)

Qualitative Results

How to cite

If our work is helpful, please consider citing our paper.

@InProceedings{Kim_2022_CVPR,
    author    = {Kim, Youngwook and Kim, Jae Myung and Akata, Zeynep and Lee, Jungwoo},
    title     = {Large Loss Matters in Weakly Supervised Multi-Label Classification},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {14156-14165}
}

Acknowledgements

Our code is heavily built upon Multi-Label Learning from Single Positive Labels.


Issues

Code on real partial label dataset

Thanks for your work!
It seems that the current code only supports the single positive label setting. Could you share the code for the real partial label dataset training setting?

Request for "LinearInit" training code

Thank you for your great work!
If it is convenient for you, could you share the training code for the "LinearInit" training setting? Thanks a lot!

How to determine delta_rel

It looks like there is a correlation between the hyperparameter delta_rel and the noise level of the dataset (the number of missing labels). Can I ask how you determined delta_rel, especially on a large dataset like OpenImages V3?

Learning rate tuning

Hi,
in the paper supplementary you mention you "search the learning rate by dividing the range between values in {0.01, 0.001, 0.0001, 0.00001} into quarters". Does this mean you search over learning rates 0.01, 0.0075, 0.005, 0.0025, 0.001, etc.? Over all learning rates, do you pick the run with the best validation accuracy and run that one once on the test set? Or do you calculate the test set result for each learning rate and report the best value?

Thanks!
