
universal-domain-adaptation's Introduction

Universal Domain Adaptation

Code release for Universal Domain Adaptation (CVPR 2019)

Note

As the focus of my research has moved away from domain adaptation, this code repository may become obsolete someday. We are delighted to see that universal domain adaptation has received tremendous attention in the academic community, and readers are encouraged to discuss related questions with the authors of follow-up papers.

Requirements

  • python 3.6+
  • PyTorch 1.0

pip install -r requirements.txt

Usage

  • download datasets

  • write your config file

  • python main.py --config /path/to/your/config/yaml/file

  • train (the configuration in officehome-train-config.yaml is only for the Office-Home dataset):

    python main.py --config officehome-train-config.yaml

  • test

    python main.py --config officehome-test-config.yaml

  • monitor (tensorboard required)

    tensorboard --logdir .

Checkpoints

We provide checkpoints for the Office-Home dataset at Google Drive.

Citation

Please cite:

@InProceedings{UDA_2019_CVPR,
author = {You, Kaichao and Long, Mingsheng and Cao, Zhangjie and Wang, Jianmin and Jordan, Michael I.},
title = {Universal Domain Adaptation},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}

Contact

universal-domain-adaptation's Issues

Question about the setting of 'source share weight' (and 'target share weight').

Interesting work! I have read the paper and agree with the assumption about domain similarity and prediction uncertainty, but I am confused about how w^s(x) in eq. 7 and w^t(x) in eq. 8 are calculated. What is the exact meaning of uncertainty minus similarity (eq. 7)? And why are w^s(x) and w^t(x) calculated in opposite ways?

Looking forward to your reply! Thanks.
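
For readers with the same question: eq. 7 combines normalized prediction entropy (uncertainty) with the output of the non-adversarial domain discriminator d'(x) (similarity to the source domain), and the target weight in eq. 8 is simply its negation. A minimal sketch of that computation, based on the equations rather than the repository's own code; logits and d_prime are assumed inputs:

import math
import torch
import torch.nn.functional as F

def sample_weights(logits, d_prime):
    """Illustrative sample-level weights following eq. 7 / eq. 8.

    logits:  classifier outputs, shape (batch, num_source_classes)
    d_prime: non-adversarial domain discriminator output in (0, 1),
             shape (batch,), read as similarity to the source domain.
    """
    probs = F.softmax(logits, dim=1)
    # normalized entropy in [0, 1]: prediction uncertainty
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)
    uncertainty = entropy / math.log(logits.size(1))
    w_s = uncertainty - d_prime  # eq. 7: uncertainty minus similarity
    w_t = d_prime - uncertainty  # eq. 8: the opposite, for target samples
    return w_s, w_t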

Visualization

How can I visualize the results of domain adaptation as an image?
I'd appreciate it if you could explain it with a dataset you used in your paper.

Thanks.
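
There is no visualization script in this repository, but a common approach (not specific to the authors' code) is to run t-SNE on features from the feature extractor and color the points by domain. A minimal sketch, assuming source_features and target_features are numpy arrays you have already extracted:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(source_features, target_features, out_path='tsne.png'):
    """Project source/target features to 2-D with t-SNE and color by domain."""
    feats = np.concatenate([source_features, target_features], axis=0)
    emb = TSNE(n_components=2, init='pca', random_state=0).fit_transform(feats)
    n_src = len(source_features)
    plt.scatter(emb[:n_src, 0], emb[:n_src, 1], s=5, label='source')
    plt.scatter(emb[n_src:, 0], emb[n_src:, 1], s=5, label='target')
    plt.legend()
    plt.savefig(out_path, dpi=200)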

Question about your code

Hello, after reading the paper and the code I have a few questions; I hope you can answer them for me :)
The UAN model proposed in the paper looks like this:

[architecture figure from the paper omitted]

In my understanding, the feature z extracted by F is fed into G, D and D' separately. However, the model in the code is defined as follows:

def forward(self, x):
    f = self.feature_extractor(x)
    # the classifier returns its intermediate activations as well:
    # `_` is the output of its first (bottleneck) layer, `y` the prediction
    f, _, __, y = self.classifier(f)
    d = self.discriminator(_)             # adversarial domain discriminator D
    d_0 = self.discriminator_separate(_)  # non-adversarial discriminator D'
    return y, d, d_0

The feature f goes into the classifier first, but it is the output of the classifier's first layer that goes into the discriminator. Why?

Besides, training the network can be seen as the minimax game of equation (4), yet the code simply adds all the losses up: loss = ce + adv_loss + adv_loss_separate. How should this be understood?

Thanks
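
Regarding the second question: the repository routes the adversarial loss through a gradient reversal layer (see the GradientReverseModule issue below), which flips the gradient sign for the feature extractor, so minimizing the summed loss still implements the minimax of equation (4). A minimal sketch of such a layer, as an illustration rather than the easydl implementation:

import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -coeff backward.

    Minimizing ce + adv_loss through this layer maximizes the adversarial
    loss with respect to the feature extractor, i.e. the minimax game.
    """
    @staticmethod
    def forward(ctx, x, coeff):
        ctx.coeff = coeff
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.coeff * grad_output, None

# usage: d = discriminator(GradReverse.apply(features, coeff))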

Pretraining on ImageNet

You mentioned that the backbone network is a ResNet-50 pretrained on ImageNet.

self.model_resnet = models.resnet50(pretrained=True)

But in many of the paper's experiments, the labels of the unsupervised target domain overlap with the supervised ImageNet labels. Is it justified to pretrain on the supervised ImageNet labels?

Error during training

Thanks for your impressive work! When I run your code on my workstation, it always reports "[easydl] tensorflow not available!", even though I satisfy the requirements mentioned in the readme. What should I do?

The coefficient scheduler for the gradient reversal module

self.grl = GradientReverseModule(lambda step: aToBSheduler(step, 0.0, 1.0, gamma=10, max_iter=10000))

It seems that max_iter is set to 10000.

From the DANN paper, I thought it should depend on the total number of training iterations.
Is there any reason to use a fixed number?

Thank you.
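
For reference, the DANN-style schedule that aToBSheduler appears to follow ramps the coefficient from a to b over max_iter steps; a sketch under that assumption (not the easydl source):

import numpy as np

def a_to_b_schedule(step, a=0.0, b=1.0, gamma=10, max_iter=10000):
    """Sketch of a DANN-style coefficient schedule ramping from a to b.

    p = step / max_iter is treated as training progress; the factor rises
    quickly early on and saturates near 1 as step approaches max_iter, so a
    fixed max_iter = 10000 fixes the ramp length in optimizer steps instead
    of tying it to the total number of training iterations.
    """
    p = min(step / max_iter, 1.0)
    factor = 2.0 / (1.0 + np.exp(-gamma * p)) - 1.0  # in [0, 1)
    return a + (b - a) * factor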

Why does normalize_weight divide by torch.mean(x)?

def normalize_weight(x):
    min_val = x.min()
    max_val = x.max()
    x = (x - min_val) / (max_val - min_val)
    x = x / torch.mean(x)
    return x.detach()

According to the paper, x should lie in (0, 1), so why does normalize_weight also divide by torch.mean(x)?
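
One possible reading (not confirmed by the authors): the min-max step bounds the weights to [0, 1], and the extra division rescales them so their batch mean is exactly 1, which keeps the magnitude of the weighted loss comparable to an unweighted one. A quick numeric check:

import torch

def normalize_weight(x):
    x = (x - x.min()) / (x.max() - x.min())  # min-max scale to [0, 1]
    x = x / torch.mean(x)                    # rescale so the batch mean is 1
    return x.detach()

w = torch.tensor([0.2, 0.5, 0.9, 1.4])
print(normalize_weight(w))         # approximately [0.000, 0.545, 1.273, 2.182]
print(normalize_weight(w).mean())  # 1.0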

Question about code

I am trying to implement "Reusing the Task-specific Classifier as a Discriminator: Discriminator-free Adversarial Domain Adaptation", but I am having difficulty with the programming and the code on GitHub is not complete. Can anyone help me? [email protected]

Training Details about office-31, visda 2017 and imagenet-caltech

The paper and the released code omit some training details for the mentioned datasets. Could you provide the yaml files for office-31, visda 2017 and imagenet-caltech?
In addition, the imagenet-caltech dataset is used quite differently in the SAN and PADA papers. Could you provide the label files of imagenet-caltech used in this paper?
Thanks!

Question about the threshold w_0 and the hyper-parameter λ

Hello, I have read the paper and the code, but could not find details on how the threshold w_0 and the hyper-parameter λ are calculated.

Specifically, w_0 seems to be a fixed value for every dataset in the code, which confuses me.

Should different values of w_0 and λ be calculated for different datasets? And how should w_0 be chosen so that it distinguishes unknown samples from the others in the target domain? Or did I miss some details? Any suggestions would be appreciated.
Thank you
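
For context, as I read the paper the threshold is applied at test time: a target sample whose weight w^t(x) falls below w_0 is rejected as "unknown", while the rest keep their predicted class. A minimal sketch; w_0 and the unknown label here are illustrative placeholders, not values from the repository:

import torch

def predict_with_rejection(preds, w_t, w_0, unknown_label=-1):
    """Open-set prediction with a fixed threshold w_0 (illustrative only).

    preds: predicted class indices, shape (batch,)
    w_t:   target transferability weights (eq. 8), shape (batch,)
    """
    preds = preds.clone()
    preds[w_t < w_0] = unknown_label  # reject low-weight samples as 'unknown'
    return preds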

Problems encountered when running python main.py --config officehome-train-config.yaml

Hello, I recently read Universal Domain Adaptation and am very interested in the UAN algorithm. Thank you for sharing the code! While debugging it I ran into a few questions:
1. Can pretrained_model be an offline-downloaded PyTorch pretrained model? For example, downloading the ResNet-50 pretrained model from 'https://download.pytorch.org/models/resnet50-19c8e357.pth' and using it for the experiments.
2. When running "!python main.py --config office-train-config.yaml" on Google Colab, I got the error shown in the screenshot below. How can I solve it?
[error screenshot omitted]
Thanks for reading, and I look forward to your reply!
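
On the first question: a locally downloaded torchvision checkpoint can be loaded with standard PyTorch calls. A minimal sketch, assuming the file from the URL above has been saved next to the script:

import torch
from torchvision import models

# build ResNet-50 without downloading weights, then load the local file
resnet = models.resnet50(pretrained=False)
state_dict = torch.load('resnet50-19c8e357.pth', map_location='cpu')
resnet.load_state_dict(state_dict)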
