
test-time-adaptation's People

Contributors

goirik-chakrabarty, mariodoebler, robmarsden

test-time-adaptation's Issues

Adding new method SANTA

Hi authors,

Thank you for your excellent work in benchmarking test-time adaptation. I am the author of SANTA [https://openreview.net/forum?id=V7guVYzvE4]. I would like to add my method to your benchmark.

What are the proper steps to submit a pull request? Do I need to run all of the experiments that you perform?

Regards.

wideresnet object has no attribute 'model'

Hello, I'm glad to hear that you found my paper interesting.
Regarding the error I encountered while running the code: even though I followed the instructions in the README and made sure the virtual environment versions match, I still get the error below.

The error message is:

Traceback (most recent call last):
File "/root/workspace/code/rmt/classification/test_time.py", line 483, in
evaluate('"Evaluation.')
File "/root/workspace/code/rmt/classification/test_time.py", line 78, in evaluate
model, param_names = setup_rmt(base_model, num_classes)
File "/root/workspace/code/rmt/classification/test_time.py", line 409, in setup_rmt
rmt_model = RMT(model, optimizer,
File "/root/workspace/code/rmt/classification/methods/rmt.py", line 54, in init
self.feature_extractor, self.classifier = split_up_model(model, arch_name, self.dataset_name)
File "/root/workspace/code/rmt/classification/models/model.py", line 313, in split_up_model
if hasattr(model.model, model.model.pretrained_cfg["classifier"]):
File "/root/anaconda3/envs/rmt/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1269, in getattr
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'WideResNet' object has no attribute 'model'
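
For context, the check that raises this error in split_up_model assumes the network is wrapped in a timm-style container that exposes a .model attribute and a pretrained_cfg["classifier"] entry, which the WideResNet in this traceback evidently does not have. A minimal, hypothetical sketch of a more defensive lookup (illustrative only, not the repository's actual code):

import torch.nn as nn

def find_classifier(model: nn.Module, default_name: str = "fc"):
    # Hypothetical sketch, not the repository's split_up_model. Unwrap a
    # timm-style `.model` container if present, otherwise use the model itself.
    inner = getattr(model, "model", model)
    cfg = getattr(inner, "pretrained_cfg", None) or {}
    name = cfg.get("classifier", default_name)   # assume an "fc" head if unspecified
    if hasattr(inner, name):
        return inner, name
    raise AttributeError(f"No classifier attribute '{name}' found on {type(inner).__name__}")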

Could you please provide more information about the error message and which step you were trying to execute?

best

a basic question about TTA

Hi,
I have a basic question:
Is the main difference between test-time adaptation and source-free domain adaptation in the testing phase, i.e. that test-time adaptation is evaluated after adapting for only a single pass (epoch) over the test data?
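
For comparison, source-free domain adaptation typically trains on the unlabeled target data for several epochs before evaluation, whereas online test-time adaptation predicts and updates in a single pass over the test stream. A minimal Tent-style sketch of such an online loop (illustrative only; it assumes a pretrained model and an optimizer over its normalization parameters):

import torch

def entropy_loss(logits):
    # Mean prediction entropy of a batch of logits.
    log_probs = logits.log_softmax(dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1).mean()

def adapt_online(model, test_loader, optimizer):
    # One pass over the test stream: predict each batch, then immediately
    # update the model on it (no labels, no second epoch).
    predictions = []
    for x, _ in test_loader:              # labels are never used
        logits = model(x)
        loss = entropy_loss(logits)       # Tent-style entropy minimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        predictions.append(logits.argmax(dim=1).detach())
    return torch.cat(predictions)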

Cannot run the RMT method without exemplars

Hi,
I would like to run the RMT method without any source data. I set cfg.SOURCE.PERCENTAGE to 0.0, but it fails when creating the src loader in the constructor.
Would you be able to provide a version that works without exemplars?

Best
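
A hypothetical way to prototype this (not the repository's RMT implementation; the function and argument names below are illustrative) is to make the source cross-entropy term optional, so it is simply skipped when no exemplar batch is available:

import torch.nn.functional as F

def rmt_style_loss(student, x_test, teacher_logits,
                   src_batch=None, lambda_ce_trg=1.0, lambda_ce_src=1.0):
    # Hypothetical sketch: an RMT-style loss that degrades gracefully when
    # no source exemplars are available (src_batch=None).
    pseudo_labels = teacher_logits.argmax(dim=1)        # mean-teacher pseudo-labels
    loss = lambda_ce_trg * F.cross_entropy(student(x_test), pseudo_labels)
    if src_batch is not None:                           # only with source exemplars
        x_src, y_src = src_batch
        loss = loss + lambda_ce_src * F.cross_entropy(student(x_src), y_src)
    return loss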

The settings of continual test-time domain adaptation

Hi, I am a PhD student who is new to continual test-time domain adaptation.
In "Robust Mean Teacher for Continual and Gradual Test-Time Adaptation", the continual test-time adaptation setting assumes the model can access data from the source domain, but in the original setting of "Continual Test-Time Domain Adaptation" that I found, the model can only access data from the target domain.
This is very confusing to me, because in the setting where the model has access to the source domain, there is a significant improvement in the model's ability to overcome catastrophic forgetting and to adapt to the new domain. Can you please explain whether it is appropriate for your paper to be called continual test-time domain adaptation?

Can't reproduce the results of ROID on imagenet-c

Awesome job!
But I still have some problems reproducing the results of ROID on ImageNet-C. If I understand correctly, the detailed original results of ROID on ImageNet-C are the ones shown in Table 12.
However, when I try to reproduce the experiments with the script:
CUDA_VISIBLE_DEVICES=0 python test_time.py --cfg cfgs/imagenet_c/roid.yaml
the final results are far behind those in the paper. The log file is attached below.
roid_230702_231218.txt
I don't know what's going wrong. By the way, the results on CIFAR-10-C and CIFAR-100-C are consistent with those in the paper.
Can you help find out the reason?
Thanks again for your excellent work!
Best Regards.

Cannot reproduce the results on "correlated" setting for CIFAR-10C

Thank you for your work and for creating this repository covering different TTA settings. While trying your code with the "roid.yaml" file and the "correlated" TTA setting, I cannot reproduce the results from the paper. I tried batch sizes of 64 as well as 200. Below is the output generated for roid:

[23/07/23 21:57:51] [conf.py:  353]: PyTorch Version: torch=1.13.1+cu117, cuda=11.7, cudnn=8500
[23/07/23 21:57:51] [conf.py:  354]: ADACONTRAST:
  ALPHA: 1.0
  BETA: 1.0
  CE_SUP_TYPE: weak_strong
  CE_TYPE: standard
  CONTRAST_TYPE: class_aware
  DIST_TYPE: cosine
  ETA: 1.0
  NUM_NEIGHBORS: 10
  QUEUE_SIZE: 16384
  REFINE_METHOD: nearest_neighbors
BN:
  ALPHA: 0.1
CKPT_DIR: ./ckpt
CKPT_PATH: 
CONTRAST:
  MODE: all
  PROJECTION_DIM: 128
  TEMPERATURE: 0.1
CORRUPTION:
  DATASET: cifar10_c
  NUM_EX: -1
  SEVERITY: [5]
  TYPE: ['gaussian_noise', 'shot_noise', 'impulse_noise', 'defocus_blur', 'glass_blur', 'motion_blur', 'zoom_blur', 'snow', 'frost', 'fog', 'brightness', 'contrast', 'elastic_transform', 'pixelate', 'jpeg_compression']
COTTA:
  AP: 0.92
  RST: 0.01
CUDNN:
  BENCHMARK: True
DATA_DIR: ./data
DESC: 
DETERMINISM: False
EATA:
  D_MARGIN: 0.05
  FISHER_ALPHA: 2000
  NUM_SAMPLES: 2000
GTTA:
  LAMBDA_MIXUP: 0.3333333333333333
  PRETRAIN_STEPS_ADAIN: 20000
  STEPS_ADAIN: 1
  USE_STYLE_TRANSFER: False
LAME:
  AFFINITY: rbf
  FORCE_SYMMETRY: False
  KNN: 5
  SIGMA: 1.0
LOG_DEST: roid_230723_215751.txt
LOG_TIME: 230723_215751
MODEL:
  ADAPTATION: roid
  ARCH: Standard
  EPISODIC: False
  WEIGHTS: IMAGENET1K_V1
M_TEACHER:
  MOMENTUM: 0.999
OPTIM:
  BETA: 0.9
  DAMPENING: 0.0
  LR: 0.001
  METHOD: Adam
  MOMENTUM: 0.9
  NESTEROV: True
  STEPS: 1
  WD: 0.0
RMT:
  LAMBDA_CE_SRC: 1.0
  LAMBDA_CE_TRG: 1.0
  LAMBDA_CONT: 1.0
  NUM_SAMPLES_WARM_UP: 50000
RNG_SEED: 1
ROID:
  MOMENTUM_PROBS: 0.9
  MOMENTUM_SRC: 0.99
  TEMPERATURE: 0.3333333333333333
  USE_CONSISTENCY: True
  USE_PRIOR_CORRECTION: True
  USE_WEIGHTING: True
ROTTA:
  ALPHA: 0.05
  LAMBDA_T: 1.0
  LAMBDA_U: 1.0
  MEMORY_SIZE: 64
  NU: 0.001
  UPDATE_FREQUENCY: 64
SAR:
  RESET_CONSTANT_EM: 0.2
SAVE_DIR: ./output/roid_cifar10_c_230723_215751
SETTING: correlated
SOURCE:
  NUM_WORKERS: 4
  PERCENTAGE: 1.0
TEST:
  ALPHA_DIRICHLET: 0.1
  BATCH_SIZE: 64
  NUM_WORKERS: 4
  N_AUGMENTATIONS: 32
  WINDOW_LENGTH: 1
[23/07/23 21:58:18] [base.py:  141]: #Trainable/total parameters: 17952/36479194 	 Fraction: 0.05% 
[23/07/23 21:58:18] [test_time.py:   45]: Successfully prepared test-time adaptation method: ROID
[23/07/23 21:58:18] [test_time.py:   56]: Using the following domain sequence: ['gaussian_noise', 'shot_noise', 'impulse_noise', 'defocus_blur', 'glass_blur', 'motion_blur', 'zoom_blur', 'snow', 'frost', 'fog', 'brightness', 'contrast', 'elastic_transform', 'pixelate', 'jpeg_compression']
[23/07/23 21:58:18] [test_time.py:   77]: resetting model
[23/07/23 21:58:18] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 21:58:41] [test_time.py:  111]: cifar10_c error % [gaussian_noise5][#samples=10000]: 74.77%
[23/07/23 21:58:41] [test_time.py:   81]: not resetting model
[23/07/23 21:58:41] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 21:58:57] [test_time.py:  111]: cifar10_c error % [shot_noise5][#samples=10000]: 74.09%
[23/07/23 21:58:57] [test_time.py:   81]: not resetting model
[23/07/23 21:58:57] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 21:59:12] [test_time.py:  111]: cifar10_c error % [impulse_noise5][#samples=10000]: 77.68%
[23/07/23 21:59:12] [test_time.py:   81]: not resetting model
[23/07/23 21:59:12] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 21:59:27] [test_time.py:  111]: cifar10_c error % [defocus_blur5][#samples=10000]: 70.83%
[23/07/23 21:59:27] [test_time.py:   81]: not resetting model
[23/07/23 21:59:27] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 21:59:42] [test_time.py:  111]: cifar10_c error % [glass_blur5][#samples=10000]: 78.51%
[23/07/23 21:59:42] [test_time.py:   81]: not resetting model
[23/07/23 21:59:42] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 21:59:57] [test_time.py:  111]: cifar10_c error % [motion_blur5][#samples=10000]: 71.41%
[23/07/23 21:59:57] [test_time.py:   81]: not resetting model
[23/07/23 21:59:57] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 22:00:13] [test_time.py:  111]: cifar10_c error % [zoom_blur5][#samples=10000]: 70.43%
[23/07/23 22:00:13] [test_time.py:   81]: not resetting model
[23/07/23 22:00:13] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 22:00:28] [test_time.py:  111]: cifar10_c error % [snow5][#samples=10000]: 73.13%
[23/07/23 22:00:28] [test_time.py:   81]: not resetting model
[23/07/23 22:00:28] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 22:00:44] [test_time.py:  111]: cifar10_c error % [frost5][#samples=10000]: 71.34%
[23/07/23 22:00:44] [test_time.py:   81]: not resetting model
[23/07/23 22:00:44] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 22:01:01] [test_time.py:  111]: cifar10_c error % [fog5][#samples=10000]: 70.62%
[23/07/23 22:01:01] [test_time.py:   81]: not resetting model
[23/07/23 22:01:01] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 22:01:18] [test_time.py:  111]: cifar10_c error % [brightness5][#samples=10000]: 68.15%
[23/07/23 22:01:18] [test_time.py:   81]: not resetting model
[23/07/23 22:01:18] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 22:01:34] [test_time.py:  111]: cifar10_c error % [contrast5][#samples=10000]: 70.70%
[23/07/23 22:01:34] [test_time.py:   81]: not resetting model
[23/07/23 22:01:34] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 22:01:50] [test_time.py:  111]: cifar10_c error % [elastic_transform5][#samples=10000]: 75.45%
[23/07/23 22:01:50] [test_time.py:   81]: not resetting model
[23/07/23 22:01:51] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 22:02:06] [test_time.py:  111]: cifar10_c error % [pixelate5][#samples=10000]: 73.71%
[23/07/23 22:02:06] [test_time.py:   81]: not resetting model
[23/07/23 22:02:06] [data_loading.py:  133]: Using Dirichlet distribution with alpha=0.1 to temporally correlated samples by class labels...
[23/07/23 22:02:22] [test_time.py:  111]: cifar10_c error % [jpeg_compression5][#samples=10000]: 76.69%
[23/07/23 22:02:22] [test_time.py:  114]: mean error: 73.17%, mean error at 5: 73.17%
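
For readers unfamiliar with the "correlated" setting: the data-loading step logged above reorders the test samples so that class labels arrive in temporally correlated bursts, controlled by a Dirichlet concentration parameter (alpha=0.1 here). One way such an ordering can be produced is sketched below (illustrative only, not the repository's data_loading.py):

import numpy as np

def dirichlet_correlated_order(labels, num_slots=10, alpha=0.1, seed=None):
    # Assign each class's samples to `num_slots` time slots according to a
    # Dirichlet(alpha) draw; a small alpha concentrates a class in few slots,
    # producing a temporally correlated label sequence.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    slots = [[] for _ in range(num_slots)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(num_slots))
        counts = (props * len(idx)).astype(int)
        counts[-1] = len(idx) - counts[:-1].sum()       # remainder goes to the last slot
        start = 0
        for s, n in enumerate(counts):
            slots[s].extend(idx[start:start + n])
            start += n
    order = [i for slot in slots for i in rng.permutation(slot)]
    return np.array(order)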

mixed domains TTA setting

Nice work on universal TTA, but I have a small problem. For ImageNet-C, how do you keep 50,000 samples per corruption in the mixed domains TTA setting? By setting "_C.CORRUPTION.NUM_EX = -1", we can only use 75,000 samples (5,000 per corruption × 15).
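
For illustration, a mixed-domains stream is usually built by concatenating the per-corruption test sets and shuffling them into one sequence. The sketch below shows that idea; the function, its arguments, and the default values are assumptions, not the repository's data-loading code:

import random
from torch.utils.data import ConcatDataset, Subset

def build_mixed_domains(datasets_per_corruption, num_ex=-1, seed=1):
    # Hypothetical sketch: combine per-corruption datasets (e.g. 15 ImageNet-C
    # test sets of 50,000 images each) into one shuffled mixed-domains stream.
    parts = []
    for ds in datasets_per_corruption:
        if num_ex > 0:                                  # optionally cap samples per corruption
            ds = Subset(ds, range(min(num_ex, len(ds))))
        parts.append(ds)
    mixed = ConcatDataset(parts)                        # keeps every sample of every corruption
    order = list(range(len(mixed)))
    random.Random(seed).shuffle(order)                  # interleave the domains
    return Subset(mixed, order)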
