
Swin Transformer


This repo is the official implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" as well as the follow-ups. It currently includes code and models for the following tasks:

Image Classification: Included in this repo. See get_started.md for a quick start.

Object Detection and Instance Segmentation: See Swin Transformer for Object Detection.

Semantic Segmentation: See Swin Transformer for Semantic Segmentation.

Video Action Recognition: See Video Swin Transformer.

Semi-Supervised Object Detection: See Soft Teacher.

SSL: Contrastive Learning: See Transformer-SSL.

SSL: Masked Image Modeling: See get_started.md#simmim-support.

Mixture-of-Experts: See get_started for more instructions.

Feature-Distillation: See Feature-Distillation.

Updates

12/29/2022

  1. Nvidia's FasterTransformer now supports Swin Transformer V2 inference, which brings significant speed improvements on T4 and A100 GPUs.

11/30/2022

  1. Models and code for Feature Distillation are released. Please refer to Feature-Distillation for details; checkpoints are available (FD-EsViT-Swin-B, FD-DeiT-ViT-B, FD-DINO-ViT-B, FD-CLIP-ViT-B, FD-CLIP-ViT-L).

09/24/2022

  1. Merged SimMIM, which is a Masked Image Modeling based pre-training approach applicable to Swin and SwinV2 (and also applicable for ViT and ResNet). Please refer to get started with SimMIM to play with SimMIM pre-training.

  2. Released a series of Swin and SwinV2 models pre-trained using the SimMIM approach (see MODELHUB for SimMIM), with model size ranging from SwinV2-Small-50M to SwinV2-giant-1B, data size ranging from ImageNet-1K-10% to ImageNet-22K, and iterations from 125k to 500k. You may leverage these models to study the properties of MIM methods. Please look into the data scaling paper for more details.

07/09/2022

News:

  1. SwinV2-G achieves 61.4 mIoU on ADE20K semantic segmentation (+1.5 mIoU over the previous SwinV2-G model) using an additional feature distillation (FD) approach, setting a new record on this benchmark. FD is an approach that can generally improve the fine-tuning performance of various pre-trained models, including DeiT, DINO, and CLIP. In particular, it improves the CLIP pre-trained ViT-L by +1.6% to reach 89.0% on ImageNet-1K image classification, making it the most accurate ViT-L model.
  2. Merged a PR from Nvidia that links to a faster Swin Transformer inference implementation with significant speed improvements on T4 and A100 GPUs.
  3. Merged a PR from Nvidia that enables an option to use pure FP16 (Apex O2) in training, while almost maintaining the accuracy.

06/03/2022

  1. Added Swin-MoE, the Mixture-of-Experts variant of Swin Transformer implemented using Tutel (an optimized Mixture-of-Experts implementation). Swin-MoE is introduced in the Tutel paper.

05/12/2022

  1. Pretrained models of Swin Transformer V2 on ImageNet-1K and ImageNet-22K are released.
  2. ImageNet-22K pretrained models for Swin-V1-Tiny and Swin-V2-Small are released.

03/02/2022

  1. Swin Transformer V2 and SimMIM were accepted by CVPR 2022. SimMIM is a self-supervised pre-training approach based on masked image modeling, a key technique that enables training the 3-billion-parameter Swin V2 model using 40x less labelled data than previous billion-scale models based on JFT-3B.

02/09/2022

  1. Integrated into Hugging Face Spaces 🤗 using Gradio. Try out the Web Demo on Hugging Face Spaces.

10/12/2021

  1. Swin Transformer received ICCV 2021 best paper award (Marr Prize).

08/09/2021

  1. Soft Teacher will appear at ICCV 2021. The code will be released in its GitHub repo. Soft Teacher is an end-to-end semi-supervised object detection method, achieving a new record on COCO test-dev: 61.3 box AP and 53.0 mask AP.

07/03/2021

  1. Added Swin MLP, an adaptation of Swin Transformer that replaces all multi-head self-attention (MHSA) blocks with MLP layers (more precisely, group linear layers). The shifted window configuration can also significantly improve the performance of vanilla MLP architectures.

06/25/2021

  1. Video Swin Transformer is released at Video-Swin-Transformer. Video Swin Transformer achieves state-of-the-art accuracy on a broad range of video recognition benchmarks, including action recognition (84.9 top-1 accuracy on Kinetics-400 and 86.1 top-1 accuracy on Kinetics-600 with ~20x less pre-training data and ~3x smaller model size) and temporal modeling (69.6 top-1 accuracy on Something-Something v2).

05/12/2021

  1. Used as a backbone for Self-Supervised Learning: Transformer-SSL

Using Swin Transformer as the backbone for self-supervised learning enables us to evaluate the transfer performance of the learnt representations on downstream tasks, which is missing in previous works due to their use of ViT/DeiT, which have not been well adapted to downstream tasks.

04/12/2021

Initial commits:

  1. Pretrained models on ImageNet-1K (Swin-T-IN1K, Swin-S-IN1K, Swin-B-IN1K) and ImageNet-22K (Swin-B-IN22K, Swin-L-IN22K) are provided.
  2. The supported code and models for ImageNet-1K image classification, COCO object detection and ADE20K semantic segmentation are provided.
  3. The cuda kernel implementation for the local relation layer is provided in branch LR-Net.

Introduction

Swin Transformer (the name Swin stands for Shifted window) is initially described in the arXiv paper and capably serves as a general-purpose backbone for computer vision. It is essentially a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while still allowing cross-window connections.
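
To make the idea concrete, here is a rough, self-contained sketch (assuming a toy 8x8 feature map and a window size of 4; it mirrors the window_partition helper used in this codebase but is not the repository's full implementation) of how a feature map is split into non-overlapping windows, and how the cyclic shift exposes cross-window connections:

import torch

def window_partition(x, window_size):
    """Split a (B, H, W, C) feature map into non-overlapping windows."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)

x = torch.randn(1, 8, 8, 96)                      # toy feature map (B, H, W, C)
windows = window_partition(x, 4)                  # regular windows: (4, 4, 4, 96)

# Shifted windows: roll the map by half the window size before partitioning,
# so tokens that sat on window borders in the previous layer can now attend to each other.
shifted = torch.roll(x, shifts=(-2, -2), dims=(1, 2))
shifted_windows = window_partition(shifted, 4)
print(windows.shape, shifted_windows.shape)       # both torch.Size([4, 4, 4, 96])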

Swin Transformer achieves strong performance on COCO object detection (58.7 box AP and 51.1 mask AP on test-dev) and ADE20K semantic segmentation (53.5 mIoU on val), surpassing previous models by a large margin.

teaser

Main Results on ImageNet with Pretrained Models

ImageNet-1K and ImageNet-22K Pretrained Swin-V1 Models

name pretrain resolution acc@1 acc@5 #params FLOPs FPS 22K model 1K model
Swin-T ImageNet-1K 224x224 81.2 95.5 28M 4.5G 755 - github/baidu/config/log
Swin-S ImageNet-1K 224x224 83.2 96.2 50M 8.7G 437 - github/baidu/config/log
Swin-B ImageNet-1K 224x224 83.5 96.5 88M 15.4G 278 - github/baidu/config/log
Swin-B ImageNet-1K 384x384 84.5 97.0 88M 47.1G 85 - github/baidu/config
Swin-T ImageNet-22K 224x224 80.9 96.0 28M 4.5G 755 github/baidu/config github/baidu/config
Swin-S ImageNet-22K 224x224 83.2 97.0 50M 8.7G 437 github/baidu/config github/baidu/config
Swin-B ImageNet-22K 224x224 85.2 97.5 88M 15.4G 278 github/baidu/config github/baidu/config
Swin-B ImageNet-22K 384x384 86.4 98.0 88M 47.1G 85 github/baidu github/baidu/config
Swin-L ImageNet-22K 224x224 86.3 97.9 197M 34.5G 141 github/baidu/config github/baidu/config
Swin-L ImageNet-22K 384x384 87.3 98.2 197M 103.9G 42 github/baidu github/baidu/config

ImageNet-1K and ImageNet-22K Pretrained Swin-V2 Models

name pretrain resolution window acc@1 acc@5 #params FLOPs FPS 22K model 1K model
SwinV2-T ImageNet-1K 256x256 8x8 81.8 95.9 28M 5.9G 572 - github/baidu/config
SwinV2-S ImageNet-1K 256x256 8x8 83.7 96.6 50M 11.5G 327 - github/baidu/config
SwinV2-B ImageNet-1K 256x256 8x8 84.2 96.9 88M 20.3G 217 - github/baidu/config
SwinV2-T ImageNet-1K 256x256 16x16 82.8 96.2 28M 6.6G 437 - github/baidu/config
SwinV2-S ImageNet-1K 256x256 16x16 84.1 96.8 50M 12.6G 257 - github/baidu/config
SwinV2-B ImageNet-1K 256x256 16x16 84.6 97.0 88M 21.8G 174 - github/baidu/config
SwinV2-B* ImageNet-22K 256x256 16x16 86.2 97.9 88M 21.8G 174 github/baidu/config github/baidu/config
SwinV2-B* ImageNet-22K 384x384 24x24 87.1 98.2 88M 54.7G 57 github/baidu/config github/baidu/config
SwinV2-L* ImageNet-22K 256x256 16x16 86.9 98.0 197M 47.5G 95 github/baidu/config github/baidu/config
SwinV2-L* ImageNet-22K 384x384 24x24 87.6 98.3 197M 115.4G 33 github/baidu/config github/baidu/config

Note:

  • SwinV2-B* (SwinV2-L*) with input resolutions of 256x256 and 384x384 are both fine-tuned from the same model pre-trained at a smaller input resolution of 192x192.
  • SwinV2-B* (384x384) achieves 78.08 acc@1 on ImageNet-1K-V2 while SwinV2-L* (384x384) achieves 78.31.

ImageNet-1K Pretrained Swin MLP Models

name pretrain resolution acc@1 acc@5 #params FLOPs FPS 1K model
Mixer-B/16 ImageNet-1K 224x224 76.4 - 59M 12.7G - official repo
ResMLP-S24 ImageNet-1K 224x224 79.4 - 30M 6.0G 715 timm
ResMLP-B24 ImageNet-1K 224x224 81.0 - 116M 23.0G 231 timm
Swin-T/C24 ImageNet-1K 256x256 81.6 95.7 28M 5.9G 563 github/baidu/config
SwinMLP-T/C24 ImageNet-1K 256x256 79.4 94.6 20M 4.0G 807 github/baidu/config
SwinMLP-T/C12 ImageNet-1K 256x256 79.6 94.7 21M 4.0G 792 github/baidu/config
SwinMLP-T/C6 ImageNet-1K 256x256 79.7 94.9 23M 4.0G 766 github/baidu/config
SwinMLP-B ImageNet-1K 224x224 81.3 95.3 61M 10.4G 409 github/baidu/config

Note: the access code for baidu is swin. C24 means each head has 24 channels.

ImageNet-22K Pretrained Swin-MoE Models

  • Please refer to get_started for instructions on running Swin-MoE.
  • Pretrained models for Swin-MoE can be found in MODEL HUB

Main Results on Downstream Tasks

COCO Object Detection (2017 val)

Backbone Method pretrain Lr Schd box mAP mask mAP #params FLOPs
Swin-T Mask R-CNN ImageNet-1K 3x 46.0 41.6 48M 267G
Swin-S Mask R-CNN ImageNet-1K 3x 48.5 43.3 69M 359G
Swin-T Cascade Mask R-CNN ImageNet-1K 3x 50.4 43.7 86M 745G
Swin-S Cascade Mask R-CNN ImageNet-1K 3x 51.9 45.0 107M 838G
Swin-B Cascade Mask R-CNN ImageNet-1K 3x 51.9 45.0 145M 982G
Swin-T RepPoints V2 ImageNet-1K 3x 50.0 - 45M 283G
Swin-T Mask RepPoints V2 ImageNet-1K 3x 50.3 43.6 47M 292G
Swin-B HTC++ ImageNet-22K 6x 56.4 49.1 160M 1043G
Swin-L HTC++ ImageNet-22K 3x 57.1 49.5 284M 1470G
Swin-L HTC++* ImageNet-22K 3x 58.0 50.4 284M -

Note: * indicates multi-scale testing.

ADE20K Semantic Segmentation (val)

Backbone Method pretrain Crop Size Lr Schd mIoU mIoU (ms+flip) #params FLOPs
Swin-T UPerNet ImageNet-1K 512x512 160K 44.51 45.81 60M 945G
Swin-S UPerNet ImageNet-1K 512x512 160K 47.64 49.47 81M 1038G
Swin-B UPerNet ImageNet-1K 512x512 160K 48.13 49.72 121M 1188G
Swin-B UPerNet ImageNet-22K 640x640 160K 50.04 51.66 121M 1841G
Swin-L UPerNet ImageNet-22K 640x640 160K 52.05 53.53 234M 3230G

Citing Swin Transformer

@inproceedings{liu2021Swin,
  title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}

Citing Local Relation Networks (the first full-attention visual backbone)

@inproceedings{hu2019local,
  title={Local Relation Networks for Image Recognition},
  author={Hu, Han and Zhang, Zheng and Xie, Zhenda and Lin, Stephen},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  pages={3464--3473},
  year={2019}
}

Citing Swin Transformer V2

@inproceedings{liu2021swinv2,
  title={Swin Transformer V2: Scaling Up Capacity and Resolution}, 
  author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo},
  booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

Citing SimMIM (a self-supervised approach that enables SwinV2-G)

@inproceedings{xie2021simmim,
  title={SimMIM: A Simple Framework for Masked Image Modeling},
  author={Xie, Zhenda and Zhang, Zheng and Cao, Yue and Lin, Yutong and Bao, Jianmin and Yao, Zhuliang and Dai, Qi and Hu, Han},
  booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

Citing SimMIM-data-scaling

@article{xie2022data,
  title={On Data Scaling in Masked Image Modeling},
  author={Xie, Zhenda and Zhang, Zheng and Cao, Yue and Lin, Yutong and Wei, Yixuan and Dai, Qi and Hu, Han},
  journal={arXiv preprint arXiv:2206.04664},
  year={2022}
}

Citing Swin-MoE

@misc{hwang2022tutel,
      title={Tutel: Adaptive Mixture-of-Experts at Scale}, 
      author={Changho Hwang and Wei Cui and Yifan Xiong and Ziyue Yang and Ze Liu and Han Hu and Zilong Wang and Rafael Salas and Jithin Jose and Prabhat Ram and Joe Chau and Peng Cheng and Fan Yang and Mao Yang and Yongqiang Xiong},
      year={2022},
      eprint={2206.03382},
      archivePrefix={arXiv}
}

Getting Started

For image classification, please see get_started.md for detailed instructions.

Third-party Usage and Experiments

In this section, we cross-link third-party repositories that use Swin and report results. You can let us know by raising an issue.

(Note: please report accuracy numbers and provide trained models in your new repository so that others can get a sense of correctness and model behavior.)

[12/29/2022] Swin Transformers (V2) inference implemented in FasterTransformer: FasterTransformer

[06/30/2022] Swin Transformers (V1) inference implemented in FasterTransformer: FasterTransformer

[05/12/2022] Swin Transformers (V1) implemented in TensorFlow, with the pre-trained parameters ported into them. Find the implementation, TensorFlow weights, and code examples in this repository.

[04/06/2022] Swin Transformer for Audio Classification: Hierarchical Token Semantic Audio Transformer.

[12/21/2021] Swin Transformer for StyleGAN: StyleSwin

[12/13/2021] Swin Transformer for Face Recognition: FaceX-Zoo

[08/29/2021] Swin Transformer for Image Restoration: SwinIR

[08/12/2021] Swin Transformer for person reID: https://github.com/layumi/Person_reID_baseline_pytorch

[06/29/2021] Swin-Transformer in PaddleClas and inference based on whl package: https://github.com/PaddlePaddle/PaddleClas

[04/14/2021] Swin for RetinaNet in Detectron2: https://github.com/xiaohu2015/SwinT_detectron2.

[04/16/2021] Included in a famous model zoo: https://github.com/rwightman/pytorch-image-models.

[04/20/2021] Swin-Transformer classifier inference using TorchServe: https://github.com/kamalkraj/Swin-Transformer-Serve

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.


Issues

The window_size setting of model?

Is the window size still 7 for 384x384 ImageNet inputs, or on ADE20K? Wouldn't that raise errors? So what is the policy for setting the window size, or am I just misunderstanding the paper?

output = model(image)

output = model(image)
# output.shape = torch.Size([1, 1000])
output.topk(5, dim=1)  # the line added after the forward pass

The weight file I use is swin_tiny_patch4_window7_224.pth.

[bird image]

I use the bird image (above) as the model input. The result of output.topk is:
torch.return_types.topk(
values=tensor([[1.7864, 1.6988, 1.6899, 1.6638, 1.5892]]),
indices=tensor([[627, 468, 450, 41, 160]]))

I want to know the ImageNet output tensor index-to-label mapping.
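
For what it is worth, one common way to get such a mapping is an index-to-label table such as the widely used imagenet_class_index.json (the file name and its availability are assumptions, not part of this repo); a minimal sketch, reusing model and image from the snippet above:

import json
import torch

# imagenet_class_index.json maps "idx" -> [wnid, label], e.g. "1" -> ["n01443537", "goldfish"]
with open("imagenet_class_index.json") as f:
    class_index = json.load(f)

with torch.no_grad():
    logits = model(image)                  # (1, 1000), as in the snippet above
values, indices = logits.topk(5, dim=1)
for score, idx in zip(values[0].tolist(), indices[0].tolist()):
    wnid, label = class_index[str(idx)]
    print(f"{idx:4d} {wnid} {label} ({score:.3f})")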

How to decide the learning rate for a certain experiment?

Hi, I noted that for the 4 object detection frameworks in your paper, you use the same lr setting (AdamW with lr=0.0001), but their base lr settings are different:
Cascade Mask R-CNN: SGD with lr=0.02
ATSS: SGD with lr=0.01
RepPoints v2: SGD with lr=0.01
Sparse R-CNN: AdamW with lr=0.000025
Leaving the optimizer type aside, how do you decide the lr when using Swin Transformer as the backbone for these 4 frameworks? It seems your lr has nothing to do with their original ones, which puzzles me. In my view, the lr should be adjusted according to the network structure and the loss formulation, yet you use the same setting for all of them. How do you explain this? Any advice is appreciated, thanks.

CalledProcessError & RuntimeError, how to solve it?

When I run the command

python -m torch.distributed.launch --nproc_per_node 1 --master_port 12345 main.py --eval --cfg configs/swin_base_patch4_window7_224.yaml --resume swin_base_patch4_window7_224.pth --data-path ../imagenet --amp-opt-level O0

I get the following error
[screenshot of the error]
Could anyone tell me what is wrong? How can I solve this problem? I newly created a conda virtual environment named swin and installed everything as get_started.md describes.

naive/kernel sliding windows

Hi, I wonder what the naive and kernel sliding-window implementations are; the paper does not provide much description of them.

Why is the speed-up so significant?

Looking forward to your reply!
Thanks

Error!!

ValueError: Error initializing torch.distributed using env:// rendezvous: environment variable RANK expected, but not set

APEX Gradient overflow

When I train Swin with opt level O1, I get:

Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 131072.0 [2021-04-25 05:35:19 swin_base_patch4_window7_224](main_prune.py 310): INFO Train: [0/300][4050/5004] eta 0:17:06 lr 0.000500 time 1.1737 (1.0765) loss 3.2572 (3.3279) grad_norm 1.0323 (nan) mem 4814MB

Is this normal?

large resolution pretrain or fine-tune using resolution

Hi, thanks for your project! I have some questions.
Does the structure of Swin Transformer limit its ability to fine-tune at a larger resolution? For example, similar to DeiT, could it use a lower training resolution and fine-tune the network at a larger resolution by interpolating the positional encoding? Moreover, is there a larger-resolution (e.g. 448) pretrained model on ImageNet-22K?

How to finetune on larger input image size?

If I pre-train Swin-T with a 224 input image size, how can I fine-tune it to get Swin-T for a 320 input image size?
In your paper, you claim the 384^2 input models are obtained by fine-tuning:

For other resolutions such as 384^2, we fine-tune the models trained at 224^2 resolution, instead of training from scratch, to reduce GPU consumption.

However, in this implementation the fine-tuning is hard to do because of the relative_position_bias_table and attn_mask tensors: if I change the input size, these two tensors also change.

So how can I modify the code to support fine-tuning at a larger input image size?
Thanks!
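
One common workaround (a sketch under assumptions, not necessarily this repository's exact fine-tuning path) is to bicubically resize the relative_position_bias_table of each block to the new window size, and to drop attn_mask from the checkpoint so it is rebuilt for the new resolution:

import torch
import torch.nn.functional as F

def resize_rel_pos_bias(table, old_window, new_window):
    """Resize a ((2*old_window-1)**2, num_heads) bias table to a new window size."""
    num_heads = table.shape[1]
    old_size, new_size = 2 * old_window - 1, 2 * new_window - 1
    table_2d = table.permute(1, 0).view(1, num_heads, old_size, old_size)
    table_2d = F.interpolate(table_2d, size=(new_size, new_size),
                             mode='bicubic', align_corners=False)
    return table_2d.view(num_heads, new_size * new_size).permute(1, 0)

# example: adapt a window-7 table (13x13 relative offsets) to window 10 (19x19 offsets)
old_table = torch.randn(13 * 13, 4)
new_table = resize_rel_pos_bias(old_table, old_window=7, new_window=10)
print(new_table.shape)  # torch.Size([361, 4])

The resized tables can then be written back into the state dict before loading it with strict=False, so any leftover attention masks are ignored.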

Model inference speed

Hello, thank you for sharing. When testing the model inference speed, I used your swin_tiny_patch4_window7_224.pth, but the inference time for a 224x224 tensor input on a V100 is about 15 ms, which differs from the throughput of 755.2 images/s reported in the paper.
May I ask where the problem might be? (Screenshots of my timing code omitted.)
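
Part of the gap is likely methodology: the 755 images/s figure is a throughput measured with large batches and explicit CUDA synchronization, whereas timing one 224x224 tensor end to end also counts kernel-launch and host overhead. A minimal timing sketch (the batch size and iteration counts here are arbitrary assumptions), reusing the model loaded above:

import time
import torch

model = model.cuda().eval()                        # the Swin-T model loaded above
x = torch.randn(64, 3, 224, 224, device='cuda')    # throughput is usually measured with a large batch

with torch.no_grad():
    for _ in range(10):                            # warm-up iterations
        model(x)
    torch.cuda.synchronize()                       # make sure warm-up kernels have finished
    start = time.time()
    for _ in range(30):
        model(x)
    torch.cuda.synchronize()                       # wait for all kernels before stopping the clock
    elapsed = time.time() - start

print(f"throughput: {30 * x.shape[0] / elapsed:.1f} images/s")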

about gpu

Hi! Which GPUs, and how many, did you use to train each model (Swin-T, S, B, L)?

Eval acc is 0, when I set `--amp-opt-level` as `O0`

Hi, I downloaded the models from get_started.md and want to evaluate on ImageNet-1K.

  • The eval accuracy is 0, as shown below.
[2021-04-13 18:15:43 swin_tiny_patch4_window7_224](main.py 266): INFO Test: [0/98]	Time 5.697 (5.697)	Loss 9.3819 (9.3819)	Acc@1 0.000 (0.000)	Acc@5 0.586 (0.586)	Mem 2502MB
[2021-04-13 18:15:47 swin_tiny_patch4_window7_224](main.py 266): INFO Test: [10/98]	Time 0.285 (0.893)	Loss 9.3991 (9.4262)	Acc@1 0.000 (0.018)	Acc@5 0.391 (0.178)	Mem 2503MB
[2021-04-13 18:15:50 swin_tiny_patch4_window7_224](main.py 266): INFO Test: [20/98]	Time 0.554 (0.638)	Loss 9.4262 (9.4286)	Acc@1 0.195 (0.028)	Acc@5 0.391 (0.270)	Mem 2503MB
[2021-04-13 18:15:53 swin_tiny_patch4_window7_224](main.py 266): INFO Test: [30/98]	Time 0.472 (0.535)	Loss 9.3771 (9.4292)	Acc@1 0.195 (0.063)	Acc@5 0.391 (0.290)	Mem 2503MB
[2021-04-13 18:15:57 swin_tiny_patch4_window7_224](main.py 266): INFO Test: [40/98]	Time 0.507 (0.490)	Loss 9.4310 (9.4236)	Acc@1 0.195 (0.067)	Acc@5 0.586 (0.286)	Mem 2503MB
[2021-04-13 18:16:00 swin_tiny_patch4_window7_224](main.py 266): INFO Test: [50/98]	Time 0.299 (0.458)	Loss 9.4321 (9.4172)	Acc@1 0.000 (0.092)	Acc@5 0.195 (0.341)	Mem 2503MB
[2021-04-13 18:16:03 swin_tiny_patch4_window7_224](main.py 266): INFO Test: [60/98]	Time 0.197 (0.436)	Loss 9.4335 (9.4172)	Acc@1 0.195 (0.090)	Acc@5 0.195 (0.336)	Mem 2503MB
[2021-04-13 18:16:07 swin_tiny_patch4_window7_224](main.py 266): INFO Test: [70/98]	Time 0.235 (0.420)	Loss 9.4177 (9.4207)	Acc@1 0.000 (0.091)	Acc@5 0.391 (0.322)	Mem 2503MB
[2021-04-13 18:16:10 swin_tiny_patch4_window7_224](main.py 266): INFO Test: [80/98]	Time 0.240 (0.408)	Loss 9.3358 (9.4199)	Acc@1 0.586 (0.096)	Acc@5 0.586 (0.323)	Mem 2503MB
[2021-04-13 18:16:13 swin_tiny_patch4_window7_224](main.py 266): INFO Test: [90/98]	Time 0.232 (0.396)	Loss 9.3683 (9.4161)	Acc@1 0.195 (0.097)	Acc@5 0.391 (0.324)	Mem 2503MB
[2021-04-13 18:16:15 swin_tiny_patch4_window7_224](main.py 272): INFO  * Acc@1 0.096 Acc@5 0.318
[2021-04-13 18:16:15 swin_tiny_patch4_window7_224](main.py 121): INFO Accuracy of the network on the 50000 test images: 0.1%
  • The shell command is as follows.
python3.7 -m torch.distributed.launch \
    --nproc_per_node 4 \
    --master_port 12345 \
    main.py \
        --eval \
        --cfg="configs/swin_tiny_patch4_window7_224.yaml"  \
        --resume="./swin_tiny_patch4_window7_224.pth" \
        --data-path="/data/ILSVRC2012"
  • The only difference is shown in a screenshot (omitted here).

  • Could you please help me see why it does not work? Thanks!

Datasets

How can I get the following dataset?

data
└── ImageNet-Zip
├── train_map.txt
├── train.zip
├── val_map.txt
└── val.zip

We are now using the standard folder dataset, which is slow: about 2 hours per epoch.
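
For reference, a rough sketch of packing an ImageFolder-style split into a zip plus a map file. The exact map format expected by the repo's zip reader is an assumption here, so please check it against get_started.md and data/zipreader.py before relying on it:

import os
import zipfile

src = "imagenet/val"                                  # hypothetical ImageFolder layout: val/<class>/<image>
classes = sorted(os.listdir(src))
class_to_idx = {c: i for i, c in enumerate(classes)}  # class folder name -> integer label

with zipfile.ZipFile("val.zip", "w") as zf, open("val_map.txt", "w") as mapf:
    for cls in classes:
        for name in sorted(os.listdir(os.path.join(src, cls))):
            rel = f"{cls}/{name}"
            zf.write(os.path.join(src, cls, name), arcname=rel)   # store image under its relative path
            mapf.write(f"{rel}\t{class_to_idx[cls]}\n")           # assumed "path<TAB>label" map format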

Minor discrepancy between training-log reported accuracy and evaluation accuracy on ImageNet

Thanks for releasing the code!

We use your codebase to train and test some models and evaluate their performance on ImageNet-1K (without ImageNet-22K pretraining). According to the training log, the max accuracy is 78.58, but when we evaluate in evaluation mode by resuming the best-performing checkpoint, the accuracy becomes 78.7. This discrepancy between the training-log accuracy and the evaluation accuracy happens for many models.

Here is our evaluation script. Could you please let us know why?

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch \
                                    --nproc_per_node=1 \
                                    --master_port 12345 \
                                    main.py \
                                    --eval \
                                    --cfg configs/our_model.yaml \
                                    --batch-size 2 \
                                    --resume 'output/our_model/best_ckpt.pth' \
                                    --data-path "path/to/dataset" \
                                    --zip 

Many thanks!

The Question about the mask of window attention

Nice work! I have been reading your code recently, but I cannot fully understand the implementation of the mask in shifted window attention.

I drew a simple picture (below); the red cells are masked, and I chose a window size of 2 and a shift size of 1.

I think the mask should look like my drawing (image omitted), but your code generates the mask below:

import torch
import torch.nn as nn


def window_partition(x, window_size):
    """
    Args:
        x: (B, H, W, C)
        window_size (int): window size

    Returns:
        windows: (num_windows*B, window_size, window_size, C)
    """
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
    return windows


window_size = 2
shift_size = 1
H, W = 4, 4
img_mask = torch.zeros((1, H, W, 1))  # 1 H W 1
h_slices = (slice(0, -window_size),
            slice(-window_size, -shift_size),
            slice(-shift_size, None))
w_slices = (slice(0, -window_size),
            slice(-window_size, -shift_size),
            slice(-shift_size, None))

cnt = 0
for h in h_slices:
    for w in w_slices:
        img_mask[:, h, w, :] = cnt
        cnt += 1

mask_windows = window_partition(img_mask, window_size)  # nW, window_size, window_size, 1
mask_windows = mask_windows.view(-1, window_size * window_size)
attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
attn_mask = attn_mask.unsqueeze(1).unsqueeze(0)

"""
tensor([[[[[   0.,    0.,    0.,    0.],
           [   0.,    0.,    0.,    0.],
           [   0.,    0.,    0.,    0.],
           [   0.,    0.,    0.,    0.]]],


         [[[   0., -100.,    0., -100.],
           [-100.,    0., -100.,    0.],
           [   0., -100.,    0., -100.],
           [-100.,    0., -100.,    0.]]],


         [[[   0.,    0., -100., -100.],
           [   0.,    0., -100., -100.],
           [-100., -100.,    0.,    0.],
           [-100., -100.,    0.,    0.]]],


         [[[   0., -100., -100., -100.],
           [-100.,    0., -100., -100.],
           [-100., -100.,    0., -100.],
           [-100., -100., -100.,    0.]]]]])
"""

I cannot understand it; could you help me out?
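
For reference, a minimal sketch (shapes follow the toy example above, not the repository's exact code path) of how such a mask is consumed inside windowed self-attention: adding -100 before the softmax drives the masked attention weights towards zero, so tokens that become neighbours only because of the cyclic shift do not attend to each other.

import torch

num_windows, tokens, heads, dim = 4, 4, 2, 32      # 4 windows of 2x2 tokens, as in the toy example
q = k = v = torch.randn(num_windows, heads, tokens, dim)

attn = (q @ k.transpose(-2, -1)) / dim ** 0.5      # raw attention logits: (nW, heads, N, N)
# attn_mask from the snippet above has shape (1, nW, 1, N, N); squeeze the leading batch
# dimension and let it broadcast over the heads dimension when adding to the logits.
attn = attn + attn_mask.squeeze(0)                 # (nW, 1, N, N) broadcasts over heads
attn = attn.softmax(dim=-1)
out = attn @ v
print(out.shape)                                   # torch.Size([4, 2, 4, 32])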

Grad Norm go to inf while training still goes on

[2021-04-25 21:57:25 swin_tiny_patch4_window7_224](main.py 221): INFO Train: [3/300][370/1251] eta 0:08:57 lr 0.000166 time 0.6068 (0.6098) loss 6.3568 (6.1324) grad_norm 2.0382 (inf) mem 11818MB
[2021-04-25 21:57:31 swin_tiny_patch4_window7_224](main.py 221): INFO Train: [3/300][380/1251] eta 0:08:51 lr 0.000166 time 0.6025 (0.6097) loss 5.5428 (6.1310) grad_norm 2.3143 (inf) mem 11818MB
[2021-04-25 21:57:37 swin_tiny_patch4_window7_224](main.py 221): INFO Train: [3/300][390/1251] eta 0:08:44 lr 0.000166 time 0.6019 (0.6095) loss 5.6160 (6.1325) grad_norm 2.1987 (inf) mem 11818MB
[2021-04-25 21:57:43 swin_tiny_patch4_window7_224](main.py 221): INFO Train: [3/300][400/1251] eta 0:08:38 lr 0.000167 time 0.6021 (0.6094) loss 5.8623 (6.1306) grad_norm 2.3630 (inf) mem 11818MB
[2021-04-25 21:57:49 swin_tiny_patch4_window7_224](main.py 221): INFO Train: [3/300][410/1251] eta 0:08:32 lr 0.000167 time 0.6053 (0.6092) loss 6.3662 (6.1315) grad_norm 1.8832 (inf) mem 11818MB
[2021-04-25 21:57:55 swin_tiny_patch4_window7_224](main.py 221): INFO Train: [3/300][420/1251] eta 0:08:26 lr 0.000168 time 0.6034 (0.6091) loss 6.3618 (6.1325) grad_norm 2.0002 (inf) mem 11818MB
[2021-04-25 21:58:01 swin_tiny_patch4_window7_224](main.py 221): INFO Train: [3/300][430/1251] eta 0:08:19 lr 0.000168 time 0.6029 (0.6090) loss 6.3323 (6.1320) grad_norm 2.1190 (inf) mem 11818MB
[2021-04-25 21:58:07 swin_tiny_patch4_window7_224](main.py 221): INFO Train: [3/300][440/1251] eta 0:08:13 lr 0.000168 time 0.6033 (0.6089) loss 5.6586 (6.1266) grad_norm 1.6829 (inf) mem 11818MB
[2021-04-25 21:58:13 swin_tiny_patch4_window7_224](main.py 221): INFO Train: [3/300][450/1251] eta 0:08:07 lr 0.000169 time 0.5933 (0.6088) loss 5.9927 (6.1264) grad_norm 2.2850 (inf) mem 11818MB
[2021-04-25 21:58:19 swin_tiny_patch4_window7_224](main.py 221): INFO Train: [3/300][460/1251] eta 0:08:01 lr 0.000169 time 0.6342 (0.6087) loss 6.3210 (6.1273) grad_norm 1.8936 (inf) mem 11818MB
[2021-04-25 21:58:25 swin_tiny_patch4_window7_224](main.py 221): INFO Train: [3/300][470/1251] eta 0:07:55 lr 0.000170 time 0.6032 (0.6086) loss 5.5658 (6.1270) grad_norm 1.9723 (inf) mem 11818MB
[2021-04-25 21:58:31 swin_tiny_patch4_window7_224](main.py 221): INFO Train: [3/300][480/1251] eta 0:07:49 lr 0.000170 time 0.6028 (0.6085) loss 6.1317 (6.1250) grad_norm 1.9728 (inf) mem 11818MB
[2021-04-25 21:58:38 swin_tiny_patch4_window7_224](main.py 221): INFO Train: [3/300][490/1251] eta 0:07:42 lr 0.000170 time 0.6038 (0.6084) loss 6.2407 (6.1258) grad_norm 1.9203 (inf) mem 11818MB

Here is a snapshot of the logs. Why does grad_norm go to infinity while training still continues?

Best Wishes

Where to download CityscapesDataset

Dear authors,
when I run train.py, the following error happens:
FileNotFoundError: CityscapesDataset: No such file or directory: 'data/cityscapes/leftImg8bit/train'

Training interrupted when training custom model?

I was using swin-tiny as the backbone of my model, but I encountered this error:

RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel; (2) making sure all forward function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).

I did not modify any parameter after the model was wrapped by DDP. Do you know why?
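
The error message itself points to the usual workaround; a minimal sketch (assuming a standard single-node DDP setup where local_rank comes from the launcher):

import torch

# Enabling unused-parameter detection lets DDP tolerate model branches whose outputs
# do not contribute to the loss in a given iteration. The cleaner fix is to make sure
# every forward output participates in the loss.
model = torch.nn.parallel.DistributedDataParallel(
    model.cuda(), device_ids=[local_rank], find_unused_parameters=True)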

an issue in image classification code --swin_transformer

        mask_windows = window_partition(img_mask, self.window_size)  # nW, window_size, window_size, 1
        mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
        attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)  # I think there is an error here
        attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))

I think it should instead be:

attn_mask = mask_windows.unsqueeze(2) - mask_windows.unsqueeze(1)

Questions about position bias

Why is the relative position bias not shared across different blocks, and why is the position bias even different for different heads within the same block?

Support non-square window_size

Hi, could it support a non-square window_size? I think this would be useful when W is much larger than H, for example when the input size is [224, 224x8].
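
The repository's window_partition assumes a square window, but the reshaping generalizes directly to a rectangular (Wh, Ww) window; a minimal sketch (the function name and the wide toy input are assumptions):

import torch

def window_partition_rect(x, window_size):
    """Split (B, H, W, C) into non-overlapping rectangular windows of size (Wh, Ww)."""
    B, H, W, C = x.shape
    Wh, Ww = window_size
    x = x.view(B, H // Wh, Wh, W // Ww, Ww, C)
    return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, Wh, Ww, C)

x = torch.randn(1, 224, 224 * 8, 32)           # very wide input, as in the issue
windows = window_partition_rect(x, (7, 56))    # windows much wider than tall
print(windows.shape)                           # torch.Size([1024, 7, 56, 32])

Note that the relative position bias table and the shift sizes would also need to be defined per axis for this to work end to end.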

Accuracy curve for each epoch

Thanks for providing this excellent work. I am trying to reproduce the results of the Swin-T model. Could you please provide an accuracy curve w.r.t. epochs, which can serve as a reference for me to validate the correctness of my code and experiments.

Thanks a lot!

Some problems about the relative position embedding

Hello. I find that the implementation of the relative position embedding differs from the description in the paper "Self-Attention with Relative Position Representations". Could you explain a little more about how you implement it? Thanks a lot.
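
For reference, a minimal sketch of the Swin-style learned relative position bias: a table holds one learnable scalar per head for each of the (2Wh-1)(2Ww-1) possible relative offsets inside a Wh x Ww window, and a precomputed index gathers the right entry for every token pair. This follows the formulation in the Swin paper; check the repository's WindowAttention module for the authoritative code.

import torch
import torch.nn as nn

Wh, Ww, num_heads = 7, 7, 3

# one learnable bias per possible relative offset, per head
relative_position_bias_table = nn.Parameter(
    torch.zeros((2 * Wh - 1) * (2 * Ww - 1), num_heads))

# precompute, for every pair of tokens in the window, which table row to use
# (indexing='ij' requires PyTorch >= 1.10)
coords = torch.stack(torch.meshgrid(torch.arange(Wh), torch.arange(Ww), indexing='ij'))
coords_flat = coords.flatten(1)                                       # (2, Wh*Ww)
relative_coords = coords_flat[:, :, None] - coords_flat[:, None, :]   # (2, N, N)
relative_coords = relative_coords.permute(1, 2, 0).contiguous()       # (N, N, 2)
relative_coords[:, :, 0] += Wh - 1                                    # shift offsets to start at 0
relative_coords[:, :, 1] += Ww - 1
relative_coords[:, :, 0] *= 2 * Ww - 1
relative_position_index = relative_coords.sum(-1)                     # (N, N)

# at attention time, gather the bias and add it to the (num_heads, N, N) logits
bias = relative_position_bias_table[relative_position_index.view(-1)]
bias = bias.view(Wh * Ww, Wh * Ww, num_heads).permute(2, 0, 1).contiguous()
print(bias.shape)                                                     # torch.Size([3, 49, 49])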

error about apex.amp (install apex)

I used the "pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./" to install apex.

when I "from apex import amp", the error is "ModuleNotFoundError: No module named 'amp'".

My solution is "pip install -v --no-cache-dir ./", then everything is OK !!!!

train and infer on different input sizes

Hi,
thanks for releasing this awesome repo.

I wanted to know whether it is possible to train on inputs of a certain resolution and then run inference on inputs with variable resolutions (smaller or larger than the resolution used for training).
Just to clarify, I mean without resizing or cropping the input to the training resolution during inference.

thanks

Inference code

I am wondering whether you could provide inference code for arbitrary images or an image folder. I can run the model on any given image, but it is not possible to know the class labels without downloading ImageNet.
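
In the meantime, a minimal single-image inference sketch; the preprocessing mirrors a typical ImageNet evaluation pipeline (the resize/crop sizes and normalization constants are the standard ImageNet ones and should be checked against this repo's config), and building/loading the model itself is omitted:

import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256, interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# `model` is assumed to be a Swin classifier built from this repo's configs and
# loaded with a released checkpoint such as swin_tiny_patch4_window7_224.pth.
model.eval()
img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=-1)
top5 = probs.topk(5, dim=1)
print(top5.indices, top5.values)   # indices still need an index-to-label table, see the issue above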
