Skip Connections Matter

This repository contains the code for Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets (ICLR 2020 Spotlight).

News

  • 11/18/2020 - We released more code and the dataset used in our paper (a subset of 5000 images from ImageNet) to help reproduce the reported results.
  • 02/20/2020 - arXiv posted and repository released.

Method

We propose the Skip Gradient Method (SGM) to generate adversarial examples using gradients more from the skip connections than from the residual modules. In particular, SGM utilizes a decay factor (gamma) to reduce the gradients from the residual modules.
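Concretely, a residual unit computes z_next = z + f(z), so its local gradient is 1 + f'(z); SGM decays only the residual-branch term to 1 + gamma * f'(z). A toy scalar sketch (illustrative values only, not the paper's implementation):

```python
# Toy illustration of the SGM decay factor.
# A residual unit computes z_next = z + f(z); its local gradient is 1 + f'(z).
# SGM multiplies only the residual-branch term by gamma:
def residual_grad(f_prime, gamma=1.0):
    """Local gradient of z + f(z) w.r.t. z, with the residual term decayed by gamma."""
    return 1.0 + gamma * f_prime

# Plain backprop vs. SGM (gamma = 0.2, the ResNet setting) through three
# stacked units, each with an assumed residual-branch derivative f'(z) = 0.5:
plain = 1.0
sgm = 1.0
for _ in range(3):
    plain *= residual_grad(0.5)            # (1 + 0.5)^3   = 3.375
    sgm *= residual_grad(0.5, gamma=0.2)   # (1 + 0.1)^3   = 1.331
print(plain, sgm)
```

The decayed product keeps more of the gradient flowing through the skip path relative to the residual branches, which is what makes the resulting adversarial examples transfer better.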

Requirements

This code is implemented in PyTorch, and we have tested the code under the following environment settings:

  • python = 3.7.6
  • torch = 1.7.0
  • torchvision = 0.8.1
  • advertorch = 0.2.2
  • pretrainedmodels = 0.7.4

Run the code

  1. Download the dataset from Google Drive or Baidu Drive (pw:55rk), and extract the images to the path ./SubImageNet224/
  2. Generate adversarial examples and save them into the path ./adv_images/. For ResNet-152 as the source model:
    python attack_sgm.py --gamma 0.2 --output_dir adv_images --arch resnet152 --batch-size 40
    For DenseNet-201 as the source model:
    python attack_sgm.py --gamma 0.5 --output_dir adv_images --arch densenet201 --batch-size 40
  3. Evaluate the transferability of the generated adversarial examples in ./adv_images/. For VGG19 with batch norm as the target model:
    python evaluate.py --input_dir adv_images --arch vgg19_bn

Results

  • Visualization

  • Reproduced results

We ran this code, and the attack success rate (1 - accuracy) against VGG19 is close to that reported in our paper:

| Source \ Method | PGD    | MI     | SGM    |
|-----------------|--------|--------|--------|
| ResNet-152      | 45.80% | 66.70% | 81.04% |
| DenseNet-201    | 57.82% | 75.38% | 82.58% |

Implementation

For easier reproduction, we provide more detailed information here.

Register backward hook for SGM

In fact, we manipulate gradients flowing through ReLU in utils_sgm, since there is no ReLU in skip connections:

  • For ResNet, there are "downsampling" modules in which the skip connection is replaced by a conv layer; we do not manipulate the gradients of the "downsampling" modules.

  • For DenseNet, we manipulate gradients in all dense blocks.
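The idea can be sketched as a backward hook that scales the gradient passing through a ReLU by gamma (a minimal toy sketch, not the repo's actual utils_sgm; it uses the newer register_full_backward_hook, whereas the repo, on torch 1.7, uses the older register_backward_hook):

```python
import torch
import torch.nn as nn

def sgm_relu_hook(gamma):
    # Scale the gradient flowing back through this ReLU by gamma.
    # Since skip connections contain no ReLU, only residual-branch
    # gradients are attenuated.
    def hook(module, grad_in, grad_out):
        return (gamma * grad_in[0],)
    return hook

# Toy demonstration on a bare ReLU (the real code walks a ResNet's
# named modules, registering hooks on ReLUs and skipping "downsampling"):
relu = nn.ReLU()
relu.register_full_backward_hook(sgm_relu_hook(0.2))

x = torch.tensor([1.0, -2.0, 3.0], requires_grad=True)
relu(x).sum().backward()
print(x.grad)  # positive entries get gradient 0.2 instead of 1.0
```

In a full ResNet, every residual branch passes through at least one such hooked ReLU while the skip path does not, so the decay compounds only along the residual branches.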

Pretrained models

All pretrained models used in our paper can be found online.

Citing this work

@inproceedings{wu2020skip,
    title={Skip connections matter: On the transferability of adversarial examples generated with resnets},
    author={Wu, Dongxian and Wang, Yisen and Xia, Shu-Tao and Bailey, James and Ma, Xingjun},
    booktitle={ICLR},
    year={2020}
}

Reference

[1] Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In CVPR, 2018.

[2] Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel. Ensemble Adversarial Training: Attacks and Defenses. In ICLR, 2018.

Issues

RuntimeError: hook 'backward_hook_norm' has returned an incorrect number of values (got 1, but expected 3)

Hello, Dongxian. Hope everything is going well.

Your paper is quite interesting.

I ran it for ResNet 18 and 50 but it gave me an error every time.

I don't know if it is because of the PyTorch version (1.7 in my Python environment) or not. The error happens in the line "inputs_adv = adversary.perturb(inputs, pred.detach().view(-1))."

    def backward_hook_norm(module, grad_in, grad_out):
        # normalize the gradient to avoid gradient explosion or vanishing
        std = torch.std(grad_in[0])
        return (grad_in[0] / std,)

If I remove the line that runs hook for BatchNorm, everything is fine. But I don't know if it will affect the attack performance a lot. Looks like transferability is not good currently after I remove it.

Please tell me if you have any idea how to fix it.

Thank you.

Question about image/perturbation resizing

Thank you for the good work :) Have a quick question.

For the experiment using RN152 as the source model and IncV3 as the target model: can I ask how you did the resizing step from 224x224 to 299x299? If you resized the 224x224 perturbation to 299x299, what algorithm did you use for upsampling (e.g., nearest, linear, or bilinear)?

Performance on Pre-Activation ResNet

Hello,

Thank you so much for providing your code!

I had a question about pre-activation ResNet. I am trying to implement your attack on my problem using pre-act ResNet18 and CIFAR-10, but the transferability is lower than (or at most similar to) the plain vanilla attack. Both the source and the target are pre-act ResNet18. The only thing I did differently from your code was to adjust the ReLU condition (since pre-act ResNet doesn't have a ReLU before the residual blocks, I removed the "not '0.relu' in name" condition).

I would very much appreciate it if you could let me know if you have tried this architecture, or if you have some insights about what could be going wrong (e.g. the source and target models are the same or the source model is not deep enough). Please let me know if you need more info about the architecture.

Thank you very much.

About adversarial training

Hello! I see you used adversarially trained models in your experiments. Could you share the PyTorch adversarial-training code and model weights? It looks like your code for that is written in TensorFlow.

Wondering the performance of SGM on CIFAR-10

Hi, thanks for making the code available!

I'm just wondering if you have tested SGM on CIFAR-10 by any chance? And if so, does SGM still expose stronger transferability? Thanks!

ensemble attack source code?

Hi, thanks for publishing the code of this paper!

Just wondering, would it be possible for you to release the ensemble-attack part of the code?

Or maybe point to a reference implementation by others?

Code for tensorflow version?

Hi, this is interesting work!
I see that the repo is the PyTorch version. Could you offer the TensorFlow code of SGM? Thanks!
