
backdoorbox's Introduction

BackdoorBox: An Open-sourced Python Toolbox for Backdoor Attacks and Defenses

Python 3.8 | PyTorch 1.8.0 | torchvision 0.9.0 | CUDA 11.1 | License: GPL

Backdoor attacks are emerging yet critical threats in the training process of deep neural networks (DNNs), where the adversary intends to embed a specific hidden backdoor into the model. The attacked DNN behaves normally on benign samples, whereas its predictions are maliciously changed whenever the adversary-specified trigger pattern appears. Many backdoor attacks and defenses now exist, and although most of them are open-sourced, there is still no toolbox that can easily and flexibly implement and compare them simultaneously.

BackdoorBox is an open-sourced Python toolbox, aiming to implement representative and advanced backdoor attacks and defenses under a unified framework that can be used in a flexible manner. We will keep updating this toolbox to track the latest backdoor attacks and defenses.

Currently, this toolbox is still under development (although the attack part is almost done) and there is no user manual yet. However, users can easily run our provided methods by referring to the example code for each implemented method in the tests sub-folder. Please refer to our paper for more details! In particular, you are always welcome to contribute your backdoor attacks or defenses by pull requests!

Toolbox Characteristics

  • Consistency: Instead of directly collecting and combining the original code of each method, we re-implement all methods in a unified manner. Specifically, variables serving the same function have a consistent name, and similar methods inherit the same base class for further development, share a unified workflow, and share the same core sub-functions (e.g., get_model()).
  • Simplicity: We provide code examples for each implemented backdoor attack and defense to explain how to use them, the definitions and default settings of all required attributes, and the necessary code comments. Users can easily use and extend our toolbox.
  • Flexibility: We allow users to easily obtain important intermediate outputs and components of each method (e.g., the poisoned dataset and the attacked/repaired model), use their own local samples and model structures for attacks and defenses, and interact with their own local code. The attack and defense modules can be used jointly or separately. You can also use your own local dataset via torchvision.datasets.DatasetFolder (see the examples of using the GTSRB dataset, and the sketch after this list).
  • Co-development: All code and development is hosted on GitHub to facilitate collaboration. Currently, more than seven contributors have helped develop the code base and others have contributed to code testing. This development paradigm facilitates rapid and comprehensive development and bug finding.
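
As a concrete illustration of the flexibility point, the sketch below wraps a local image-folder dataset (e.g., GTSRB) with torchvision.datasets.DatasetFolder so that it can be passed to an attack or defense in place of a built-in dataset. The directory layout, path, and transforms here are assumptions for illustration, not part of the toolbox.

```python
# Minimal sketch (assumed path, layout, and transforms): wrap a local
# image-folder dataset so it can be used by BackdoorBox attacks/defenses.
import cv2
from torchvision.datasets import DatasetFolder
from torchvision.transforms import Compose, Resize, ToPILImage, ToTensor

transform_train = Compose([
    ToPILImage(),        # cv2.imread returns a numpy array
    Resize((32, 32)),
    ToTensor(),
])

# Each class is assumed to live in its own sub-folder of data/GTSRB/train.
trainset = DatasetFolder(
    root='data/GTSRB/train',   # hypothetical path
    loader=cv2.imread,
    extensions=('png',),
    transform=transform_train,
    target_transform=None,
    is_valid_file=None,
)
```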

Backdoor Attacks

| Method | Source | Key Properties | Additional Notes |
| --- | --- | --- | --- |
| BadNets | BadNets: Evaluating Backdooring Attacks on Deep Neural Networks. IEEE Access, 2019. | poison-only | first backdoor attack |
| Blended | Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. arXiv, 2017. | poison-only, invisible | first invisible attack |
| Refool (simplified version) | Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks. ECCV, 2020. | poison-only, sample-specific | first stealthy attack with visible yet natural trigger |
| LabelConsistent | Label-Consistent Backdoor Attacks. arXiv, 2019. | poison-only, invisible, clean-label | first clean-label backdoor attack |
| TUAP | Clean-Label Backdoor Attacks on Video Recognition Models. CVPR, 2020. | poison-only, invisible, clean-label | first clean-label backdoor attack with optimized trigger pattern |
| SleeperAgent | Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch. NeurIPS, 2022. | poison-only, invisible, clean-label | effective clean-label backdoor attack |
| ISSBA | Invisible Backdoor Attack with Sample-Specific Triggers. ICCV, 2021. | poison-only, sample-specific, physical | first poison-only sample-specific attack |
| WaNet | WaNet - Imperceptible Warping-based Backdoor Attack. ICLR, 2021. | poison-only, invisible, sample-specific | |
| Blind (blended-based) | Blind Backdoors in Deep Learning Models. USENIX Security, 2021. | training-controlled | first training-controlled attack targeting loss computation |
| IAD | Input-Aware Dynamic Backdoor Attack. NeurIPS, 2020. | training-controlled, optimized, sample-specific | first training-controlled sample-specific attack |
| PhysicalBA | Backdoor Attack in the Physical World. ICLR Workshop, 2021. | training-controlled, physical | first physical backdoor attack |
| LIRA | LIRA: Learnable, Imperceptible and Robust Backdoor Attacks. ICCV, 2021. | training-controlled, invisible, optimized, sample-specific | |
| BATT | BATT: Backdoor Attack with Transformation-based Triggers. ICASSP, 2023. | poison-only, invisible, physical | |

Note: For the convenience of users, all implemented attacks support obtaining the poisoned dataset (via .get_poisoned_dataset()), obtaining the infected model (via .get_model()), and training with your own local samples (loaded via torchvision.datasets.DatasetFolder). Please refer to base.py and each attack's code for more details.
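
For orientation, the sketch below follows the pattern of the scripts in the tests sub-folder. The constructor arguments, schedule keys, and trigger settings are assumptions for illustration and may differ from the current code; consult the corresponding test script before relying on them.

```python
# Hedged sketch of the common attack workflow (BadNets on CIFAR-10).
# Argument names follow the tests examples but are not a guaranteed API.
import torch
import torch.nn as nn
import torchvision
import core  # BackdoorBox

transform = torchvision.transforms.ToTensor()
trainset = torchvision.datasets.CIFAR10('data', train=True, transform=transform, download=True)
testset = torchvision.datasets.CIFAR10('data', train=False, transform=transform, download=True)

# A 3x3 white square in the bottom-right corner as the trigger.
pattern = torch.zeros((32, 32), dtype=torch.uint8)
pattern[-3:, -3:] = 255
weight = torch.zeros((32, 32), dtype=torch.float32)
weight[-3:, -3:] = 1.0

badnets = core.BadNets(
    train_dataset=trainset,
    test_dataset=testset,
    model=core.models.ResNet(18),
    loss=nn.CrossEntropyLoss(),
    y_target=1,
    poisoned_rate=0.05,
    pattern=pattern,
    weight=weight,
)

# Intermediate outputs can be obtained directly.
poisoned_trainset, poisoned_testset = badnets.get_poisoned_dataset()

# 'schedule' is a dict of training settings; the keys below are illustrative.
schedule = {
    'device': 'GPU', 'CUDA_VISIBLE_DEVICES': '0', 'GPU_num': 1,
    'benign_training': False, 'batch_size': 128, 'num_workers': 4,
    'lr': 0.1, 'momentum': 0.9, 'weight_decay': 5e-4,
    'epochs': 200, 'save_dir': 'experiments', 'experiment_name': 'BadNets_CIFAR10',
}
badnets.train(schedule)
infected_model = badnets.get_model()
```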

Backdoor Defenses

| Method | Source | Defense Type | Additional Notes |
| --- | --- | --- | --- |
| AutoEncoderDefense | Neural Trojans. ICCD, 2017. | Sample Pre-processing | first pre-processing-based defense |
| ShrinkPad | Backdoor Attack in the Physical World. ICLR Workshop, 2021. | Sample Pre-processing | efficient defense |
| FineTuning | Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks. RAID, 2018. | Model Repairing | first defense based on model repairing |
| Pruning | Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks. RAID, 2018. | Model Repairing | |
| MCR | Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness. ICLR, 2020. | Model Repairing | |
| NAD | Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks. ICLR, 2021. | Model Repairing | first distillation-based defense |
| ABL | Anti-Backdoor Learning: Training Clean Models on Poisoned Data. NeurIPS, 2021. | Poison Suppression | |
| SCALE-UP | SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency. ICLR, 2023. | Input-level Backdoor Detection | black-box online detection |
| IBD-PSC | IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency. ICML, 2024. | Input-level Backdoor Detection | simple yet effective, safeguarded by theoretical analysis |
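
Defense modules can be used on their own or on the output of an attack module. The sketch below continues from the attack sketch above (testset, infected_model, and schedule are defined there) and uses FineTuning as an example; apart from get_model(), which the toolbox names explicitly, the constructor arguments and the repair() call are assumptions based on the attack-side naming and should be checked against the scripts in tests.

```python
# Hedged sketch of a model-repairing defense; argument/method names other
# than get_model() are assumptions -- see tests/ for the actual interface.
import torch.nn as nn
import core

defense = core.FineTuning(
    train_dataset=clean_trainset,   # placeholder: a small local benign dataset
    test_dataset=testset,
    model=infected_model,           # e.g., the model returned by an attack's get_model()
    layer=['full layers'],          # hypothetical: which layers to fine-tune
    loss=nn.CrossEntropyLoss(),
)

defense.repair(schedule)            # assumed entry point for model repairing
repaired_model = defense.get_model()
```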

Methods Under Development

  • DBD
  • SS
  • Neural Cleanse
  • DP
  • CutMix
  • AEVA
  • STRIP

Attack & Defense Benchmark

The benchmark is coming soon.

Contributors

| Organization | Contributors |
| --- | --- |
| Tsinghua University | Yiming Li, Mengxi Ya, Guanhao Gan, Kuofeng Gao, Xin Yan, Jia Xu, Tong Xu, Sheng Yang, Haoxiang Zhong, Linghui Zhu |
| Tencent Security Zhuque Lab | Yang Bai |
| ShanghaiTech University | Zhe Zhao |
| Harbin Institute of Technology, Shenzhen | Linshan Hou |

Citation

If our toolbox is useful for your research, please cite our paper(s) as follows:

@inproceedings{li2023backdoorbox,
  title={{BackdoorBox}: A Python Toolbox for Backdoor Learning},
  author={Li, Yiming and Ya, Mengxi and Bai, Yang and Jiang, Yong and Xia, Shu-Tao},
  booktitle={ICLR Workshop},
  year={2023}
}
@article{li2022backdoor,
  title={Backdoor learning: A survey},
  author={Li, Yiming and Jiang, Yong and Li, Zhifeng and Xia, Shu-Tao},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2022}
}

backdoorbox's People

Contributors

20000yshust, b34c0n5, chengxiao-luo, cyndixxxxx, doris1007, guanhaogan, hxzhong1997, kuofenggao, landandland, persistz, snyk-bot, songsci1024, spicy1007, thuyimingli, uooga, yamengxi


backdoorbox's Issues

Empirical study on the effect of `poisoned transform train index`?

Hi, I'm a graduate student from SJTU doing research on backdoor learning. Thank you for the project; it has been of great help to my study. I've previously experimented with different timings of injecting the backdoor trigger within the torchvision transforms, and I'm curious whether you have empirical results on the effect of injecting backdoor triggers at different stages of image augmentation.

Thank you very much for your time.

A question about SCALE-UP's filtering performance

I used SCALE-UP to filter badnets-cifar10 (all2one) with T=0.5; the results are as follows:

```
==========Test result on BA==========
[2024-08-09_16:53:09] Top-1 correct / Total: 9195/10000, Top-1 accuracy: 0.9195, Top-5 correct / Total: 9972/10000, Top-5 accuracy: 0.9972, time: 2.826444625854492

==========Test result on ASR==========
[2024-08-09_16:53:13] Top-1 correct / Total: 9742/10000, Top-1 accuracy: 0.9742, Top-5 correct / Total: 9962/10000, Top-5 accuracy: 0.9962, time: 3.306087017059326

---------data-free scenario----------
tp: 1512, tn: 7402, fp: 1793, fn: 8230
TPR: 15.52
FPR: 19.50
AUC: 0.4508
f1 score: 0.23177742009657393
---------data-limited scenario----------
tp: 1512, tn: 7402, fp: 1793, fn: 8230
TPR: 15.52
FPR: 19.50
AUC: 0.4508
f1 score: 0.23177742009657393
```

These numbers are so bad that they have me at my wits' end. Could you provide the hyperparameters, or your own experimental results, for SCALE-UP on badnets-cifar10?

Bug in attacks base.py

When I wanted to use multiple GPUs, I changed the schedule as follows: 'CUDA_VISIBLE_DEVICES': '0, 1', 'GPU_num': 2. The following error then appeared:

Traceback (most recent call last):
  File "example.py", line 146, in <module>
    badnets.train(schedule)
  File "/BackdoorBox-main/core/attacks/base.py", line 167, in train
    self.model = self.model.to(device)
UnboundLocalError: local variable 'device' referenced before assignment

I believe the error occurs because, when multiple GPUs are used, device is never assigned to a GPU, so when the program later tries to move data to the specified device it finds that device was never set.
My solution is to add one line of code that assigns device; screenshot below:
[screenshot of the proposed fix]

I don't know whether this solution is the best one. I look forward to your fixing this bug 🥂
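
For reference, a hedged sketch of what such a fix might look like is given below. The surrounding code is paraphrased from the train() logic described in this issue, not copied from the repository; the only point is assigning device in the multi-GPU branch before self.model.to(device) is reached.

```python
# Paraphrased sketch of the multi-GPU branch in core/attacks/base.py::train().
# The added line assigns 'device' so that self.model.to(device) below works.
import torch
import torch.nn as nn

if torch.cuda.is_available() and schedule['GPU_num'] > 0:
    if schedule['GPU_num'] == 1:
        device = torch.device('cuda:0')
    else:
        gpus = list(range(schedule['GPU_num']))
        device = torch.device('cuda:0')   # added: use the first GPU as the output device
        self.model = nn.DataParallel(self.model.cuda(), device_ids=gpus, output_device=gpus[0])
else:
    device = torch.device('cpu')

self.model = self.model.to(device)
```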

ModuleNotFoundError: No module named 'curves'


When running some of the examples in tests, the above error is raised from resnet_curve.py in core/models. The IDE gives no warning or error; it only appears at runtime. Has anyone else encountered this problem? Thanks.

Bug in WaNET attack

Hi, this is some amazing work and is really helpful for people just starting out. I think I might have found a bug and wanted to let you know.

In attacks -> WaNET.py -> add_trigger(self, img, noise=False)

  • If there is noise, the grid is calculated as: grid = self.grid + ins / self.h
  • However, in the next line grid is recalculated as: grid = torch.clamp(self.grid + ins / self.h, -1, 1)

I'm also attaching an image.


Bug in example.py

Traceback (most recent call last):
File "/home/objdet/Desktop/backdoor/BackdoorBox/example.py", line 66, in <module>
badnets = core.BadNets(
TypeError: __init__() got an unexpected keyword argument 'poisoned_transform_index'

When I ran example.py, the above error was thrown. I have checked the code in core/attacks/BadNets: its __init__() does not have the argument poisoned_transform_index.

Could you fix the bug?

Thanks.

Cannot reproduce Sleeper-agent

The script "test_SleeperAgent.py" with cifar10 dataset achieves only 9.99% ASR after poisoning:

Epoch 100[2023-06-01_00:45:58] train_acc: 99.80, test_acc: 93.11, source_asr: 8.60, full_asr: 9.99

A question about Spectral

Spectral computes tp, tn, fp, fn not with sklearn but with the developers' own compute_metric.py, which (forgive my shallow understanding) I cannot make sense of.
```python
import numpy as np

def compute_confusion_matrix(precited, expected):
    predicted = np.array(precited, dtype=int)
    expected = np.array(expected, dtype=int)
    part = precited ^ expected  # XOR the results: correct predictions become 0, wrong ones become 1
    ...
```

Leaving aside what precited and predicted are each supposed to mean (what language is "precited", anyway?), could a more standard approach be used for this XOR computation? In fact, the experimental numbers I get with Spectral are completely off: tp+fn equals neither the number of poisoned samples nor the number of clean samples.

P.S.: I poisoned cifar10 with badnets and ran the filtering with the code in test_Spectral.py; the final result was tp=58, tn=46473, fp=1027, fn=2442. Maybe I did something wrong, but I sincerely suggest attaching a developer-run result and its parameter settings for each defense method as a reference, since the numbers reported in papers are not always to be trusted.
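
As a hedged aside on the question above: the same counts can be obtained with sklearn instead of the hand-written XOR computation. The labels below are placeholder data for illustration, not values produced by the toolbox.

```python
# Hedged sketch: tp/tn/fp/fn via sklearn.metrics.confusion_matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

# Placeholder inputs: 1 = poisoned, 0 = clean.
true_labels = np.array([0, 0, 1, 1, 0, 1])       # ground-truth poison labels
predicted_labels = np.array([0, 1, 1, 0, 0, 1])  # samples flagged by the defense

tn, fp, fn, tp = confusion_matrix(true_labels, predicted_labels, labels=[0, 1]).ravel()
tpr = tp / (tp + fn)   # true positive rate
fpr = fp / (fp + tn)   # false positive rate
print(tp, tn, fp, fn, tpr, fpr)
```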

Save format and location of the poisoned data

Hello, in BackdoorBox-main/BackdoorBox-main/tests/test_BadNets.py, how is the poisoned data saved? The poisoned samples saved in the SCALE-UP-main/SCALE-UP-main/test_BadNets.py file seem to have a format problem.

A question about the output

Hello, I don't quite understand the meaning of the backdoor attack's output. How can I read the attack success rate from the output?
