
Introduction

What is PoltergeistAttack?

Autonomous vehicles increasingly exploit computer-vision-based object detection systems to perceive environments and make critical driving decisions. To improve image quality, image stabilizers with inertial sensors are added to alleviate image blurring caused by camera jitter. However, such a trend opens a new attack surface. This paper identifies a system-level vulnerability resulting from the combination of emerging image stabilizer hardware that is susceptible to acoustic manipulation and object detection algorithms that are subject to adversarial examples. By emitting deliberately designed acoustic signals, an adversary can control the output of an inertial sensor, which triggers unnecessary motion compensation and results in a blurred image, even if the camera is stable. The blurred images can then induce object misclassification, affecting safety-critical decision making. We model the feasibility of such acoustic manipulation and design an attack framework that can accomplish three types of attacks, i.e., hiding, creating, and altering objects. Evaluation results demonstrate the effectiveness of our attacks against four academic object detectors (YOLO V3/V4/V5 and Faster R-CNN) and one commercial detector (Apollo). We further introduce the concept of AMpLe attacks, a new class of system-level security vulnerabilities resulting from a combination of adversarial machine learning and physics-based injection of information-carrying signals into hardware.

How does PoltergeistAttack work?

[Figure: overview of the PoltergeistAttack pipeline]

Simulation Evaluation

Blur model

Adversarial blurred images are generated by our blur model, taking public autonomous-driving image datasets as input. The blur model can be used as a function in blur.py.
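To illustrate the idea, here is a minimal sketch of a linear motion-blur model: a directional kernel (parameterized by blur length and angle, standing in for the spurious motion compensation induced by the injected gyroscope readings) is convolved with the image. The function names and the exact kernel construction are illustrative assumptions; the published model in blur.py may differ.

```python
import numpy as np

def motion_blur_kernel(length, angle_deg):
    # Build a normalized linear motion-blur kernel: a line of ones
    # through the kernel center at the given angle, then normalize.
    # (Illustrative sketch; not necessarily the kernel used in blur.py.)
    size = length if length % 2 == 1 else length + 1
    kernel = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-c, c, size * 4):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c + t * np.sin(theta)))
        if 0 <= x < size and 0 <= y < size:
            kernel[y, x] = 1.0
    return kernel / kernel.sum()

def apply_blur(image, kernel):
    # Convolve a single-channel image with the kernel (edge padding).
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel[::-1, ::-1])
    return out
```

Because the kernel is normalized, blurring preserves overall image intensity; increasing `length` or varying `angle_deg` mimics stronger or differently oriented camera "motion".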

Datasets

We use two autonomous driving datasets, BDD100K and KITTI, in the simulation evaluation. From each dataset, we randomly select 200 images for evaluation.
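A reproducible draw of 200 images from a dataset's file list might look like the following. This is a hypothetical helper: the paper states only that 200 images were randomly selected per dataset, and the seed and sampling code are assumptions.

```python
import random

def sample_images(image_paths, k=200, seed=0):
    # Reproducibly draw k distinct images from a dataset's file list.
    # Sorting the result makes downstream evaluation order deterministic.
    rng = random.Random(seed)
    return sorted(rng.sample(list(image_paths), k))
```

Fixing the seed lets the same 200-image subset be re-drawn when comparing detectors.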

You can download the randomly sampled images used in our experiments.

Link here -> google drive

Object Detectors

We evaluate PG attacks using four academic object detectors, YOLO V3/V4/V5 and Faster R-CNN, and one commercial object detector, the YOLO 3D model used in Apollo. The backbone networks of the pre-trained YOLO V3/V4/V5 and Faster R-CNN models are Darknet-53 and ResNet-101, respectively. The four academic detectors are all trained on the Common Objects in Context (COCO) dataset, while Apollo's backbone network and training dataset are not disclosed.

The following repos are used in our experiments.

  1. YOLO V3: PyTorch-YOLOv3
  2. YOLO V4: pytorch-YOLOv4
  3. YOLO V5: yolov5
  4. Faster R-CNN: tf-faster-rcnn
  5. Apollo 5.5.0

Bayesian Optimization

To optimize the designed objective functions, we employ Bayesian Optimization, a sequential design strategy for global optimization of black-box functions that does not assume any functional form.
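The loop below is a minimal sketch of that strategy: a Gaussian-process surrogate with an RBF kernel is fit to the points evaluated so far, and the expected-improvement acquisition picks the next query. The toy 1-D objective on [0, 1] is a stand-in for the paper's actual attack objective (which scores blur parameters by detector output); the function names and hyperparameters are assumptions.

```python
import math
import numpy as np

def rbf(a, b, ls=0.15):
    # Squared-exponential kernel on 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def bayes_opt(f, n_iter=15):
    # Minimize a black-box f on [0, 1]: fit a GP to the evaluated points,
    # then query the grid point maximizing expected improvement (EI).
    X = np.array([0.0, 0.5, 1.0])           # initial design
    y = np.array([f(x) for x in X])
    grid = np.linspace(0.0, 1.0, 201)       # candidate points
    for _ in range(n_iter):
        K = rbf(X, X) + 1e-6 * np.eye(len(X))   # jitter for stability
        alpha = np.linalg.solve(K, y)
        ks = rbf(grid, X)                        # (201, n)
        mu = ks @ alpha                          # posterior mean
        v = np.linalg.solve(K, ks.T)
        var = np.clip(1.0 - np.sum(ks * v.T, axis=1), 1e-12, None)
        sd = np.sqrt(var)
        best = y.min()
        z = (best - mu) / sd
        ei = (best - mu) * np.vectorize(norm_cdf)(z) + sd * np.vectorize(norm_pdf)(z)
        x_next = grid[int(np.argmax(ei))]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    i = int(np.argmin(y))
    return X[i], y[i]
```

In the attack setting, `f` would map candidate blur parameters to the detector's confidence (or a related loss), so each evaluation is expensive and sample-efficient search matters.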

Simulation Demos

  • Hiding attacks (HA) cause an object to become undetected.

A car that is originally detected can go undetected under linear motion blur. (a) Without blur, the car is detected with a high confidence score (0.997). (b) Under light linear motion blur, the car is still detected but with a decreased confidence score (0.919). (c-d) As the linear motion blur increases, the car goes undetected.

  • Creating attacks (CA) induce a non-existent object.

Panel (a) shows an image region with no detection. Under different linear motion blurs, the same region is incorrectly detected as a person (b), a boat (c), or a car (d).

  • Altering attacks (AA) cause an object to be misclassified.

The car in (a) can be misclassified as a bus (b), a bottle (c), or a person (d) under different motion blurs.

Real-world Attack Evaluation

In the real-world evaluation, we target a smartphone mounted on a moving vehicle and launch PG attacks against it from inside the vehicle via acoustic signals. Here are some demo videos.

Citation

@INPROCEEDINGS{ji2021poltergeist,
    author = {X. Ji and Y. Cheng and Y. Zhang and K. Wang and C. Yan and W. Xu and K. Fu},
    booktitle = {2021 IEEE Symposium on Security and Privacy (SP)},
    title = {Poltergeist: Acoustic Adversarial Machine Learning against Cameras and Computer Vision},
    year = {2021},
    publisher = {IEEE Computer Society},
    address = {Los Alamitos, CA, USA},
    month = {may}
}

Contact

Powered by

Ubiquitous System Security Laboratory (USSLab)


Zhejiang University


Contributors

forget2save, kamille-hand, jiahui-young, doublewn
