Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack

Summary

In autonomous driving (AD), accurate perception is indispensable for safe and secure driving. Due to this safety criticality, the security of AD perception has been widely studied. Among the attacks on AD perception, physical adversarial object evasion attacks are especially severe. However, we find that all existing works evaluate their attack effects only at the targeted AI component level, not at the system level, i.e., with the entire system semantics and context such as the full AD pipeline. This raises a critical research question: can these existing attack designs actually achieve system-level attack effects (e.g., traffic rule violations) in the real-world AD context? In this work, we conduct the first measurement study on whether and how effectively the existing designs can lead to system-level effects, focusing on STOP sign-evasion attacks due to their popularity and severity. Our evaluation results show that none of the representative prior works can achieve any system-level effect. We observe two design limitations in the prior works: 1) a physical-model-inconsistent object size distribution in pixel sampling, and 2) a lack of vehicle plant model and AD system model consideration. We then propose SysAdv, a novel system-driven attack design in the AD context, and our evaluation results show that the system-level effects can be significantly improved, i.e., the violation rate increases by around 70%.

System Model in AD Context

System Model for AD AI Adversarial Attacks

To understand the end-to-end, system-level impacts of an adversarial attack against a targeted AI component in an AD system (e.g., whether it can indeed effectively cause undesired AD system-level property violations), we need to systematically consider and integrate the overall system semantics and context that enclose such an AI component into the security analysis. In this paper, we call a systematic abstraction of such system semantics and context the system model of such AD AI adversarial attacks. Specifically, in the AD context we identify three essential sub-components in such a system model: 1) the AD system model, i.e., the full-stack AD system pipeline that encloses the attack-targeted AI component and the closed-loop control, e.g., the object tracking, planning, and control pipeline for the object detection AI component; 2) the vehicle plant model, which defines the physical properties of the underlying vehicle system under control, e.g., maximum/minimum acceleration/deceleration, steering rates, sensor mounting positions, etc.; and 3) the attack-targeted operation scenario model, which defines the physical driving environment setup, driving norms (e.g., traffic rules), and the system-level attack goal (e.g., vehicle collision, traffic rule violation) targeted by the AD AI adversarial attack.
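To make this concrete, the short Python sketch below encodes the three sub-components as plain data structures. The class names, field names, and default values are illustrative assumptions for exposition only, not an interface from the paper or from any production AD stack.

# Illustrative only: one possible encoding of the three system-model sub-components.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ADSystemModel:
    # Full-stack pipeline enclosing the attack-targeted AI component.
    perception: str = "object detection"
    downstream: Tuple[str, ...] = ("object tracking", "planning", "control")
    closed_loop_control: bool = True

@dataclass
class VehiclePlantModel:
    # Physical properties of the vehicle system under control.
    max_decel_mps2: float = 4.0         # maximum braking deceleration (assumed)
    max_steer_rate_dps: float = 30.0    # maximum steering rate (assumed)
    camera_mount_height_m: float = 1.5  # example sensor mounting position

@dataclass
class OperationScenarioModel:
    # Driving environment setup, driving norms, and the system-level attack goal.
    speed_limit_mps: float = 11.2       # ~25 mph (assumed)
    lane_width_m: float = 3.5
    driving_norm: str = "stop before the stop line"
    attack_goal: str = "traffic rule violation (stop-line violation)"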

System Model for Adversarial Object-Evasion Attacks

The figure above illustrates this system model instantiated for the adversarial object-evasion attack. The AD system model for object detection (the AI component targeted by adversarial object-evasion attacks) mainly includes its downstream tasks of object tracking, planning, and control, together with the closed-loop control. The vehicle plant model mainly includes the physical properties related to longitudinal control, e.g., the minimum braking distance d_{min}, and the distance d_{oos} at which the stop line (before which the vehicle must stop to avoid traffic rule violations or crashes) falls out of sight in the camera image, which depends on the hood length and the camera mounting position. The operation scenario model includes the speed limit, the lane width, the relative position and facing of the object with respect to the ego lane, the driving norm that the vehicle typically drives at a constant speed before it starts to see the object (at distance d_{max}), and the system-level attack goal of triggering a traffic rule violation (i.e., hitting the object or crossing the stop line). Several example attacks fit this system model, such as the STOP sign-evasion attack, which is the most extensively studied physical adversarial object-evasion attack in the AD context, and the pedestrian-evasion attack.
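For the STOP sign-evasion instance, the longitudinal reasoning above can be sketched as a small calculation: the attack causes a stop-line violation only if the sign stays undetected from d_{max} down to (roughly) the minimum braking distance d_{min}. The Python sketch below is a minimal illustration under a constant-speed approach at the speed limit and a constant-deceleration braking model; the function names, the optional reaction-time term, and the example numbers are our assumptions, not values from the paper.

# Minimal sketch (not the paper's code): which detection distances still allow
# the AD system to stop legally, i.e., the range the adversarial patch must cover.

def min_brake_distance(speed_mps: float, max_decel_mps2: float,
                       reaction_time_s: float = 0.0) -> float:
    # Distance needed to come to a full stop from speed_mps (d_min).
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2.0 * max_decel_mps2)

def attack_critical_range(speed_mps: float, max_decel_mps2: float,
                          d_max: float, d_oos: float) -> tuple:
    # Returns (d_low, d_high): if the STOP sign is (re-)detected anywhere in this
    # range of distances to the stop line, braking can still prevent the violation,
    # so the attack must suppress detection over the whole range. Below d_low the
    # violation can no longer be avoided; above d_high (= d_max) the sign is not
    # yet visible; d_oos marks where the stop line itself leaves the camera view.
    d_min = min_brake_distance(speed_mps, max_decel_mps2)
    return (max(d_min, d_oos), d_max)

# Example with assumed numbers: 25 mph (~11.2 m/s), 4 m/s^2 braking,
# sign first visible at 30 m, stop line out of camera view at 2 m.
lo, hi = attack_critical_range(11.2, 4.0, d_max=30.0, d_oos=2.0)
print("Sign must remain undetected from %.1f m down to %.1f m" % (hi, lo))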

System-Level Effect of Prior Works

SysAdv Attack Evaluation

Research Paper

[ICCV 2023] Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack

Ningfei Wang, Yunpeng Luo, Takami Sato, Kaidi Xu, Qi Alfred Chen

To appear in the IEEE/CVF International Conference on Computer Vision (ICCV 2023); acceptance rate: 26.15% (2160/8260)

[PDF] [arXiv] [Code (coming soon)]

BibTex for citation:

@InProceedings{Wang_2023_ICCV,
    author    = {Wang, Ningfei and Luo, Yunpeng and Sato, Takami and Xu, Kaidi and Chen, Qi Alfred},
    title     = {{Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack}},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {4412-4423}
}


Team

Ningfei Wang, Ph.D. student, CS, University of California, Irvine

Yunpeng Luo, Ph.D. student, CS, University of California, Irvine

Takami Sato, Ph.D. student, CS, University of California, Irvine

Kaidi Xu, Assistant Professor, CS, Drexel University

Qi Alfred Chen, Assistant Professor, CS, University of California, Irvine


Acknowledgments