Too Afraid to Drive: Systematic Discovery of Semantic DoS Vulnerability in Autonomous Driving Planning under Physical-World Attacks

[New] Source code of PlanFuzz is released at: https://github.com/ASGuard-UCI/PlanFuzz!

Summary

In high-level Autonomous Driving (AD) systems, behavioral planning is in charge of making high-level driving decisions such as cruising and stopping, and is thus highly security-critical. In this work, we perform the first systematic study of semantic security vulnerabilities specific to overly-conservative AD behavioral planning, i.e., behaviors that can cause failed or significantly degraded mission performance, which can be critical for AD services such as robo-taxi/delivery. We call them semantic Denial-of-Service (DoS) vulnerabilities, and we expect them to be broadly exposed in practical AD systems due to the tendency toward conservativeness in order to avoid safety incidents. To achieve high practicality and realism, we assume that the attacker can only introduce seemingly-benign external physical objects into the driving environment, e.g., cardboard boxes dumped off the road.


To systematically discover such vulnerabilities, we design PlanFuzz, a novel dynamic testing approach that addresses various problem-specific design challenges. Specifically, we propose and identify planning invariants as novel testing oracles, and design new input generation to systematically enforce problem-specific constraints on attacker-introduced physical objects. We also design a novel behavioral planning vulnerability distance metric to effectively guide the discovery. We evaluate PlanFuzz on 3 planning implementations from practical open-source AD systems, and find that it can effectively discover 9 previously-unknown semantic DoS vulnerabilities without false positives. All of our new designs prove necessary: removing any one of them generally leads to statistically significant performance drops. We further perform exploitation case studies using simulation and real-vehicle traces. We discuss root causes and potential fixes.
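To illustrate how these three ideas fit together, here is a minimal, self-contained sketch of a distance-guided search loop in the spirit of the description above. Everything in it is an illustrative assumption, not the released PlanFuzz implementation: the toy planner, the invariant oracle, the distance metric, and all constants are made up for this sketch.

```python
import random

random.seed(0)

LANE_HALF_WIDTH = 1.75   # metres from lane centre to lane edge (assumed)
STOP_THRESHOLD = 2.0     # toy planner stops if any object is closer than this

def planner_stops(offsets):
    # Toy stand-in for an overly conservative behavioral planner: it
    # decides to stop whenever any object's lateral offset from the lane
    # centre is below STOP_THRESHOLD, even if the object is off the lane.
    return any(abs(d) < STOP_THRESHOLD for d in offsets)

def invariant_violated(offsets):
    # Planning-invariant oracle: the vehicle should never stop when every
    # object is fully outside its lane.
    all_off_lane = all(abs(d) >= LANE_HALF_WIDTH for d in offsets)
    return all_off_lane and planner_stops(offsets)

def vulnerability_distance(offsets):
    # Guidance metric: how far the closest object is from triggering the
    # planner's stop decision; 0 means a stop is already triggered.
    return max(0.0, min(abs(d) for d in offsets) - STOP_THRESHOLD)

def fuzz(max_iters=1000):
    offsets = [5.0, 6.0]   # seed input: two boxes placed far off the road
    best = vulnerability_distance(offsets)
    for _ in range(max_iters):
        # Mutate object positions, then project back onto the attack
        # constraint that every object must stay off the lane.
        cand = [max(LANE_HALF_WIDTH, d + random.uniform(-0.5, 0.5))
                for d in offsets]
        if invariant_violated(cand):
            return cand    # off-lane placement that still forces a stop
        dist = vulnerability_distance(cand)
        if dist < best:    # greedy: keep mutations that reduce the distance
            offsets, best = cand, dist
    return None

attack = fuzz()
```

In this sketch the search pushes the (constrained) objects just inside the planner's conservative stop threshold while keeping them off the lane, exposing the invariant violation; the real system applies the same idea to full planner implementations and physically realizable object placements.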


Threat Model and Attack Goal

We assume that the attacker exploits such vulnerabilities by controlling common physical-world road objects. We choose this threat model because it is realistic to launch in practice: the attacker does not need to compromise or tamper with the internals of the victim AD system, and the objects appear benign as long as they follow basic traffic laws and driving norms. The attacker may aim for two possible attack goals: (1) causing an emergency or permanent stop, and (2) causing the victim to give up a mission-critical driving decision, such as a necessary left/right turn or lane change on the route.


Outline of the Attack Demos

Attack Video Demos

Experiment Setup for Below Simulations

Lane Following DoS Attack on Autoware

Benign scenario

The AD vehicle passes off-the-road cardboard boxes in the benign scenario setup.

Attack scenario

The victim AD vehicle permanently stops due to off-the-road cardboard boxes.

Scenario Setup in simulation:

Real-world experiment of Autoware lane following DoS attack

An overview of the autonomous driving vehicle used in real-world experiment.

Snapshot of equipped sensors in our real-world experiment. 

Attack scenario setup. 

Real-world experiment setup:

We manually drive a real AD vehicle to collect a driving trace (including sensor data) under the lane-following DoS attack and study the planning behavior. Due to safety concerns and limitations of the testing facility, we do not enable the autonomous driving functionality.

Benign Scenario

The AD vehicle normally drives inside the lane.

Attack Scenario

The victim AD vehicle makes a stop decision due to the off-lane static objects.

Comparison between benign scenario and attack scenario.

Demo: Possible Rear-end Collision Caused by Lane Following DoS Vulnerability in Autoware 

The AD vehicle makes a sharp stop due to the off-lane static objects (a traffic cone and a cardboard box). A following vehicle whose driver has a reaction delay fails to react in time, leading to a rear-end collision.
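A back-of-envelope stopping-distance check shows why a reaction delay makes this collision plausible. All numbers and the constant-deceleration model below are illustrative assumptions, not measurements from our experiments.

```python
# Simple constant-deceleration kinematics: d = v*t_react + v^2 / (2*a).
# Compare how far the follower travels (reaction + braking) against the
# gap plus the hard-braking AD vehicle's stopping distance.

v = 15.0        # both vehicles' speed, m/s (~54 km/h) -- assumed
gap = 10.0      # initial bumper-to-bumper gap, m -- assumed
a_lead = 8.0    # AD vehicle's hard-braking deceleration, m/s^2 -- assumed
a_follow = 6.0  # follower's braking deceleration, m/s^2 -- assumed
t_react = 1.5   # follower's driver reaction delay, s -- assumed

d_lead = v**2 / (2 * a_lead)                     # lead stops in ~14.1 m
d_follow = v * t_react + v**2 / (2 * a_follow)   # follower needs ~41.3 m
collision = d_follow > gap + d_lead              # True for these numbers
```

Under these assumed numbers the follower needs roughly 17 m more road than is available, so even a moderate reaction delay is enough to cause the rear-end collision shown in the demo.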


Such roadside objects are very common in the real world; we show some similar real-world scenario setups below.

Intersection Passing DoS Attack on Apollo 

Benign scenario*

The AD vehicle smoothly proceeds and passes the intersection with stop signs.

Attack scenario

The victim AD vehicle permanently stops due to two roadside parked bicycles, even though the intersection is completely empty.

*Slow-down in the middle of the video: to show the traffic situation at the intersection, we slow the video to 0.3x speed in the middle (from 8s to 12s) to provide a panoramic view. The AD vehicle actually spends less time waiting in front of the stop line than the video suggests.

Scenario Setup:

Lane Changing DoS Attack on Apollo

Benign scenario

The AD vehicle successfully changes lanes while another vehicle is following it.

Attack scenario

In the same scenario, the victim AD vehicle gives up a necessary lane change even though the target lane is empty and the attack vehicle following it shows no intention of changing to that lane.

Scenario Setup:

Research Paper

[NDSS'22] Too Afraid to Drive: Systematic Discovery of Semantic DoS Vulnerability in Autonomous Driving Planning under Physical-World Attacks 

Ziwen Wan, Junjie Shen, Jalen Chuang, Xin Xia, Joshua Garcia, Jiaqi Ma, Qi Alfred Chen

Appeared in the Network and Distributed System Security (NDSS) Symposium, 2022 (Acceptance rate: 14.06% = 53/377 for the fall review cycle)

[PDF] [Slides] [Video Demos] [Code Release]

BibTex for citation:

@inproceedings{ndss:2022:ziwen:planfuzz,
  title={{Too Afraid to Drive: Systematic Discovery of Semantic DoS Vulnerability in Autonomous Driving Planning under Physical-World Attacks}},
  author={Ziwen Wan and Junjie Shen and Jalen Chuang and Xin Xia and Joshua Garcia and Jiaqi Ma and Qi Alfred Chen},
  booktitle={Network and Distributed System Security (NDSS) Symposium, 2022},
  year={2022},
  month={April}
}

Team

Ziwen Wan, Ph.D. student, CS, University of California, Irvine

Junjie Shen, Ph.D. student, CS, University of California, Irvine

Jalen Chuang, Undergraduate student, CS, University of California, Irvine

Xin Xia, Postdoctoral researcher, Civil and Environmental Engineering, University of California, Los Angeles

Joshua Garcia, Assistant Professor, Informatics, University of California, Irvine

Jiaqi Ma, Associate Professor, Civil and Environmental Engineering, University of California, Los Angeles

Qi Alfred Chen, Assistant Professor, CS, University of California, Irvine

Acknowledgments