Too Afraid to Drive: Systematic Discovery of Semantic DoS Vulnerability in Autonomous Driving Planning under Physical-World Attacks
Summary
In high-level Autonomous Driving (AD) systems, behavioral planning is in charge of making high-level driving decisions such as cruising and stopping, and is thus highly security-critical. In this work, we perform the first systematic study of semantic security vulnerabilities specific to overly-conservative AD behavioral planning behaviors, i.e., those that can cause failed or significantly-degraded mission performance, which can be critical for AD services such as robo-taxi/delivery. We call them semantic Denial-of-Service (DoS) vulnerabilities, which we envision to be most generally exposed in practical AD systems due to their tendency toward conservativeness to avoid safety incidents. To achieve high practicality and realism, we assume that the attacker can only introduce seemingly-benign external physical objects into the driving environment, e.g., off-road dumped cardboard boxes.
To systematically discover such vulnerabilities, we design PlanFuzz, a novel dynamic testing approach that addresses various problem-specific design challenges. Specifically, we propose and identify planning invariants as novel testing oracles, and design new input generation to systematically enforce problem-specific constraints for attacker-introduced physical objects. We also design a novel behavioral planning vulnerability distance metric to effectively guide the discovery. We evaluate PlanFuzz on 3 planning implementations from practical open-source AD systems, and find that it can effectively discover 9 previously-unknown semantic DoS vulnerabilities without false positives. We find all our new designs necessary, as without each design, statistically significant performance drops are generally observed. We further perform exploitation case studies using simulation and real-vehicle traces. We discuss root causes and potential fixes.
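At a high level, the guided search described above can be sketched as follows. This is an illustrative sketch only, with hypothetical names (`planner`, `perturb`, `pv_distance` are placeholders, not the actual PlanFuzz API); the real implementation, oracles, and metric are defined in the paper.

```python
def plan_fuzz(planner, seed_scene, perturb, pv_distance, budget=1000):
    """Illustrative sketch of a PlanFuzz-style guided search loop.

    planner      -- runs behavioral planning on a scene, returns a decision
    seed_scene   -- a benign driving scenario
    perturb      -- mutates attacker-introduced physical objects under
                    problem-specific constraints (e.g., boxes stay off-road)
    pv_distance  -- vulnerability distance metric: a value <= 0 means a
                    planning invariant is violated (semantic DoS found)
    """
    best = seed_scene
    best_d = pv_distance(planner(best), best)
    for _ in range(budget):
        cand = perturb(best)                    # constrained input generation
        d = pv_distance(planner(cand), cand)
        if d <= 0:                              # testing oracle triggered,
            return cand                         # e.g., stop with no on-road obstacle
        if d < best_d:                          # keep candidates closer to a violation
            best, best_d = cand, d
    return None                                 # budget exhausted, no DoS found
```

The key design points from the paper map onto this loop: planning invariants serve as the oracle (`d <= 0`), constrained input generation keeps the objects physically realizable, and the distance metric guides the search toward violations.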
Threat Model and Attack Goal
We assume that the attacker can exploit such vulnerabilities by controlling common physical-world road objects. We choose this threat model because it is highly realistic in practice: the attacker does not need to compromise or tamper with the internals of the victim AD system, and the introduced objects can appear benign as long as they follow basic traffic laws and driving norms. The attacker may aim for two possible attack goals: (1) causing an emergency/permanent stop, and (2) causing the victim to give up a mission-critical driving decision, such as a necessary left/right turn or lane change on the route.
Outline of the Attack Demos
We have demonstrated three different semantic DoS vulnerabilities in simulation: (1) a lane following DoS attack on Autoware, (2) a lane changing DoS attack on Apollo, and (3) an intersection passing DoS attack on Apollo.
We have also demonstrated the lane following DoS attack with a driving trace collected from a real vehicle.
We have also demonstrated, in simulation, a possible rear-end collision caused by the lane following DoS vulnerability.
Attack Video Demos
Experiment Setup for Below Simulations
Attack Target: Baidu Apollo 5.0 & Autoware.AI 1.14.0 (Open Planner)
Apollo is a high-performance, flexible architecture that accelerates the development, testing, and deployment of Autonomous Vehicles. It provides public robotaxi service [link1, link2] in China and has been named one of the four leading developers of AD systems by Navigant Research [link3].
We used Apollo 5.0 for the case studies below as it is the latest repository [lgsvl/apollo-5.0] modified and configured by the LGSVL team for compatibility, and thus more stable in general. The latest version, 6.0, makes only very minor changes (e.g., parameter settings) to 5.0 for the main planning scenarios we tested (change logs: stop sign, lane changing). We have confirmed that all vulnerabilities we discovered in 5.0 also exist in 6.0. In the simulation case studies below, we have also updated the planning parameters to those in version 6.0 [Change 1].
Autoware is the world's first "all-in-one" open-source software for autonomous driving technology. Its capabilities are primarily well-suited for urban cities, but highways, freeways, and geofenced areas can also be covered.
We used OpenPlanner from the latest version of Autoware.AI (1.14.0).
Simulator: LGSVL Simulator 20.06
LGSVL is an open-source industry-grade Unity-based simulator designed specifically for evaluating production-level Autonomous Driving (AD) systems.
It leverages Unity's built-in physics engine to accurately simulate vehicle dynamics and tire-road interaction, and provides photo-realistic simulation of the driving environment.
Simulation Scenarios & Attack Goals:
Lane following (Autoware): Make the victim AD vehicle permanently stop by placing off-the-road cardboard boxes.
Intersection passing (Apollo): Force the victim AD vehicle to permanently stop in front of an intersection guarded by stop signs due to two roadside parked bicycles.
Lane changing (Apollo): Force the victim AD vehicle to give up the lane changing decision so that it cannot follow its route.
Lane Following DoS Attack on Autoware
Benign scenario
The AD vehicle passes off-the-road cardboard boxes in the benign scenario setup.
Attack scenario
The victim AD vehicle permanently stops due to off-the-road cardboard boxes.
Scenario Setup in simulation:
Map: single lane road with lane width 2.7m
Benign scenario setup:
Size of left-side box: 0.5m * 0.5m * 0.5m
Size of right-side box: 1.0m * 1.0m * 1.0m
The minimal lateral distance between the boundaries of the two boxes: 4.45m
The minimal longitudinal distance between the boundaries of the two boxes: 7.25m
Attack scenario setup:
Size of left-side box: 0.5m * 0.5m * 0.5m
Size of right-side box: 1.0m * 1.0m * 1.0m
The minimal lateral distance between the boundaries of the two boxes: 4.35m
The minimal longitudinal distance between the boundaries of the two boxes: 7.25m
Forward speed before meeting the boxes: 3m/s
Real-world experiment of Autoware lane following DoS attack
An overview of the autonomous driving vehicle used in the real-world experiment.
Snapshot of the equipped sensors in our real-world experiment.
Attack scenario setup.
Real-world experiment setup:
We manually drive a real AD vehicle to collect a driving trace (including sensor data) under the lane following DoS attack and study the planning behavior. Due to safety concerns and limitations of the testing facility, we do not enable the autonomous driving functionality.
Real vehicle information:
Attack scenario setup:
Single lane scenario: we manually mark the lane with white tape. The lane width is 3.5m.
Static objects:
Left side: cardboard box (rough size 0.6m * 0.6m * 0.4m)
Right side: trash can (rough size 0.4m * 0.4m * 1m)
Position: both objects are off the lane (no intersection with the lane marking)
Autoware setup:
Due to safety concerns and limitations of the testing facility, we could only drive the car for a limited distance (less than 10m) at a low speed (less than 1m/s).
Benign Scenario
The AD vehicle normally drives inside the lane.
Attack Scenario
The victim AD vehicle makes a stop decision due to the off-the-lane static objects.
Comparison between benign scenario and attack scenario.
Demo: Possible Rear-end Collision Caused by Lane Following DoS Vulnerability in Autoware
Simulation scenario:
The ego vehicle is driving on the highway and is going to leave the highway through a one-way exit.
Another vehicle is following the ego vehicle with the same velocity.
The attacker maliciously places two objects (a traffic cone and a cardboard box) off the lane.
Goal of the demo:
We want to use this demo to demonstrate the possibility of potential rear-end collision caused by the lane following DoS vulnerability in certain driving conditions.
The AD vehicle makes a sharp stop due to the off-lane static objects (traffic cone and cardboard box). A following vehicle, with a possible driver reaction delay, fails to react in time, leading to a rear-end collision.
Such roadside objects are very common in the real world. We show some similar real-world scenario setups below.
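Whether the follower can stop in time reduces to basic stopping-distance kinematics. The back-of-the-envelope check below uses assumed parameter values (speed, reaction delay, deceleration rates), not measurements from the demo:

```python
def rear_end_gap_needed(v, reaction_time, a_follow, a_lead):
    """Minimum initial gap (m) for the follower to avoid a rear-end collision
    when the lead vehicle brakes sharply and both start at speed v (m/s).

    During the follower's reaction delay it covers v * reaction_time at full
    speed; then both vehicles brake, each covering v**2 / (2*a). The follower
    avoids a collision only if the initial gap exceeds the difference in the
    distances the two vehicles cover.
    """
    d_follow = v * reaction_time + v**2 / (2 * a_follow)
    d_lead = v**2 / (2 * a_lead)
    return d_follow - d_lead

# Example with assumed values: 20 m/s (~72 km/h), 1.5 s driver reaction delay,
# follower brakes at 6 m/s^2 while the lead AD vehicle brakes harder at 8 m/s^2:
# needed gap = 20*1.5 + 400/12 - 400/16 = 30 + 33.33 - 25 ≈ 38.3 m.
```

Under these assumed numbers, any following gap shorter than roughly 38 m would end in a rear-end collision once the DoS-triggered sharp stop occurs, which is well within typical highway following distances.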
Intersection Passing DoS Attack on Apollo
Benign scenario*
The AD vehicle smoothly proceeds and passes the intersection with stop signs.
Attack scenario
The victim AD vehicle permanently stops due to two roadside parked bicycles, even though the intersection is completely empty.
*Slow-down in the middle of the video: to show the traffic situation at the intersection, we slow the video down (0.3x speed) in the middle (from 8s to 12s) to provide a panoramic view. The AD vehicle actually spends less time waiting in front of the stop line than the video suggests.
Scenario Setup:
Map: San Francisco provided by LGSVL [Link].
The scenario is set up around a 4-way stop sign intersection.
Benign scenario setup:
No changes to the map environment
Attack scenario setup:
Two parked bicycles are placed around the intersection.
Size of each parked bicycle (bounding box): 1.803m * 0.386m * 0.949m
Position of each parked bicycle: The center of each bicycle is 5m away from the closest lane centerline.
Lane Changing DoS Attack on Apollo
Benign scenario
The AD vehicle successfully changes lanes in the scenario while another vehicle is following it.
Attack scenario
In the same scenario, the victim AD vehicle gives up a necessary lane changing decision even though the lane it needs to change to is empty and the attack vehicle following it shows no intention to change to that lane.
Scenario Setup:
Map: San Francisco provided by LGSVL [Link].
The scenario is set up around an intersection controlled by traffic signals. Lane width is 3.5m.
Benign scenario setup:
The following vehicle follows the AD vehicle's trajectory with a slight right deviation (0.2m).
Attack scenario setup:
The following vehicle follows the AD vehicle's trajectory with a slight left deviation (0.6m).
Routing setup:
We set up the destination such that the AD vehicle has to turn left at the intersection ahead. The routing setups for the benign and attack scenarios are exactly the same.
Following vehicle controlling:
We implement a script to control the following vehicle via the Python APIs provided by LGSVL [example]. We predefine a list of waypoints for the attack (following) vehicle and ensure that the distance between the two vehicles stays within the range required for the lane changing DoS attack.
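As one illustration (not our actual control script), the laterally-deviated waypoints can be derived from the ego trajectory by offsetting each point perpendicular to the local heading; in an LGSVL script, the resulting (x, y) points would then be wrapped into `lgsvl.DriveWaypoint` objects and fed to the NPC agent's `follow` call.

```python
import math

def offset_waypoints(ego_traj, lateral_offset):
    """Offset each (x, y) point of the ego trajectory perpendicular to its
    local heading: positive offset = left of the travel direction, negative
    = right. Illustrative helper, not the actual attack script.
    """
    out = []
    for i, (x, y) in enumerate(ego_traj):
        # Heading from this point toward the next (reuse the last segment
        # for the final point).
        j = min(i + 1, len(ego_traj) - 1)
        k = j - 1
        hx = ego_traj[j][0] - ego_traj[k][0]
        hy = ego_traj[j][1] - ego_traj[k][1]
        norm = math.hypot(hx, hy) or 1.0
        # Left normal of the unit heading vector.
        nx, ny = -hy / norm, hx / norm
        out.append((x + lateral_offset * nx, y + lateral_offset * ny))
    return out

# For the attack scenario (0.6m left deviation): offset_waypoints(ego, 0.6);
# for the benign scenario (0.2m right deviation): offset_waypoints(ego, -0.2).
```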
Research Paper
[NDSS'22] Too Afraid to Drive: Systematic Discovery of Semantic DoS Vulnerability in Autonomous Driving Planning under Physical-World Attacks
Ziwen Wan, Junjie Shen, Jalen Chuang, Xin Xia, Joshua Garcia, Jiaqi Ma, Qi Alfred Chen
Appeared in the Network and Distributed System Security (NDSS) Symposium, 2022 (Acceptance rate: 14.06% = 53/377 for the fall review cycle)
[PDF] [Slides] [Video Demos] [Code Release]
BibTex for citation:
@inproceedings{ndss:2022:ziwen:planfuzz,
  title={{Too Afraid to Drive: Systematic Discovery of Semantic DoS Vulnerability in Autonomous Driving Planning under Physical-World Attacks}},
  author={Ziwen Wan and Junjie Shen and Jalen Chuang and Xin Xia and Joshua Garcia and Jiaqi Ma and Qi Alfred Chen},
  booktitle={Network and Distributed System Security (NDSS) Symposium},
  year={2022},
  month={April}
}
News Coverage
Armed with Traffic Cones, Protesters are Immobilizing Driverless Cars, National Public Radio (NPR), August 26, 2023.
Research reveals Autonomous Vehicles can be Tricked to Drive dangerously, Analytics Drift, June 03, 2022.
Autonomous Vehicles Can Be Tricked into Dangerous Driving Behavior, ACM Tech News, June 01, 2022.
UCI Researchers: Autonomous Vehicles Can Be Tricked Into Dangerous Driving Behavior, Cleantechnica, May 29, 2022.
Roadside Objects Can Trick Driverless Cars, Futurity, May 27, 2022.
Autonomous Vehicles Can Be Tricked into Dangerous Driving Behavior, University of California News, May 26, 2022.
New Research Helps Autonomous Vehicles Navigate Intersections More Safely and Effectively, IEEE Innovation.
Team
Ziwen Wan, Ph.D. student, CS, University of California, Irvine
Junjie Shen, Ph.D. student, CS, University of California, Irvine
Jalen Chuang, Undergraduate student, CS, University of California, Irvine
Xin Xia, Postdoctoral researcher, Civil and Environmental Engineering, University of California, Los Angeles
Joshua Garcia, Assistant Professor, Informatics, University of California, Irvine
Jiaqi Ma, Associate Professor, Civil and Environmental Engineering, University of California, Los Angeles
Qi Alfred Chen, Assistant Professor, CS, University of California, Irvine