Adversarial Sensor Attack on LiDAR-based AV Perception

Summary

In Autonomous Vehicles (AVs), one fundamental pillar is perception, which leverages sensors like cameras and LiDARs (Light Detection and Ranging) to understand the driving environment. Due to its direct impact on road safety, multiple prior efforts have studied the security of perception systems. In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but largely unexplored.

Target: LiDAR-based Perception in Production-Grade AV

To perform the study, we target the LiDAR-based perception implementation in Baidu Apollo, an open-source AV system that has over 100 partners and has reached mass-production agreements with multiple partners such as Volvo and Ford. Like the majority of state-of-the-art LiDAR-based AV perception techniques, Baidu Apollo’s LiDAR-based perception pipeline leverages machine learning for object detection.

Attack Model

We consider the LiDAR spoofing attack, i.e., injecting spoofed LiDAR data points by shooting lasers, as our threat model, since its feasibility has been demonstrated in previous work. Under this threat model, we set the attack goal as adding spoofed obstacles at close distances in front of a victim AV (or front-near obstacles) in order to alter its driving decisions.

Novel Security Analysis Methodology: Adv-LiDAR

Limitation of blind sensor spoofing. In our study, we first reproduce the LiDAR spoofing attack from the work by Shin et al. and try to use it to exploit Baidu Apollo’s LiDAR-based perception pipeline. We find that the current spoofing technique can only cover a very narrow spoofing angle, i.e., 8° horizontally in our experiments, which is not enough to generate the point cloud of a road obstacle near the front of a vehicle. Thus, blindly applying existing spoofing techniques cannot easily succeed. A rough geometric illustration of this limitation is sketched below.
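For intuition only, the following back-of-the-envelope calculation (ours, not from the paper; the 5 m front-near distance and the function name are illustrative assumptions) estimates the horizontal width that an 8° spoofing cone can cover at a given distance:

import math

def spoofable_width(distance_m, angle_deg=8.0):
    # Horizontal width covered by a spoofing cone of the given angular
    # extent at the given distance from the LiDAR (simple geometry).
    return 2.0 * distance_m * math.tan(math.radians(angle_deg / 2.0))

# At a hypothetical front-near distance of 5 m, an 8-degree cone spans
# only about 0.7 m, far narrower than a typical road obstacle.
print(round(spoofable_width(5.0), 2))  # -> 0.7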

Improved attack methodology: Adv-LiDAR. To achieve the attack goal with existing spoofing techniques, we explore the possibility of strategically controlling the spoofed points to fool the machine learning model in the object detection step. While it is known that machine learning output can be maliciously altered by carefully-crafted perturbations to the input, no prior work has studied LiDAR-based object detection models for AV systems. To approach this problem, we formulate the attack task as an optimization problem, an approach that has been shown to be effective in previous machine learning security studies. Specific to our study, we need to formulate a new input perturbation function that models the LiDAR spoofing capability of changing the machine learning model input. Since previous work did not perform detailed measurements for the purpose of such modeling, we experimentally explore the capability of controlling the spoofed data points, e.g., the number of points and their positions. Next, we design a set of global spatial transformation functions to model these observed attack capabilities at the model input level; a simplified sketch follows below. In this step, both the quantified attack capabilities and the modeling methodology are useful for future security studies of LiDAR-related machine learning models.
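The sketch below (Python/NumPy) illustrates the general idea of a global spatial transformation applied to spoofed points before merging them into the model input. It is a minimal simplification we wrote for illustration: the function names, the choice of rotation-plus-radial-shift parameters (theta, tau), and the point-cloud layout are assumptions, not Apollo's actual code or the exact transformation set used in the paper.

import numpy as np

def transform_spoofed_points(points, theta, tau):
    # Hypothetical global spatial transformation: rotate the spoofed points by
    # `theta` around the LiDAR's vertical axis and push them `tau` meters along
    # their horizontal ray directions, mimicking how an attacker can steer
    # where the injected returns appear.
    # points: (N, 3) array of spoofed (x, y, z) coordinates.
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    moved = points @ rot.T
    dirs = moved[:, :2] / np.linalg.norm(moved[:, :2], axis=1, keepdims=True)
    moved[:, :2] += tau * dirs
    return moved

def perturbed_model_input(benign_points, spoofed_points, theta, tau):
    # The perturbed point cloud fed to the object-detection model: benign
    # returns merged with the transformed spoofed returns.
    return np.vstack([benign_points,
                      transform_spoofed_points(spoofed_points, theta, tau)])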

Improving optimization effectiveness with sampling. With the optimization problem mathematically formulated, we start by directly solving it using optimization algorithms, as in previous studies. However, we find that the average success rate of adding front-near obstacles is only 30%. We find that this is caused by the nature of the problem, which makes it easy for any optimization algorithm to get trapped in local extrema. To solve this problem, we design an algorithm that combines global sampling with optimization, which successfully increases the average success rate to around 75%. A schematic of this two-stage approach is sketched below.
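The sketch below shows the general pattern of combining a global sampling stage with a local optimization stage. It is schematic only: the adversarial loss, the (theta, tau) parameter ranges, and the finite-difference gradient descent are simplifications we introduce for illustration, not the concrete algorithm or hyperparameters from the paper.

import numpy as np

def sample_then_optimize(adv_loss, n_samples=50, n_steps=100, lr=0.05, seed=0):
    # Stage 1 (global sampling): uniformly sample transformation parameters
    # (theta, tau) and keep the candidate with the lowest adversarial loss,
    # so the second stage does not start inside a poor local extremum.
    rng = np.random.default_rng(seed)
    thetas = rng.uniform(-np.pi / 18, np.pi / 18, n_samples)  # +/- 10 degrees (assumed range)
    taus = rng.uniform(-2.0, 2.0, n_samples)                  # meters (assumed range)
    params = min(zip(thetas, taus), key=lambda p: adv_loss(np.array(p)))
    params = np.array(params, dtype=float)

    # Stage 2 (local optimization): refine the best sample with simple
    # gradient descent, using finite differences to approximate the gradient.
    eps = 1e-3
    for _ in range(n_steps):
        grad = np.zeros(2)
        for i in range(2):
            step = np.zeros(2)
            step[i] = eps
            grad[i] = (adv_loss(params + step) - adv_loss(params - step)) / (2 * eps)
        params = params - lr * grad
    return params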

Case Study at Driving Decision Level

As a case study for understanding the impact of the discovered attack input at the AV driving decision level, we construct two attack scenarios: (1) the emergency brake attack, which may force a moving AV to suddenly brake and thus injure the passengers or cause rear-end collisions, and (2) the AV freezing attack, which may cause an AV waiting at a red light to be permanently “frozen” in the intersection and block traffic. Using real-world AV driving data traces released by the Baidu Apollo team, both attacks successfully trigger the attacker-desired driving decisions in Apollo’s simulator.

Attack Demo

In this short video demo, we show the two end-to-end attack scenarios we construct based on our Adv-LiDAR attack: the emergency brake attack and the AV freezing attack.

Experiment configurations in the demo:

  • System version: Baidu Apollo 3.0
  • Simulation software: SimControl (simulation software provided by Baidu Apollo)
  • Sensor trace: Real-world LiDAR sensor data trace released by the Baidu Apollo team, collected for 30 seconds on local roads in Sunnyvale, CA, using a Velodyne HDL-64E S3.

Research Paper

[CCS'19] Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, and Z. Morley Mao

To appear in the 26th ACM Conference on Computer and Communications Security (CCS'19), London, UK, Nov. 2019. (Acceptance rate in the February cycle: 14.2% = 32/225.)

BibTeX for citation:

@inproceedings{ccs:2019:yulong:adv-lidar,
  title={{Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving}},
  author={Yulong Cao and Chaowei Xiao and Benjamin Cyr and Yimeng Zhou and Won Park and Sara Rampazzi and Qi Alfred Chen and Kevin Fu and Zhuoqing Morley Mao},
  booktitle={Proceedings of the 26th ACM Conference on Computer and Communications Security (CCS'19)},
  year={2019},
  month = {November},
  address = {London, UK}
}


Team

Yulong Cao, Ph.D. student, EECS, University of Michigan

Chaowei Xiao, Ph.D. student, EECS, University of Michigan

Benjamin Cyr, Ph.D. student, EECS, University of Michigan

Yimeng Zhou, Undergraduate student, EECS, University of Michigan

Won Park, Ph.D. student, EECS, University of Michigan

Sara Rampazzi, Research Investigator, EECS, University of Michigan

Qi Alfred Chen, Assistant Professor, CS, University of California, Irvine

Kevin Fu, Professor, EECS, University of Michigan

Z. Morley Mao, Professor, EECS, University of Michigan


Acknowledgements