Adv-LiDAR

In Autonomous Vehicles (AVs), one fundamental pillar is perception, which leverages sensors like cameras and LiDARs (Light Detection and Ranging) to understand the driving environment. Due to its direct impact on road safety, multiple prior efforts have been made to study the security of perception systems. In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored. We consider LiDAR spoofing attacks as the threat model and set the attack goal as spoofing obstacles close to the front of a victim AV. We find that blindly applying LiDAR spoofing is insufficient to achieve this goal due to the machine learning-based object detection process. Thus, we then explore the possibility of strategically controlling the spoofed attack to fool the machine learning model. We formulate this task as an optimization problem and design modeling methods for the input perturbation function and the objective function. We also identify the inherent limitations of directly solving the problem using optimization alone, and design an algorithm that combines optimization with global sampling, which improves the attack success rates to around 75%. As a case study to understand the attack impact at the AV driving decision level, we construct and evaluate two attack scenarios that may damage road safety and mobility. We also discuss defense directions at the AV system, sensor, and machine learning model levels.
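The algorithmic idea behind our final attack generation, local optimization combined with global sampling, can be sketched on a toy problem. The objective and parameter values below are illustrative stand-ins, not the paper's actual perturbation or objective function: the idea is simply that when gradient-based optimization gets stuck in local minima, sampling many random starting points and keeping the best local result substantially improves the solution found.

```python
import math
import random

def objective(x):
    # Toy non-convex surrogate (NOT the paper's actual loss):
    # many local minima, with the global minimum at x = 0.
    return x * x + 3.0 * math.sin(3.0 * x) ** 2

def local_descent(x, lr=0.01, steps=500, eps=1e-6):
    # Plain gradient descent from a single start point, using a
    # central-difference numerical gradient.
    for _ in range(steps):
        grad = (objective(x + eps) - objective(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

def optimize_with_global_sampling(n_samples=20, seed=0):
    # Global sampling: draw many random starting points, run local
    # optimization from each, and keep the best result found.
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_samples):
        x0 = rng.uniform(-10.0, 10.0)
        x = local_descent(x0)
        val = objective(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

best_x, best_val = optimize_with_global_sampling()
```

A single descent run from an unlucky start point settles in whichever local minimum is nearest; restarting from many sampled points lets at least one run land in the global basin, mirroring why combining optimization with global sampling raises the attack success rate in our setting.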

Attack Demo

In this short video demo, we show the two end-to-end attack scenarios we construct based on our adv-LiDAR attack: emergency brake attack and AV freezing attack.

Experiment setup in the demo:

  • System version: Baidu Apollo 3.0
  • Simulation software: Baidu Apollo SimControl
  • Sensor trace: Real-world LiDAR sensor data trace released by the Baidu Apollo team, collected over 30 seconds on local roads in Sunnyvale, CA, using a Velodyne HDL-64E S3.

Research Paper

[CCS'19] Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving

Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, and Z. Morley Mao

To appear in the 26th ACM Conference on Computer and Communications Security (CCS'19), London, UK, Nov. 2019. (Feb cycle acceptance rate: 14.2% = 32/225)

BibTex for citation:

@inproceedings{ccs:2019:yulong:adv-lidar,
  title={{Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving}},
  author={Yulong Cao and Chaowei Xiao and Benjamin Cyr and Yimeng Zhou and Won Park and Sara Rampazzi and Qi Alfred Chen and Kevin Fu and Zhuoqing Morley Mao},
  booktitle={Proceedings of the 26th ACM Conference on Computer and Communications Security (CCS'19)},
  year={2019},
  month = {November},
  address = {London, UK}
}


Team

Yulong Cao, Ph.D. student, EECS, University of Michigan

Chaowei Xiao, Ph.D. student, EECS, University of Michigan

Benjamin Cyr, Ph.D. student, EECS, University of Michigan

Yimeng Zhou, Undergraduate student, EECS, University of Michigan

Won Park, Ph.D. student, EECS, University of Michigan

Sara Rampazzi, Postdoc, EECS, University of Michigan

Qi Alfred Chen, Assistant Professor, CS, University of California, Irvine

Kevin Fu, Professor, EECS, University of Michigan

Z. Morley Mao, Professor, EECS, University of Michigan

Read more on CAV (Connected and Autonomous Vehicle) systems security.