FusionRipper: First Attack on MSF-based AV Localization

Summary

Today, various companies are developing high-level self-driving cars, e.g., Level-4 Autonomous Vehicles (AVs), and some of them already provide services on public roads, such as the self-driving taxi services from Google’s Waymo One and Baidu Apollo Go. To enable such high-level driving automation, the Autonomous Driving (AD) system in an AV needs to not only perceive surrounding obstacles, but also perform centimeter-level localization of its own global position on the map. Such a localization function is highly security and safety critical in the AV context, since positioning errors can directly cause an AV to drive off the road or onto the wrong way. One direct threat to it is GPS spoofing, but fortunately, AV systems today predominantly use Multi-Sensor Fusion (MSF) algorithms that are generally believed to have the potential to practically defeat GPS spoofing. However, no prior work has studied whether today’s MSF algorithms are indeed sufficiently secure under GPS spoofing, especially in AV settings.

In this work, we perform the first study on the security of MSF-based localization in AV settings. We find that the state-of-the-art MSF-based AD localization algorithm can indeed generally enhance security, but has a take-over vulnerability that can fundamentally defeat the design principle of MSF; such vulnerable periods only appear dynamically and non-deterministically. Leveraging this insight, we design FusionRipper, a novel and general attack that opportunistically captures and exploits take-over vulnerabilities. We perform both trace-based and simulation-based evaluations, and find that FusionRipper can achieve at least 97% and 91.3% success rates across all traces for off-road and wrong-way attacks respectively, with high robustness to practical factors such as spoofing inaccuracies.

Targeted MSF Implementations & Representativeness

Our study mostly focuses on a production-grade MSF implementation, Baidu Apollo MSF (BA-MSF), due to its high representativeness in both design (Kalman Filter (KF) based MSF) and implementation (centimeter-level accuracy evaluated on a real-world AV fleet). To demonstrate the generality of our attack, we also evaluate it against two other popular KF-based MSF implementations: JS-MSF and ETH-MSF.

Attack Model & Goal

Attack model: Tailgating attack vehicle. We target an attack scenario where an attack vehicle tailgates a victim AV while launching GPS spoofing, which is both practical and effective, as demonstrated by previous work using real cars.

Attack goals. We consider 2 concrete attack goals: (1) off-road attack, which aims to deviate the victim AV to either the left or the right until it drives off the road pavement, and (2) wrong-way attack, which aims to deviate the victim AV to the left until it drives on the opposite traffic lane. This can cause a variety of road hazards, such as driving off the road to hit road curbs, falling off a highway cliff, or being hit by other vehicles that fail to yield, especially when the AV is driving on the wrong way.

Novel Attack Design: FusionRipper

Observation: Take-over vulnerability. We first analyze the upper-bound attack effectiveness, and discover that when the MSF is in relatively unconfident periods, which arise from a combination of dynamic and non-deterministic real-world factors such as sensor noise and algorithm inaccuracies, GPS spoofing is able to cause exponential growth of the deviation in the MSF output. This allows the spoofed GPS to become the dominating input source in the fusion process and eventually causes the MSF to reject the other input sources, which thus fundamentally defeats the design principle of MSF.
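To illustrate the mechanism, the following minimal 1-D Kalman filter update is an illustrative sketch (not BA-MSF code; all variable names are ours) of why a spoofed GPS measurement dominates the fused output when the MSF is unconfident: the Kalman gain approaches 1 once the filter's own variance is large relative to the GPS measurement variance.

    # Minimal 1-D Kalman-filter measurement update (illustrative sketch, not BA-MSF code).
    # x: fused lateral position estimate, p: its variance (low confidence = large p),
    # z: GPS measurement (possibly spoofed), r: GPS measurement variance.
    def kf_gps_update(x, p, z, r):
        k = p / (p + r)          # Kalman gain: approaches 1 when p >> r (MSF unconfident)
        x_new = x + k * (z - x)  # spoofed z dominates the fused estimate when k is near 1
        p_new = (1.0 - k) * p    # variance shrinks, so MSF becomes confident in the wrong position
        return x_new, p_new

    # Confident MSF (small p): a 2 m spoofed offset barely moves the estimate.
    print(kf_gps_update(x=0.0, p=0.01, z=2.0, r=0.1))   # -> (~0.18, ~0.009)

    # Unconfident MSF (large p): the spoofed measurement essentially takes over.
    print(kf_gps_update(x=0.0, p=10.0, z=2.0, r=0.1))   # -> (~1.98, ~0.099)

Once the fused estimate has been pulled toward the spoofed position and its variance has shrunk again, subsequent legitimate inputs (e.g., LiDAR locator results) look like outliers and are down-weighted or rejected, which is exactly the take-over effect described above.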

FusionRipper attack design. Since the vulnerable periods appear dynamically and non-deterministically, we design FusionRipper, a novel and general attack that opportunistically captures and exploits the take-over vulnerability in 2 stages: (1) vulnerability profiling, which measures when vulnerable periods appear, and (2) aggressive spoofing, which performs exponential spoofing to exploit the take-over opportunity.
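The sketch below outlines this two-stage logic, assuming a simple per-epoch spoofing interface; the callback names, the attack parameters d (initial spoofing distance) and f (exponential attack factor), and the threshold and default values are all illustrative placeholders, not the exact interfaces or values used in the paper.

    # Sketch of FusionRipper's two-stage logic (illustrative; the spoofer interface,
    # thresholds, and default parameter values below are assumptions).
    def fusion_ripper(send_spoofed_gps, measure_deviation, goal_reached,
                      d=0.6, f=1.5, dev_threshold=0.4, max_steps=120):
        # Stage 1: vulnerability profiling -- keep spoofing by a constant small
        # offset d and wait until the MSF output starts to deviate, which signals
        # a vulnerable (unconfident) period.
        for _ in range(max_steps):
            send_spoofed_gps(d)
            if measure_deviation() >= dev_threshold:
                break
        else:
            return False                 # no vulnerable period captured within budget

        # Stage 2: aggressive spoofing -- grow the spoofing offset exponentially
        # (d, d*f, d*f^2, ...) so the spoofed GPS takes over the fusion before
        # the MSF can regain confidence.
        offset = d
        for _ in range(max_steps):
            offset *= f
            send_spoofed_gps(offset)
            if goal_reached():           # e.g., victim AV off the road or on the wrong way
                return True
        return False

If the exploitation attempt fails (e.g., the vulnerable period was misjudged), the attacker can fall back to the profiling stage and wait for the next opportunity, which is what makes the attack opportunistic.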

Evaluation & Impact

Attack evaluation. We evaluate FusionRipper on 6 real-world sensor traces from both Apollo and the KAIST Complex Urban dataset. Our results show that when the attack can last 2 minutes, there always exists a set of attack parameters for FusionRipper to achieve at least 97% and 91.3% success rates across all traces for off-road and wrong-way attacks respectively, with an average success time of at most 35 seconds. To understand the attack practicality, we evaluate it with practical factors such as (1) spoofing inaccuracies and (2) AD control taking effect, and find that in both cases the attack success rates are affected by at most 4%.

End-to-end attack impact. To demonstrate the end-to-end safety consequences, we use LGSVL, a production-grade AD simulator that can interface with Apollo v5.0. We simulate 2 attack scenarios, one attacking to the left side of the road and the other to the right, and find that FusionRipper can successfully deviate the victim AV to hit a road barrier or a traffic sign. The recorded attack demos can be found below.

Attack Demos

In the 2 attack demos below, we show how the FusionRipper attack can cause the victim AV, which runs the end-to-end Baidu Apollo Autonomous Driving (AD) system, to hit the road barrier on the left or the stop sign on the right.

Attack to the Left - Hit Road Barrier

Attack to the Right - Hit Stop Sign

Experimental configurations in the demos:

  • Baidu Apollo version: r5.0.0

    • Enabled modules: Localization, Perception, Prediction, Planning, Routing, Control, Transform, Dreamview

    • We disabled traffic signals and stop signs in the planning module for longer continuous driving trajectories

  • LGSVL simulator version: 2019.10

  • AV driving speed: 1 m/s

  • Playback speed: 4x

FAQ

Is FusionRipper attack specific to BA-MSF in Baidu Apollo?

No, we take BA-MSF as a case study in our work because of its high representativeness in both design and implementation. In fact, according to our theoretical analysis, the take-over vulnerability is a fundamental problem for KF-based MSF due to sensor noise and algorithm inaccuracies. We also evaluate our attack on two other popular KF-based MSF implementations, which are likewise found vulnerable to our attack.

Due to the lack of public information, it is unclear whether other AV companies' systems are vulnerable to our attack. However, since our attack is general to KF-based MSF by design, if other AV companies also adopt such a representative design, they are, at least at the design level, also susceptible to the discovered take-over vulnerability.

Is FusionRipper applicable to Tesla Autopilot?

No. The scope of our work is the localization algorithm in high-level AD systems (e.g., Level-4). Tesla Autopilot is a Level-2 AD system, which does not use the sensor and algorithm setup studied in our work for centimeter-level global localization (i.e., high-end LiDAR, GPS, and IMU along with MSF algorithms). Instead, it typically only uses camera-based lane detection for local localization (i.e., within the current lane) to achieve automatic lane keeping, and relies on the human driver to take over at any time when necessary.

Did you evaluate the attack in real world?

No. Conducting real-world experiments on AVs requires an enormous amount of engineering effort and budget. Such experiments can also raise safety and ethical concerns. Thus, even AV companies heavily rely on trace-based and simulation-based evaluations when developing their AD systems. In this work, we follow this common practice in our attack evaluation. Moreover, we not only conduct the trace-based evaluation, but also rigorously evaluate the attack under practical considerations such as spoofing inaccuracies and end-to-end simulation with the complete AD system operating.

Is GPS spoofing practical?

Yes. GPS spoofing is a practical attack vector, which has been demonstrated on a variety of end systems such as smartphones ([Zeng et al., 2018] [Narain et al., 2019]), drones ([Kerns et al., 2014]), yachts ([Bhatti et al., 2017]), and even Tesla cars. Recently, a year-long investigation identified 9,883 spoofing events that affected 1,311 civilian vessel systems in Russia since 2016. Although it is illegal to sell GPS spoofers in the U.S., they can be built from commercial off-the-shelf components for as little as $223 ([Zeng et al., 2018]).

Is it really practical to track victim AV's position in real-time?

We assume the attacker owns an AV and can leverage AD perception algorithms to track the physical position of the victim. Accurate position tracking of surrounding obstacles is a basic task for AVs.

So, the attacker needs to own an AV to perform the attack? What is her/his motivation?

Yes, we assume such an attack setting because high-level AVs naturally have the capability of tracking the physical positions of surrounding obstacles for ensuring correct and safe driving.

Our attack is able to deviate the victim AV from its traffic lane, causing it to violate traffic rules and exhibit unsafe driving behaviors. These alone can already damage the reputation of the corresponding AV company. Thus, a likely attack incentive is business competition, which allows one AV company to deliberately damage the reputation of its rival companies and thus unfairly gain competitive advantages. This is especially realistic today considering that there are over 40 companies competing in the AV market.

Meanwhile, considering the direct safety impact, we also cannot rule out possible incentives for terrorist attacks or targeted murders, e.g., against civilians, controversial politicians, or celebrities.

How to defend against it?

Our attack depends on GPS spoofing, so one direct defense direction is to leverage existing GPS spoofing detection techniques, e.g., signal power monitoring ([Akos, 2012]) and multi-antenna based signal arrival angle detection ([Psiaki et al., 2016]), or prevention techniques, e.g., a cryptographic authentication based civilian GPS infrastructure ([Psiaki et al., 2016]). Unfortunately, neither GPS spoofing detection nor prevention is a fully solved problem today. Another fundamental defense direction is to improve the positioning confidence of MSF, since low confidence is the root cause of the take-over vulnerability. However, it is unclear when such breakthroughs can take place.

Although the fundamental defense directions are not immediately deployable, one promising direction is to mitigate the attack by leveraging independent positioning sources to cross-check the localization results and thus serve as fail-safe features for AD localization. For example, since FusionRipper causes the victim AV to deviate from its current lane, such deviations should be detectable by camera-based lane detection, a mature technology available in many vehicle models today. Please check out our paper (available after responsible disclosure) for more detailed discussions.
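As a concrete illustration of this direction, the sketch below cross-checks the lateral in-lane offset implied by the MSF global pose (projected onto the HD map) against the offset reported by camera-based lane detection; the function names, interfaces, and the 0.5 m threshold are assumptions made for illustration, not an existing AD API.

    # Sketch of a lane-detection cross-check fail-safe (illustrative only; the
    # interfaces and the 0.5 m threshold are assumptions).
    def localization_cross_check(msf_lateral_offset_m, camera_lateral_offset_m,
                                 threshold_m=0.5):
        # msf_lateral_offset_m: lateral offset from the lane center implied by the
        #   MSF global pose projected onto the HD map.
        # camera_lateral_offset_m: lateral offset from the lane center reported by
        #   camera-based lane detection.
        disagreement = abs(msf_lateral_offset_m - camera_lateral_offset_m)
        return disagreement > threshold_m   # True -> trigger a fail-safe (e.g., safe stop)

    # Example: MSF (under attack) reports the AV centered in its lane, but the
    # camera sees the AV 0.9 m to the left of the lane center.
    if localization_cross_check(0.0, -0.9):
        print("Localization mismatch detected: engage fail-safe maneuver")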

Is the camera-based lane detection already used in AVs for fail-safe purpose?

No. Unfortunately, we find that in today's high-level AD system designs, camera-based lane detection has not generally been considered for fail-safe purposes. For example, the latest release of Baidu Apollo (version 5.5) uses it only for camera calibration. This might be because the lane detection output is a local localization (i.e., within the current lane boundaries), and thus cannot be directly compared against the global localization from MSF.

Research Paper

[USENIX Security'20] Drift with Devil: Security of Multi-Sensor Fusion based Localization in High-Level Autonomous Driving under GPS Spoofing

Junjie Shen, Jun Yeon Won, Zeyuan Chen, Qi Alfred Chen

Appeared at 29th USENIX Security Symposium (USENIX Security '20), Boston, MA, Aug. 2020. (acceptance rate 16.1% = 157/977)

An extended version is available on arXiv.

[New] Attack and benign traces are now available for downloading here. A detailed README is provided in the tar file.

BibTex for citation:

@inproceedings{sec:2020:junjie:fusionripper,
  title = {{Drift with Devil: Security of Multi-Sensor Fusion based Localization in High-Level Autonomous Driving under GPS Spoofing}},
  author = {Junjie Shen and Jun Yeon Won and Zeyuan Chen and Qi Alfred Chen},
  booktitle = {Proceedings of the 29th USENIX Security Symposium (USENIX Security '20)},
  year = {2020},
  month = {August},
  address = {Boston, MA}
}


Team

Junjie Shen, Ph.D. student, CS, University of California, Irvine

Jun Yeon Won, M.S. student, CS, University of California, Irvine

Zeyuan Chen, B.S. student, EECS, University of California, Irvine

Qi Alfred Chen, Assistant Professor, CS, University of California, Irvine

Acknowledgements