Collaborative Perception Security

Summary

Collaborative perception greatly enhances the sensing capability of connected and autonomous vehicles (CAVs), but it also exposes new security risks: a CAV's driving decisions come to rely on remote, untrusted data. This enables data fabrication attacks, in which an attacker delivers crafted malicious data to victims in order to perturb their perception results, causing hard braking or increased collision risk. In this work, we break new ground by proposing several realistic data fabrication attacks and corresponding mitigation methods.

Threat model: 

Summary of contributions:

Figure 1. Different types of collaborative perception. The attacker, acting as a sender, can fabricate the data highlighted in red boxes.

Methods

Attack constraints:

Zero-delay attack scheduling

The key idea is to parallelize attack generation with the perception pipeline: while the victims are processing the current frame, the attacker is already preparing the attack for the next frame (Figure 3).
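Below is a minimal sketch of this pipelining idea in Python. The helpers (predict_next_frame, craft_malicious_message, broadcast) are hypothetical placeholders standing in for the attacker's real pipeline, not code from the paper.

import queue
import threading
import time

# Hypothetical stand-ins for the attacker's pipeline; real attack generation
# (e.g., adversarial optimization or ray casting) is far more expensive.
def predict_next_frame(frame):
    return frame                          # placeholder motion prediction

def craft_malicious_message(frame):
    time.sleep(0.05)                      # simulate costly attack generation
    return {"payload": frame}

def broadcast(message):
    print("sending fabricated message for frame", message["payload"])

def attack_worker(frames, payloads):
    # Prepare the malicious message for the NEXT frame while victims are
    # still processing the current one.
    while True:
        frame = frames.get()
        if frame is None:
            break
        payloads.put(craft_malicious_message(predict_next_frame(frame)))

def attacker_main(lidar_stream):
    frames, payloads = queue.Queue(maxsize=1), queue.Queue(maxsize=1)
    threading.Thread(target=attack_worker, args=(frames, payloads),
                     daemon=True).start()
    for frame in lidar_stream:
        # The payload prepared during the previous cycle is sent right away,
        # so the fabricated message adds no delay to the current frame.
        if not payloads.empty():
            broadcast(payloads.get())
        frames.put(frame)                 # start preparing the next payload

attacker_main(range(5))                   # toy stream of five "frames"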

Black-box ray casting attack on early-fusion systems

The attacker simulates the scene as if an object had been spoofed or removed and reconstructs the corresponding LiDAR point cloud via ray casting. The traced rays follow the physical laws of the original lasers, so the reconstructed point cloud is highly realistic.
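The sketch below illustrates the spoofing direction under simplifying assumptions: an axis-aligned box stands in for the injected object, and every original laser ray whose return lies behind that box is truncated at the box surface. The real attack traces rays against realistic object models; the function names here are illustrative.

import numpy as np

def ray_box_intersection(origins, dirs, box_min, box_max):
    # Slab-method intersection of rays with an axis-aligned box.
    # Returns (hit_mask, t_near), where t_near is the entry distance.
    inv = 1.0 / np.where(dirs == 0, 1e-9, dirs)
    t0 = (box_min - origins) * inv
    t1 = (box_max - origins) * inv
    t_near = np.max(np.minimum(t0, t1), axis=1)
    t_far = np.min(np.maximum(t0, t1), axis=1)
    hit = (t_near <= t_far) & (t_far > 0)
    return hit, np.maximum(t_near, 0)

def spoof_object(points, sensor_origin, box_min, box_max):
    # Re-trace every original laser ray; rays whose true return lies behind
    # the fake object's box are truncated at the box surface, so point
    # density and angular distribution stay physically plausible.
    dirs = points - sensor_origin
    dists = np.linalg.norm(dirs, axis=1, keepdims=True)
    dirs = dirs / dists
    origins = np.broadcast_to(sensor_origin, points.shape)
    hit, t_near = ray_box_intersection(origins, dirs, box_min, box_max)
    occluded = hit & (t_near < dists[:, 0])
    new_points = points.copy()
    new_points[occluded] = (sensor_origin
                            + dirs[occluded] * t_near[occluded][:, None])
    return new_points

# Example: spoof a roughly car-sized box 10 m ahead of the sensor.
# cloud = spoof_object(cloud, np.zeros(3),
#                      np.array([8.0, -1.0, -1.5]), np.array([12.0, 1.0, 0.5]))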

White-box adversarial attack on intermediate-fusion systems

The attacker optimizes a perturbation on its own feature map by performing a backward pass in each LiDAR cycle and reuses the perturbation across frames as an online attack. A spatial mask is applied to the perturbation to achieve targeted attacks.
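A simplified PGD-style sketch of one per-frame optimization step is shown below. The fusion_model interface, the target_loss callable, and the hyperparameters are assumptions for illustration, not the paper's exact implementation.

import torch

def optimize_perturbation(fusion_model, feat_attacker, feats_others, mask,
                          target_loss, delta=None, steps=5, eps=0.5, alpha=0.1):
    # One online update of the feature-space perturbation.
    #   feat_attacker: attacker's intermediate feature map, shape (C, H, W)
    #   feats_others:  feature maps received from benign CAVs
    #   mask:          binary spatial mask (1, H, W) restricting the
    #                  perturbation to the targeted region (targeted attack)
    #   target_loss:   callable mapping fused detections to the attack
    #                  objective the attacker wants to minimize
    #   delta:         perturbation carried over from the previous frame
    if delta is None:
        delta = torch.zeros_like(feat_attacker)
    delta = delta.clone().detach().requires_grad_(True)

    for _ in range(steps):
        perturbed = feat_attacker + delta * mask
        detections = fusion_model(torch.stack([perturbed, *feats_others]))
        loss = target_loss(detections)
        loss.backward()                       # backward pass within one cycle
        with torch.no_grad():                 # PGD step, clipped to an eps-ball
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return delta.detach()                     # warm start for the next frame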

Figure 2. Illustration of the temporal order of message exchanges and data processing in collaborative perception. 

Figure 3. Overview of the zero-delay attack scheduling. 

Figure 4. Overview of the attacks.

Mitigation

Each CAV broadcasts an occupancy map indicating the occupied and free regions in 2D space, and the defense performs consistency checks on the shared maps (Figure 5; a rough sketch follows the figure).

Figure 5. Overview of anomaly detection.
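As a rough illustration of the occupancy-consistency idea (one possible instantiation, not the paper's exact checks), the sketch below flags a sender whose occupancy map contradicts what other CAVs directly observed; the cell encoding and threshold are illustrative assumptions.

import numpy as np

# Illustrative cell encoding for the shared 2D occupancy map.
FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def consistency_score(sender_map, other_maps):
    # Count cells where the sender's claim contradicts what other CAVs
    # directly observed; UNKNOWN cells are ignored because occlusion is
    # legitimate. A spoofed object appears OCCUPIED where others saw FREE;
    # a removed object appears FREE where others saw OCCUPIED.
    disagreements, observed = 0, 0
    for other in other_maps:
        known = (sender_map != UNKNOWN) & (other != UNKNOWN)
        observed += known.sum()
        disagreements += (known & (sender_map != other)).sum()
    return disagreements / max(observed, 1)

# Usage: flag the sender if the disagreement ratio exceeds a threshold
# (the 0.05 below is an illustrative choice, not a tuned value).
# if consistency_score(map_from_sender, [ego_map, map_from_cav2]) > 0.05:
#     raise_alarm()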

Results

Experiment setup

Main results

The attacks achieve a success rate above 90%. The anomaly detection detects 90% of attacks with a false positive rate (FPR) below 5%.

Analysis of results

The attacks generalize across different models (PointPillars/VoxelNet, Att/V2VNet).

Ray casting attacks are easier when the target is closer to the attacker, while intermediate-fusion adversarial attacks are not restricted by distance.

The attack is stronger when multiple attackers are present.

The defense generalizes across the various attack methods.

Case study on MCity [3]

References

[1] Tu, James, et al. "Adversarial attacks on multi-agent communication." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.

[2] Xu, Runsheng, et al. "OPV2V: An open benchmark dataset and fusion pipeline for perception with vehicle-to-vehicle communication." 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022.

[3] https://mcity.umich.edu/ 

Research Paper

[To appear at USENIX Security 2024] On Data Fabrication in Collaborative Vehicular Perception: Attacks and Countermeasures

Qingzhao Zhang, University of Michigan

Shuowei Jin, University of Michigan

Ruiyang Zhu, University of Michigan

Jiachen Sun, University of Michigan

Xumiao Zhang, University of Michigan

Qi Alfred Chen, UC Irvine

Z. Morley Mao, University of Michigan