Prediction-ADV

Summary

Trajectory prediction is a critical component for autonomous vehicles (AVs) to perform safe planning and navigation. However, few studies have analyzed the adversarial robustness of trajectory prediction or investigated whether the worst-case prediction can still lead to safe planning. To bridge this gap, we study the adversarial robustness of trajectory prediction models by proposing a new adversarial attack that perturbs normal vehicle trajectories to maximize the prediction error. Our experiments on three models and three datasets show that the adversarial perturbation increases the prediction error by more than 150%. Our case studies show that if an adversary drives a vehicle close to the target AV along the adversarial trajectory, the AV may make inaccurate predictions and even unsafe driving decisions. We also explore possible mitigation techniques via data augmentation and trajectory smoothing.

Contributions:

  • We propose the first adversarial attack and adversarial robustness analysis on trajectory prediction for AVs considering real-world constraints and impacts.

  • We report a thorough evaluation of adversarial attacks on various prediction models and trajectory datasets.

  • We explore mitigation methods against adversarial examples via data augmentation and trajectory smoothing.

Adversarial Attack

Attack Model

In this paper, we focus on the setting where the adversary drives one vehicle, called "the other vehicle" (OV), along a crafted trajectory. The AV observes the OV and performs trajectory prediction iteratively, producing a predicted trajectory of the OV at each time frame. The adversary controls the OV's whole trajectory to maximize the prediction error or to force the AV into unsafe driving behaviors.

The figure below demonstrates one example of the attack. By driving along a crafted trajectory, the OV appears to be changing lanes in the AV's prediction while it is actually driving straight. Given the high-error prediction, the AV brakes to yield to the OV. Braking on the highway is a serious safety hazard that may cause rear-end collisions.

Attack Objectives

The attack adds a perturbation to a normal trajectory and chooses the perturbation that maximizes the prediction error on the OV's adversarial trajectory. We use six metrics to evaluate the prediction error (the last two items each cover two directions); a sketch of these metrics follows the list.

  • Average displacement error (ADE).

  • Final displacement error (FDE).

  • Longitudinal deviation along the front/rear direction.

  • Lateral deviation along the left/right direction.
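
The sketch below illustrates how these metrics can be computed from a predicted and a ground-truth future trajectory. It is a minimal sketch under our assumptions: trajectories are (T, 2) NumPy arrays of positions in meters, and the longitudinal/lateral axes are defined by the vehicle's heading; the function names are ours, not the paper's implementation.

    import numpy as np

    def ade(pred, gt):
        """Average displacement error: mean L2 distance over all predicted frames."""
        return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

    def fde(pred, gt):
        """Final displacement error: L2 distance at the last predicted frame."""
        return float(np.linalg.norm(pred[-1] - gt[-1]))

    def lon_lat_deviation(pred, gt, heading):
        """Signed deviation decomposed along the heading (longitudinal, + front / - rear)
        and its left normal (lateral, + left / - right)."""
        lon_axis = heading / np.linalg.norm(heading)
        lat_axis = np.array([-lon_axis[1], lon_axis[0]])  # unit vector pointing to the left
        diff = pred - gt                                   # (T, 2) per-frame displacement
        return float(np.mean(diff @ lon_axis)), float(np.mean(diff @ lat_axis))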

Attack Methods

We optimize the perturbation to maximize the prediction error (i.e., one of the six metrics) using two approaches (a minimal PGD sketch follows the list):

  • White-box projected gradient descent (PGD).

  • Black-box particle swarm optimization (PSO).
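
Below is a minimal white-box PGD sketch. It assumes a differentiable PyTorch predictor `model` that maps an observed history of shape (T, 2) to a predicted future trajectory, and uses ADE as the objective; the actual attack optimizes one of the six metrics and additionally enforces the physical constraints described next.

    import torch

    def pgd_attack(model, history, future_gt, bound=1.0, steps=50, step_size=0.05):
        """Perturb the observed history to maximize the ADE of the model's prediction."""
        delta = torch.zeros_like(history, requires_grad=True)
        for _ in range(steps):
            pred = model(history + delta)
            loss = torch.norm(pred - future_gt, dim=-1).mean()  # ADE, to be maximized
            loss.backward()
            with torch.no_grad():
                delta += step_size * delta.grad.sign()   # gradient ascent step
                delta.clamp_(-bound, bound)              # project back into the hard bound
                delta.grad.zero_()
        return (history + delta).detach()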

We impose constraints on the perturbation (a constraint-checking sketch follows the list):

  • Hard bound on the deviation (e.g., 1 meter).

  • Bounds on physical properties (e.g., velocity, acceleration).
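
The physical-property constraints can be checked with finite differences over the perturbed trajectory, as in the sketch below; the bound values are illustrative placeholders, not the settings used in the paper.

    import numpy as np

    def satisfies_constraints(traj, orig, dt=0.1, max_dev=1.0,
                              max_speed=40.0, max_accel=10.0):
        """Check a perturbed trajectory `traj` (T, 2) against the attack constraints.
        `orig` is the unperturbed trajectory and `dt` the sampling interval in seconds."""
        if np.any(np.linalg.norm(traj - orig, axis=-1) > max_dev):
            return False                                 # hard bound on deviation
        vel = np.diff(traj, axis=0) / dt                 # finite-difference velocity
        acc = np.diff(vel, axis=0) / dt                  # finite-difference acceleration
        return (np.all(np.linalg.norm(vel, axis=-1) <= max_speed) and
                np.all(np.linalg.norm(acc, axis=-1) <= max_accel))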

Mitigation Methods

  • Data augmentation. Augment training trajectories by adding random noise so that the model learns the pattern of perturbed trajectories.

  • Trajectory smoothing. Use a linear smoothing algorithm to smooth trajectories as a preprocessing step, aiming to filter out the high-frequency pattern of the perturbation. A sketch of both mitigations follows this list.
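
Both mitigations have simple reference forms, sketched below; the noise scale and smoothing window are illustrative assumptions rather than the paper's exact settings.

    import numpy as np

    def augment_with_noise(traj, sigma=0.1):
        """Train-time augmentation: add zero-mean Gaussian noise (meters) to each position."""
        return traj + np.random.normal(0.0, sigma, size=traj.shape)

    def smooth_trajectory(traj, window=5):
        """Preprocessing: moving-average (linear) smoothing along the time axis
        to filter out the high-frequency pattern of the perturbation."""
        kernel = np.ones(window) / window
        return np.stack([np.convolve(traj[:, d], kernel, mode="same")
                         for d in range(traj.shape[1])], axis=1)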

Case Study

The figure below shows one scenario where the adversarial perturbation spoofs a fake lane change and causes a hard brake by the AV. In this scenario, the other vehicle (OV) is driving alongside the AV (we omit other objects for clarity), and the prediction is initially accurate (lateral deviation of 0.18 meters). After the perturbation (deviation bound of 0.5 meters, 3-second length, maximizing the deviation to the left), the average deviation to the left increases significantly to 1.27 meters (7x).

Worse, the high error directly affects the decision making of the AV. At time frames 0-2, the OV's predicted trajectory crosses the AV's future trajectory, which looks like a lane-change behavior. According to the AV planning logic (e.g., the open-source planning code of Baidu Apollo[2]), the AV tries to stop behind the crossing point to yield to the OV, and the required deceleration reaches 12 m/s^2, which exceeds the maximum deceleration of normal driving configured in the AV software. Such a hard brake substantially increases the risk of rear-end collisions.
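
The required deceleration follows from the basic stopping-distance relation a = v^2 / (2d). The speed and distance values in the sketch below are illustrative only, not numbers taken from the scenario.

    def required_deceleration(speed, distance):
        """Deceleration (m/s^2) needed to stop within `distance` meters from `speed` m/s,
        from v^2 = 2 * a * d."""
        return speed ** 2 / (2.0 * distance)

    # Illustrative: at ~20 m/s, stopping within ~17 m of the predicted crossing
    # point already requires about 11.8 m/s^2.
    print(required_deceleration(20.0, 17.0))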

After applying the train-time data augmentation, the deviation to the left is reduced to 0.91 meters. Although the predicted trajectory and the AV's future trajectory still cross, the AV only needs a deceleration of 6 m/s^2.

In the LGSVL simulator[1], the Apollo[2] AV applies a hard brake in response to the fake lane change.

Evaluation & Findings

Experiment Setup

  • Datasets: Apolloscape[3], NGSIM[4], nuScenes[5]

  • Models: GRIP++[6], FQA[7], Trajectron++[8]

Findings

  • Generally, the attack is successful: it increases the prediction error by more than 150%, and 62.2% of the attacks cause a deviation larger than 1.85 meters (half of a lane width).

  • Scenario matters. Prediction is harder in high-acceleration scenarios, e.g., at stop signs or when turning at intersections.

  • Auxiliary features help. Training with map information increases robustness.

  • Black-box attacks have similar performance to white-box attacks.

  • Launching the attack on consecutive frames is harder.

  • More findings in the full paper …

More results are shown in the table below.

References

[1] SVL Simulator by LG. https://www.svlsimulator.com/

[2] Baidu Apollo. https://www.apollo.auto/

[3] Apolloscape Dataset. http://apolloscape.auto/

[4] NGSIM. https://ops.fhwa.dot.gov/trafficanalysistools/ngsim.htm

[5] nuScenes Dataset. https://www.nuscenes.org/

[6] Li et al. GRIP++. https://github.com/xincoder/GRIP

[7] Kamra et al. FQA. https://github.com/nitinkamra1992/FQA

[8] Salzmann et al. Trajectron++. https://github.com/StanfordASL/Trajectron-plus-plus

Research Paper

[CVPR 2022] On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles

Qingzhao Zhang, University of Michigan

Shengtuo Hu, University of Michigan

Jiachen Sun, University of Michigan

Qi Alfred Chen, UC Irvine

Z. Morley Mao, University of Michigan