Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack (DRP attack)

[New] Source code of the DRP attack is released at: https://github.com/ASGuard-UCI/DRP-attack!

Summary

Automated Lane Centering (ALC) is a Level-2 driving automation technology that automatically steers a vehicle to keep it centered in the traffic lane. Due to its high convenience for human drivers, it is widely available today on various vehicle models such as Tesla, GM Cadillac, Honda Accord, Toyota RAV4, Volvo XC90, etc. While convenient, such a system is highly security and safety critical: when the ALC system starts to make wrong steering decisions, the human driver may not have enough reaction time to prevent safety hazards such as driving off the road or colliding with vehicles in adjacent lanes. Thus, it is imperative and urgent to understand the security properties of ALC systems.

In an ALC system, the most critical step is lane detection, which is generally performed by a Deep Neural Network (DNN) based model, as in Tesla Autopilot. Recent works show that DNNs are vulnerable to physical-world adversarial attacks such as malicious stickers on traffic signs. However, these methods cannot be directly applied to attack ALC systems due to two main design challenges. (1) In ALC systems, the physical-world attack generation needs to handle inter-dependencies among camera frames due to attack-influenced vehicle actuation: for example, if the attack steers the vehicle to the left in earlier frames, the following frames capture road areas more to the right, which directly affects their attack generation. (2) The optimization objective functions in prior works are designed mainly for image classification or object detection models and thus aim at changing class or bounding box probabilities, which does not directly apply to lane detection models that output lane line curves.

To fill this critical research gap, in this work, we are the first to systematically study the security of state-of-the-art deep learning based ALC systems in their designed operational domains under physical-world adversarial attacks. We formulate the problem with a safety-critical attack goal, and a novel and domain-specific attack vector: Dirty Road Patches (DRP). To systematically generate the attack, we adopt an optimization-based approach and overcome domain-specific design challenges such as camera frame inter-dependencies due to attack-influenced vehicle control, and the lack of objective function design for lane detection models.

Novel Attack Vector: Dirty Road Patch (DRP)

- Road Patches:

  • Can appear to be legitimately deployed on traffic lanes in the physical world, e.g., for fixing road cracks.

  • Deployment is made easy with adhesive road patch designs (shown on the right).

  • The attacker can thus take time to prepare the attack in-house by carefully printing the malicious input perturbations on top of such adhesive road patches, and then pretend to be a road worker to quickly deploy them.

  • To avoid drawing too much attention, the attacker can pick a deployment time when the target road is the most vacant, e.g., late at night.

- Dirty Patterns:

  • It is common for real-world roads to have dirt or white stains (shown on the right).

  • Using similar dirty patterns as the input perturbations can allow the malicious road patch to appear more normal and thus stealthier.


Evaluation & Impact

  • Real-world Trace-based Evaluation:

    • The results show that our attack is highly effective with over 97.5% success rates and less than 0.903 sec average success time, which is substantially lower than the average driver reaction time. This attack is also found (1) robust to various real-world factors such as lighting conditions, (2) general to different model designs, and (3) stealthy from the driver’s view.

  • Physical-World Realizability Evaluation: Miniature-Scale Experiment

    • Robustness under different lighting conditions: The results show that the same attack patch above is able to maintain a desired steering angle of 20-24 degrees to the left under all 12 lighting conditions.

    • Robustness to different viewing angles: Our results show that our attack always achieves over 23.4 degrees to the left from all viewing angles. We record videos in which we dynamically change viewing angles in a wide range while showing real-time lane detection results under attack.

  • Software-in-the-Loop Simulation with Safety Implications

    • To understand the safety impact, we perform software-in-the-loop evaluation on LGSVL, a production-grade autonomous driving simulator. Our attack achieves 100% success rates from all 18 starting positions in both highway and local road scenarios.

  • Safety Impact on Real Vehicle

    • In this experiment, we use a real vehicle to evaluate the safety impact of our attack when ALC is used with other driver assistance features such as Lane Departure Warning (LDW), Adaptive Cruise Control (ACC), Forward Collision Warning (FCW), and Automatic Emergency Braking (AEB).

    • In this experiment, we evaluate the safety impact by directly injecting an attack trace at the lane detection (LD) model output level. This also avoids blocking the road to stick patches to the ground and clean them up, which could affect other vehicles.

    • Our attack causes the vehicle to hit the cardboard boxes in all the 10 attack trials (100% collision rate), including 5 front and 5 side collisions.

Attack Demos and Evaluation

Physical-World Realizability Evaluation: Miniature-Scale Experiment

Experimental Setup (shown below):

  • Print high-resolution road texture on multiple ledger-size papers and concatenate them together to form a long straight road.

  • Print the malicious road patch using the same method, and place it on top of the miniature-scale road.

  • Mount the EON, the official OpenPilot dashcam device, on a tripod and point it at the miniature-scale road.

  • Road size, road patch size, and the EON mounting position are carefully calculated to represent OpenPilot installed on a Toyota RAV4 driving on a standard 3.6-meter wide highway at 1:12 scale.

  • Printer and camera: We use a commodity printer (RICOH MP C6004ex Color Laser Printer) and the official OpenPilot Dashcam (Sony IMX298 Exmor RS 16M Pixels).
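For reference, the 1:12 scaling works out as in the small sketch below; the camera mounting height is a hypothetical example value, not the measured RAV4 mounting position.

```python
SCALE = 1 / 12  # miniature-scale factor stated above

real_world_m = {
    "lane width": 3.6,              # standard highway lane (from the setup)
    "camera mounting height": 1.2,  # hypothetical windshield-mount height
    "nearest patch distance": 5.0,  # evaluated longitudinal range is 5-9 m
    "farthest patch distance": 9.0,
}

for name, real in real_world_m.items():
    print(f"{name}: {real} m real -> {real * SCALE * 100:.0f} cm miniature")
# e.g., lane width: 3.6 m real -> 30 cm miniature
```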

Results:

  • Benign Scenario:

    • The detected lane lines align accurately with the actual lane lines on the miniature-scale road, and the desired driving path is straight.

    • Desired steering angle: 0.9 degrees to the right.

  • Attacked Scenario:

    • The detected lane lines are bent significantly to the left, causing the desired driving path to curve to the left (shown on the right).

    • Designed steering angle at attack generation time: 23.4 degrees to the left.

    • Desired steering angle observed in the miniature-scale experiment: 22.3 degrees to the left.


Robustness to Different Viewing Angles in Physical World

Experimental Setup:

  • Dynamically move the official OpenPilot dashcam device to evaluate the robustness of the attack patch to different patch viewing angles in the miniature-scale experiment setup (detailed in the previous section).

  • We consider 3 types of device movement styles:

    • (1) Lateral & longitudinal movements

      • Covered distances to the patch: 5-9 meters in real-world scale.

      • Covered lateral position shifting range: from the leftmost to the rightmost positions within the current lane.

    • (2) Circular & random movements:

      • Move the dashcam circularly and randomly with more aggressive motions.

    • (3) Driving-like longitudinal movement

      • Move the dashcam forward towards the patch to mimic driving.

      • Repeated for 3 lateral positions: lane center, leftmost in lane, and rightmost in lane.

Lateral & Longitudinal Movements:

Attacked Lane Detection (1st Trial)

Benign Lane Detection

Attacked Lane Detection in a Far View

Side-by-Side Comparison of Attacked and Benign Lane Detection (1st Trial)

Attacked Lane Detection (2nd Trial)

Circular & Random Movements:

Attacked Lane Detection

Benign Lane Detection

Driving-like longitudinal movement:

Attacked Lane Detection

Benign Lane Detection

Side-by-Side Comparison of Attacked and Benign Lane Detection

Physical-World Realizability Evaluation: Impact from Different Lighting Conditions

In this experiment, we explore the impact of environmental lighting conditions on the effectiveness of the DRP attack.

Experimental Setup:

  • Light sources and light intensity control. There are 3 light sources in our experiments, and we control the light intensity of each of them as follows:

    • Light source 1: Studio lights

      • Intensity control: Each light has 4 lighting intensity levels with different numbers of light bulbs turned on or off. We denote them as "All On", "3 On", "1 On", and "All Off".

    • Light source 2: Room light in the lab ceiling

      • Intensity control: The room light in the lab can be turned on or off. We denote them as "On" and "Off".

    • Light source 3: Window light from outdoor

      • Our lab window faces east, and we conduct all experiments between 2-3 pm on sunny summer days.

      • Intensity control: The window in our lab has window blinds installed, which can be adjusted to "Open" or "Closed".

    • By combining the light intensity control for the 3 light sources above, we create 12 different lighting conditions in total.

  • Light intensity measurement:

    • For each lighting level, we measure the light intensity in lux using the popular lighting condition measurement app Lux Light Meter Pro.

    • The measurements are repeated 10 times for each lighting condition.

  • Adversarial road patch: We use the same adversarial patch as the one in the above section.

  • Printer and camera: We use a commodity printer (RICOH MP C6004ex Color Laser Printer) and the official OpenPilot Dashcam (Sony IMX298 Exmor RS 16M Pixels).
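As a small illustration of how the per-condition light intensities behind Table 1 can be aggregated and ranked, here is a sketch with made-up lux readings (not our measurements; the real setup takes 10 readings per condition):

```python
import statistics

# Keys: (studio lights, room light, window blinds); values: repeated readings.
lux_readings = {
    ("All On", "On", "Open"): [1205.0, 1212.0, 1198.0],  # made-up values
    ("All Off", "Off", "Closed"): [15.0, 14.6, 14.8],    # made-up values
}

# Rank conditions by average lux, as done for Table 1.
for cond, vals in sorted(lux_readings.items(),
                         key=lambda kv: statistics.mean(kv[1])):
    print(f"{cond}: mean {statistics.mean(vals):.1f} lux, "
          f"stdev {statistics.stdev(vals):.1f}")
```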

Results:

The experiment results are shown in Table 1. The key takeaways are:

  • The attack patch can maintain a desired steering angle of 20-24 degrees to the left under a wide range of light intensities (14.7-1210 lux, similar to lighting conditions ranging from sunset/sunrise on a fully overcast day to midday on an overcast day, as shown in Table 2), which are all significantly different from the desired steering angle in the benign scenario (0.9 degrees to the right). This shows a high robustness of our attack patch to common daytime lighting condition changes.

  • Such high attack robustness to lighting conditions might be because the OpenPilot camera has automatic exposure adjustment, which can keep the brightness of the model input relatively unchanged across different lighting conditions. While this feature benefits lane detection in normal driving, it also makes our attack more robust.

Table 1: Attack effectiveness under 12 different lighting conditions, ranked by average Lux (light intensity measure).

Table 2: Light intensity references (source: Wikipedia).

* Automatic exposure is enabled and makes the brightness of the above images similar.

Desired Steering Angles from 45 Different Viewing Angles

Experimental Setup

  • Place the official OpenPilot dashcam device at 45 different viewing angles to the attack patch in total (combinations of 5 longitudinal positions and 9 lateral positions) in the miniature-scale experiment setup.

    • 5 longitudinal distances to the patch: 5, 6, 7, 8, 9 meters in real-world scale.

    • 9 lateral offsets within current lane: -95% (almost leftmost), -75%, -50%, -25%, 0%, 25%, 50%, 75%, 95% (almost rightmost) of the maximum in-lane lateral shifting from lane center.
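The 45 poses form a simple grid; the sketch below enumerates them in real-world scale (divide by 12 for the miniature placement). The vehicle width used to derive the maximum in-lane shift is an assumed value for illustration only.

```python
longitudinal_m = [5, 6, 7, 8, 9]                       # distances to the patch
lateral_pct = [-95, -75, -50, -25, 0, 25, 50, 75, 95]  # % of max in-lane shift

LANE_WIDTH_M = 3.6     # standard highway lane width (from the setup)
VEHICLE_WIDTH_M = 1.8  # assumed vehicle width, for illustration only
max_shift_m = (LANE_WIDTH_M - VEHICLE_WIDTH_M) / 2     # stay fully in lane

poses = [(lon, pct / 100 * max_shift_m)
         for lon in longitudinal_m
         for pct in lateral_pct]
assert len(poses) == 45  # 5 longitudinal x 9 lateral viewing angles
```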

Software-in-the-Loop Simulation with Safety Implications

Simulation Configurations:

  • Attack Target: Production ALC system in OpenPilot 0.6.6

    • OpenPilot is an open-source production Level-2 driving automation system reported to have state-of-the-art performance similar to Tesla Autopilot and GM Super Cruise, and better than all others [Car&Driver] [Roadshow1] [Roadshow2].

    • It can be easily installed on over 80 popular vehicle models (e.g., Toyota, Chrysler, Cadillac) by mounting a dashcam to support Level-2 driving automation.

  • Simulator: LGSVL Simulator

    • LGSVL is an open-source production-grade Unity-based simulator designed specifically for evaluating production-level Autonomous Driving (AD) systems.

    • It leverages Unity's built-in physics engine to accurately simulate vehicle dynamics and tire-road interaction, and provides photo-realistic simulation of the driving environment.

    • It has already been demonstrated to be able to support production-grade AD systems such as Baidu Apollo and Autoware (a minimal scenario-setup sketch follows this list).

  • Simulation Scenarios & Attack Goals:

    • Local Road: Hit the truck driving in the opposite direction.

    • Highway: Hit the concrete barrier on the left.
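For readers unfamiliar with LGSVL, the sketch below shows roughly how such a scenario is set up through its PythonAPI. The scene and vehicle names are illustrative, and the bridging between LGSVL and OpenPilot in our evaluation is a custom integration not shown here.

```python
import lgsvl

# Connect to a running LGSVL instance (default PythonAPI host/port).
sim = lgsvl.Simulator("127.0.0.1", 8181)
sim.load("SanFrancisco")  # scene name is illustrative

# Spawn the ego vehicle at one of the map's spawn points.
state = lgsvl.AgentState()
state.transform = sim.get_spawn()[0]
ego = sim.add_agent("Jaguar2015XE", lgsvl.AgentType.EGO, state)

# The AD stack under test normally connects through a bridge, e.g.:
# ego.connect_bridge("127.0.0.1", 9090)

sim.run(time_limit=30.0)  # step the simulation for 30 seconds
```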

Local Road Scenario:

Attacked Driving with Malicious Dirty Road Patch

Benign Driving with Base Color Only Patch

Benign Driving without Patch

Highway Scenario:

Attacked Driving with Malicious Dirty Road Patch

Benign Driving with Base Color Only Patch

Benign Driving without Patch

Safety Impact on Real Vehicle

In this experiment, we use a real vehicle to evaluate the safety impact of our attack when ALC is used with other driver assistance features such as Lane Departure Warning (LDW), Adaptive Cruise Control (ACC), Forward Collision Warning (FCW), and Automatic Emergency Braking (AEB).

Evaluation Methodology & Setup:

    • Attack Target: Production ALC system in OpenPilot 0.7.4

    • Vehicle Type: Toyota 2019 Camry

      • OpenPilot provides ALC, LDW, and ACC.

      • Camry stock features provide AEB and FCW.

    • Experiment Site: A rarely-used dead-end road that has a double-yellow line in the middle and can only be used for U-turns.

    • Driving speed: ~28 mph, the minimum speed for engaging OpenPilot on our Camry.

    • Attack method: Inject an attack trace at the LD model output level (a minimal injection sketch follows this list).

      • Attack realizability & robustness under 12 different lighting conditions have been validated in the miniature-scale experiments above.

      • Since the experiment site is not a private road, this also avoids blocking the road to stick printed patches to the ground and clean them up, which could affect other vehicles.

      • The injected attack trace is generated from our simulation environment above at the same driving speed (28 mph).

    • Obstacle setup: Place cardboard boxes adjacent to but outside of the current lane, to mimic road barriers and obstacles in the opposite direction.

      • To ensure that we do not affect other vehicles, we place the cardboard boxes only when the entry point of this dead-end road has no other driving vehicles in sight, and quickly remove them right after our vehicle passes them, as required by the road code of conduct [1] [2] [3].
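As noted in the attack-method item above, the injection happens at the LD model output level. The sketch below is a hypothetical illustration of such trace replay; the class and file format are illustrative, not OpenPilot internals.

```python
import json

class AttackInjector:
    """Replay a recorded attack trace in place of the live LD output."""

    def __init__(self, trace_path):
        with open(trace_path) as f:
            # Per-frame recorded lane detection outputs (e.g., lane-line
            # curve parameters), generated from the simulation at 28 mph.
            self.trace = json.load(f)
        self.frame_idx = -1

    def maybe_override(self, live_ld_output, attack_active):
        # Pass the live LD output through until the attack is launched,
        # then replay the trace frame by frame (holding the last frame).
        if not attack_active:
            self.frame_idx = -1
            return live_ld_output
        self.frame_idx = min(self.frame_idx + 1, len(self.trace) - 1)
        return self.trace[self.frame_idx]
```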

Results:

  • Our attack causes the vehicle to hit the cardboard boxes in all 10 attack trials (100% collision rate), including 5 front and 5 side collisions.

    • Collision variations are due to randomness in dynamic vehicle control and timing differences in OpenPilot engagement and attack launch.

  • LDW, ACC, FCW, and AEB are not able to effectively prevent the safety damages caused by our attack on ALC.

    • LDW is not triggered in any of the 10 attack trials since it uses the same lane detection process as the ALC.

    • ACC does not take any action since it does not detect a front vehicle to follow.

    • FCW is triggered in 5 of the 10 collisions.

      • However, it is only a warning and thus cannot prevent the collision by itself.

      • Moreover, it is triggered only 0.46 sec on average (at most 0.74 sec) before the crash, which is too short for human drivers to react (average driver reaction time is at least 2.5 sec [1] [2] [3]).

    • AEB is not triggered in any of the 10 attack trials.

      • AEB (called pre-collision braking for Toyota) is used very conservatively today: Camry's manual says that AEB is triggered only when the possibility of a collision is extremely high.

      • Such conservative use of AEB can reduce false alarms and thus avoid mistaken sudden emergency braking in normal driving, but it also makes AEB unable to effectively prevent the safety damage caused by our attack: in our experiments, it was not able to prevent any of the 10 collisions.

Video Recordings for the Real-Vehicle Experiments:

Driving under attack trace injection

Comparison with benign driving

FAQ

Is DRP attack specific to OpenPilot?

No. While our evaluation targets OpenPilot, we believe our discovery and results can generally benefit the understanding of the security of production ALC systems today. Since DNNs are generally vulnerable to adversarial attacks, if other ALC systems also adopt the state-of-the-art DNN-based design, they are, at least at the design level, also vulnerable to our attack.

Do you confirm the end-to-end DRP attack effectiveness with a real vehicle?

No, we did not perform a direct end-to-end attack evaluation with real vehicles in the physical world. This limitation is caused by safety concerns (the vehicle-enforced minimum OpenPilot engagement speed is 28 mph, or 45 km/h) and limited access to private testing facilities (needed for patch placement). In the future, we hope to overcome this by finding ways to lower the minimum engagement speed and obtaining access to private testing facilities.

What is the size of the patch required for a DRP attack?

Our attack can achieve a high success rate (93.8%) with only 8 pieces of quickly-deployable 1.8m x 7.2m road patches, each requiring only 5-10 sec for 2 people to deploy.

To achieve the same goal, why can't the attacker just draw lane lines on the road?

Given the nature of lane detection, drawing lane lines on the road can indeed be an effective attack vector. However, drivers can easily anticipate how drawn lines affect ALC and take over driving immediately. Besides, our results show that the drawing-lane-line attack is much less effective: the DRP attack consistently achieves the highest attack success rate among the baselines we evaluated (with a 46% margin). More details are in Section 5.2 of our paper.

Can DRP attack be effective under black-box attack settings?

In this paper, we mainly focus on white-box attack settings because, as the first study, we need to first understand whether such an attack idea is feasible at all. In black-box attack settings, some parts of the current attack design cannot be used (i.e., the parts requiring gradients of the DNN models), but the remaining parts can still be used to generate attack patches. Thus, our attack can be extended to a black-box attack with gradient estimation (e.g., NES and SPSA) or a transfer-based approach.
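As a concrete example of the gradient-estimation direction, below is a minimal NES-style estimator sketch; loss_fn stands for the attack objective evaluated with query-only access to the target model, and all names are illustrative.

```python
import numpy as np

def nes_gradient(loss_fn, patch, sigma=0.1, n=50, rng=None):
    """Estimate d(loss)/d(patch) from 2*n queries, without model gradients."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(patch)
    for _ in range(n):
        u = rng.standard_normal(patch.shape)
        # Antithetic sampling (+u / -u) reduces the estimator's variance.
        grad += (loss_fn(patch + sigma * u) - loss_fn(patch - sigma * u)) * u
    return grad / (2 * sigma * n)

# Usage: gradient *ascent* on the attack objective with estimated gradients.
# patch += learning_rate * nes_gradient(attack_objective, patch)
```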

Does the patch still work if there are leaves or stains on the road?

While we have not directly evaluated this, our patch has a certain level of robustness against such surface changes thanks to our attack robustness improvement design (Section 4.3.4). Our results show the patch maintains the desired steering angle of 20-24 degrees to the left under all 12 lighting conditions in the miniature-scale experiment. However, our attack would likely fail if the majority of the patch area were covered with leaves or stains. We note that such a heavy amount of leaves and stains may also affect human drivers.

Is the patch stealthy to pedestrians?

In local road scenarios, the stealthiness from the pedestrian's view is also worth considering, as pedestrians may report anomalies if our attack patch looks too suspicious. Our user study includes the driver's view at 1 second before the attack succeeds, which is 7 meters from the driver's eyes and thus similar to a pedestrian's viewing distance on local roads. Only <25% of the participants chose to take over driving, meaning that >75% did not think our attack patch at this distance looked suspicious enough to affect driving. This may be because the general public today does not know that dirty road patches can be a road hazard. We hope that our paper can expose this and thus help raise such awareness.

How to defend against DRP attack?

As we show in Section 8.2.1, none of the model-level defenses can effectively defend against our attack without harming ALC performance in normal driving scenarios. Other possible defenses are sensor/data fusion based defenses, such as those used in Level-4 AD systems today. However, LiDAR is too expensive for commodity vehicles with Level-2 AD systems at this point; for example, Elon Musk, the co-founder of Tesla, claims that LiDARs are "expensive sensors that are unnecessary (for autonomous vehicles)". Fusion with map data is also commonly used in Level-4 AD systems, but localization in Level-2 AD systems is not as accurate as in Level-4 AD systems (which is typically at centimeter level). Thus, a follow-up research question is how to effectively detect our attack without raising too many false alarms, since mismatched lane information can also occur in benign cases due to (1) vehicle position and heading angle inaccuracies when localizing on the HD map, e.g., due to sensor noise in GPS and IMU, and (2) benign-case LD model inaccuracies.
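To illustrate why threshold tuning is the crux of such a map-fusion defense, here is a minimal sketch of the consistency check; the function and threshold are hypothetical, not a deployed defense.

```python
import numpy as np

def lane_mismatch(detected_lane_xy, map_lane_xy, threshold_m=0.5):
    """Flag a frame when the DNN-detected lane center deviates from the
    HD-map lane center by more than threshold_m on average.

    Both inputs are (N, 2) point arrays in the vehicle frame. Projecting the
    map lane into the vehicle frame requires localization, whose meter-level
    error on Level-2 systems is exactly what forces a loose threshold and
    thus a detection/false-alarm trade-off.
    """
    err = np.linalg.norm(detected_lane_xy - map_lane_xy, axis=1)
    return float(err.mean()) > threshold_m
```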

How is DRP attack different from the previous attacks?

The only prior effort that studied adversarial attacks on a production ALC is from Tencent (their preprint and publication at USENIX Security '21), which fooled Tesla Autopilot into following fake lane lines created by white stickers on road regions without lane lines. However, it neither attacks the designed operational domain of ALC, i.e., roads with lane lines, nor generates the perturbations systematically by addressing the design challenges we identified.

Did you perform responsible vulnerability disclosure to AD companies? What are their replies?

We initiated the responsible disclosure process in August 2020 with 13 companies that are developing ALC systems. 10 of them (77%) have replied saying that they have started investigations, and some have already had meetings with us to facilitate such investigations. We are now following up with these companies with new evaluation results.

Research Paper

[Usenix Security'21] Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack

Takami Sato*, Junjie Shen* (co-first authors), Ningfei Wang, Yunhan Jack Jia, Xue Lin, and Qi Alfred Chen

Published in the 30th USENIX Security Symposium (USENIX Security'21), Aug 2021 (Acceptance rate 18.7%)

@inproceedings{sec:2021:sato:drpattack,

title={{Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack}},

author={Takami Sato and Junjie Shen and Ningfei Wang and Yunhan Jia and Xue Lin and Qi Alfred Chen},

booktitle={Proceedings of the 30th USENIX Security Symposium (USENIX Security '21)},

year={2021}

}

[PDF] [Extended Version] [Slides] [Talk] [Code/Data Release]
