In the rapidly evolving landscape of Autonomous Driving (AD) technology, AD vehicles are becoming an integral part of our daily lives. Compliance with traffic signs is essential for all vehicles, whether they are high-autonomy AD vehicles (e.g., robo-taxis), semi-autonomous AD vehicles (e.g., those with Tesla Autopilot), or conventional human-driven vehicles. Failure to obey traffic signs can lead to accidents, posing a threat to human life.
Given the importance of traffic sign detection, a natural question is whether AD vehicles are truly as secure as we hope. To answer this critical question, recent research on the security of Traffic Sign Recognition (TSR) systems has highlighted vulnerabilities to a wide range of physical adversarial attacks that can significantly impair traffic sign detection accuracy. Among them, the most representative and most widely exploited attack vectors are physical patches or posters, which are low-cost, highly deployable, and demonstrably capable of causing various highly severe attack effects. For instance, they can make critical legitimate traffic signs undetectable (hiding attacks) or trigger false detections at arbitrary attacker-chosen positions (appearing attacks). Such attacks can cause various potential safety hazards such as traffic sign violations, unexpected emergency braking, and speeding. Due to such high potential for practical impact, these physical-world adversarial attacks on TSR have drawn wide attention not only from the technology community but also from the general public.
Despite such high practical impact potential, existing works have generally only evaluated attack effects on academic TSR models, leaving the impacts of these attacks on real-world commercial TSR systems largely unclear. A few recent works have tried to understand such commercial TSR system-level impacts, but their evaluations are all limited to one particular vehicle model, sometimes even an unknown one, making both the generalizability and the representativeness of their results questionable. This raises a critical research question: Can any of the existing physical-world TSR adversarial attacks achieve a general impact on commercial TSR systems today?
In this work, (1) we conduct the first large-scale measurement of physical-world adversarial attacks against commercial TSR systems; (2) we discover a spatial memorization design that commonly exists in today's commercial TSR systems, which can keep memorizing a sign detection result until the sign’s reaction need in the spatial domain is met (e.g., when the vehicle passes the detected sign's position); (3) we mathematically model the impact of this design on the TSR system-level attack success for both hiding and appearing attacks, resulting in new attack success metric designs that can systematically consider the spatial memorization effect. We then use them to revisit the evaluations, designs, and capabilities of existing attacks in this problem space. (4) Through the commercial TSR system measurements, new metric designs and analysis, and the revisiting of existing attacks, we uncover a total of 7 novel observations compared to existing knowledge in this problem space, some of which directly challenge the observations or claims in prior works due to the introduction of the new attack success metrics.
Vehicles: 4 out of these 5 models are tested by us; 1 confusing vehicle model is included so as not to directly reveal the exact models tested.
TSR functions of the four vehicle models tested in our measurement study:
Attacks (RP2, SIB, FTE) and surrogate models (YOLOv5: Y5; Faster R-CNN: FR) with the STOP sign and 25 mph speed limit sign types
Test setup: Our experiments are performed outdoors on sunny afternoons between 1 pm and 4 pm to simulate the most common real-world attack scenarios. To maintain consistent testing conditions, we measure the ambient light level using a light meter, ensuring that all tests are conducted within a range of 25,000 to 30,000 lux.
Observation 1: It is in fact possible for existing physical-world adversarial attack works from academia to achieve highly reliable (100%) attack success against a certain commercial TSR system function in practice. However, such black-box commercial system attack capability is currently not generalizable across different representative commercial system models and sign types. Overall, the black-box transfer attack success rate on commercial systems (at least in our setup, which covers vehicle models accounting for at least 33.2% of commercial TSR systems sold in the U.S. in 2023) is much lower than that on academic models in prior works.
Observation 2: We discover a spatial memorization design that commonly exists in today’s commercial TSR systems, which can keep memorizing a sign detection result until the sign’s reaction need in the spatial domain is met (e.g., when the vehicle passes the detected sign’s position). This design may create a significant discrepancy between the TSR model-level attack effect and that at the TSR system level.
Observation 3: Due to spatial memorization, hiding attacks are theoretically no easier, and can be substantially harder, than appearing attacks in achieving TSR system-level attack success. Such an attack hardness gap can be huge (e.g., ≥93.8% absolute differences in attack success rate values). Meanwhile, because they do not consider spatial memorization, existing TSR model-level attack success metrics can be highly misleading in judging the TSR system-level attack effect, with a potential of ∼50% absolute attack success rate value differences.
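The intuition behind Observations 2 and 3 can be illustrated with a minimal simulation (a hedged sketch, not the paper's actual metric formulas; the per-frame model-level success rate and frame count below are hypothetical). Under spatial memorization, the system memorizes any sign detection until the vehicle passes the sign, so a hiding attack must fool the model in every frame of the approach, while an appearing attack needs only one successful frame:

```python
import random

def system_level_success(per_frame_success, attack_type):
    """TSR system-level outcome under spatial memorization (assumed model):
    - hiding: one missed frame means the sign is detected and memorized,
      so the attack must succeed in EVERY frame;
    - appearing: one detected fake sign is memorized, so the attack
      succeeds if ANY frame succeeds."""
    if attack_type == "hiding":
        return all(per_frame_success)
    if attack_type == "appearing":
        return any(per_frame_success)
    raise ValueError(attack_type)

# Hypothetical 80% per-frame model-level success rate over a
# 30-frame approach to the sign, Monte Carlo over 10,000 approaches.
random.seed(0)
p, frames, trials = 0.8, 30, 10_000
hide = sum(system_level_success([random.random() < p for _ in range(frames)],
                                "hiding") for _ in range(trials)) / trials
appear = sum(system_level_success([random.random() < p for _ in range(frames)],
                                  "appearing") for _ in range(trials)) / trials
print(f"model-level per-frame success rate:  {p:.0%}")
print(f"system-level hiding success rate:    {hide:.2%}")
print(f"system-level appearing success rate: {appear:.2%}")
```

With these assumed numbers, the hiding rate collapses toward 0.8^30 ≈ 0.1% while the appearing rate approaches 100%, showing how a seemingly strong model-level success rate can be highly misleading about system-level hiding attack success.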
Observations for Revisiting Existing Research
[NDSS'25] Revisiting Physical-World Adversarial Attack on Traffic Sign Recognition: A Commercial Systems Perspective
Ningfei Wang, Shaoyuan Xie, Takami Sato, Yunpeng Luo, Kaidi Xu, Qi Alfred Chen
ISOC Network and Distributed System Security (NDSS) Symposium, 2025. (Acceptance rate TBA)
BibTex for citation:
@inproceedings{wang2025revisiting,
title={{Revisiting Physical-World Adversarial Attack on Traffic Sign Recognition: A Commercial Systems Perspective}},
author={Wang, Ningfei and Xie, Shaoyuan and Sato, Takami and Luo, Yunpeng and Xu, Kaidi and Chen, Qi Alfred},
booktitle={ISOC Network and Distributed System Security Symposium (NDSS)},
year={2025}
}
Ningfei Wang, Ph.D. student, University of California, Irvine
Shaoyuan Xie, Ph.D. student, University of California, Irvine
Takami Sato, Ph.D. student, University of California, Irvine
Yunpeng Luo, Ph.D. student, University of California, Irvine
Kaidi Xu, Assistant Professor, Drexel University
Qi Alfred Chen, Assistant Professor, University of California, Irvine
This research was supported by
NSF under grants CNS-1929771 and CNS-2145493;
USDOT under Grant 69A3552348327 for the CARMEN+ University Transportation Center.