Existing backdoor attacks on lane detection:
• Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving (ACM MM'22)
• Towards Robust Physical-world Backdoor Attacks on Lane Detection (ACM MM'24)
Limitations of existing work:
L1: Existing work places triggers at fixed (MM'22) or random (MM'24) positions.
Random placement risks putting triggers in semantically irrelevant or low-attention regions, which weakens the model's memorization of them during training and thereby diminishes attack efficacy.
L2: Backdoor samples are easy to discover and can be flagged by forensic detectors.
Existing backdoor attack methods often exhibit limited stealthiness, primarily due to conspicuous trigger designs. Some approaches inject triggers at fixed spatial locations, making them visually salient and easily recognizable to human observers. Others superimpose predefined trigger patterns directly onto input images, often neglecting semantic coherence with the surrounding visual context. Such approaches not only compromise the perceptual realism of the image but also introduce distinguishable artifacts in the frequency domain, making them easy to detect and defend against with forensic techniques.
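To make the frequency-domain point concrete, here is a minimal, illustrative sketch (not taken from either paper) using numpy: a pasted checkerboard trigger with sharp edges injects high-frequency spectral energy that even a simple forensic energy check can flag. The smooth "road scene" and the energy threshold are stand-in assumptions.

```python
import numpy as np

def high_freq_energy(img: np.ndarray, radius_frac: float = 0.25) -> float:
    """Fraction of spectral energy outside a central low-frequency disk."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    mask = dist > radius_frac * min(h, w)
    return float(spec[mask].sum() / spec.sum())

# Smooth stand-in "road scene": nearly all energy at low frequencies.
y, x = np.mgrid[0:256, 0:256] / 256.0
clean = 0.5 + 0.2 * np.sin(2 * np.pi * 2 * x) * np.cos(2 * np.pi * 2 * y)

# Paste a checkerboard trigger: its sharp edges add high-frequency peaks.
patched = clean.copy()
patched[100:132, 100:132] = np.indices((32, 32)).sum(axis=0) % 2

print(f"clean   high-freq energy: {high_freq_energy(clean):.4f}")
print(f"patched high-freq energy: {high_freq_energy(patched):.4f}")  # much higher
```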
Our work:
We present DBALD, a novel and stealthy backdoor attack framework that systematically optimizes both trigger placement and stealth-aware trigger generation for lane detection systems.
To address L1: we compute an attack-strategy heatmap and select the optimal trigger placement from it (see the first sketch below).
To address L2: we use a diffusion model to synthesize realistic, scene-consistent backdoor triggers (see the second sketch below).
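First, a minimal sketch of heatmap-guided placement. It assumes a PyTorch lane-detection model `model` and an attacker-defined loss `attack_loss` that scores how strongly the prediction is pushed toward the target lane layout; both names, and the gradient-magnitude heatmap itself, are illustrative assumptions rather than the exact DBALD construction.

```python
import torch
import torch.nn.functional as F

def placement_heatmap(model, image, target, attack_loss):
    """Gradient-magnitude heatmap: pixels whose perturbation most moves
    the prediction toward the attacker-chosen lane layout."""
    image = image.clone().requires_grad_(True)
    loss = attack_loss(model(image.unsqueeze(0)), target)
    loss.backward()
    return image.grad.abs().sum(dim=0)  # (H, W), summed over channels

def best_patch_position(heatmap, patch_hw=(32, 32)):
    """Slide the patch window over the heatmap and return the top-left
    corner with the largest accumulated attack relevance."""
    ph, pw = patch_hw
    scores = F.avg_pool2d(heatmap[None, None], (ph, pw), stride=1)[0, 0]
    idx = torch.argmax(scores)
    return divmod(idx.item(), scores.shape[1])  # (row, col) of top-left
```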
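Second, a minimal sketch of the stealth-aware trigger generation step, using an off-the-shelf Stable Diffusion inpainting pipeline from Hugging Face `diffusers` as a stand-in generator: instead of pasting a pattern, a realistic object is inpainted at the heatmap-chosen location so the trigger blends with scene semantics. The checkpoint, prompt, and rectangular mask are illustrative assumptions, not the exact DBALD pipeline.

```python
import torch
from PIL import Image, ImageDraw
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

def inpaint_trigger(scene: Image.Image, top_left, patch_hw=(64, 64),
                    prompt="a traffic cone on the road") -> Image.Image:
    """Inpaint a realistic object at the chosen location.
    `scene` is assumed to be 512x512, matching the SD inpainting checkpoint."""
    mask = Image.new("L", scene.size, 0)
    r, c = top_left
    ph, pw = patch_hw
    ImageDraw.Draw(mask).rectangle([c, r, c + pw, r + ph], fill=255)
    return pipe(prompt=prompt, image=scene, mask_image=mask,
                num_inference_steps=30).images[0]
```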