With the advances of deep learning, license plate recognition (LPR) based on deep learning has been widely used in public transport, including electronic toll collection, car parking management, and law enforcement. Deep neural networks are notoriously vulnerable to crafted adversarial examples, which has been demonstrated in many applications such as object recognition and malware detection. However, it is more challenging to launch a practical adversarial attack against LPR systems, as any covering or scrawling on a license plate is prohibited by law. Moreover, the created perturbations are susceptible to the surrounding environment, including illumination conditions, shooting distances, and shooting angles of LPR systems.
To this end, we propose the first practical adversarial attack, named RoLMA, against deep-learning-based LPR systems. We adopt illumination techniques to create a number of light spots as noise on the license plate, and design targeted and non-targeted strategies to find the optimal adversarial example against HyperLPR, a state-of-the-art LPR system. We then physicalize these perturbations on a real license plate based on the generated adversarial examples. Extensive experiments demonstrate that RoLMA can effectively deceive HyperLPR, with an 89.15% success rate in targeted attacks and 97.3% in non-targeted attacks. Moreover, our experiments demonstrate its high practicality, with a 91.43% success rate against physical license plates, and its imperceptibility: around 93.56% of surveyed participants could correctly recognize the license plates, and only 21.76% of them were aware of the artificial illumination.
Figure 1: The overall system of RoLMA
To convert the original license plate into an adversarial one, we propose a Robust Light Mask Attack (RoLMA). It proceeds in six phases, as shown in Figure 1:
Photographing takes a photo of the license plate; Illumination spotlights the digital license plate with several LED lamps;
Realistic Approximation transforms the current image in three ways: brightness adjustment, image scaling, and image rotation. When the LPR camera photographs a license plate, it may be far from the plate, positioned much higher than the plate, or affected by light reflected off the plate. These transformations simulate such real-world conditions and thereby increase the robustness of the attack;
HyperLPR Testing determines whether the processed image can fool HyperLPR;
Loss Calculation computes the loss value after each recognition attempt, whether HyperLPR recognizes the plate correctly or not. Based on the loss, RoLMA adjusts the illumination parameters for the next iteration of image transformation;
Physical Implementation connects the simulation with the physical environment: once HyperLPR misrecognizes the transformed image, we decorate the physical license plate according to the resulting illumination solution.
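The Illumination phase above can be sketched as additively blending light spots onto the plate image. The following is a minimal illustration, assuming each spot is modeled as a circular Gaussian; the function name and the Gaussian model are our assumptions for exposition, not RoLMA's exact rendering. The spot centers, radii, and intensities are the illumination parameters that the attack would tune.

```python
import numpy as np

def add_light_spot(img, cx, cy, radius, intensity):
    """Additively blend a circular Gaussian light spot centered at (cx, cy).

    img: H x W x 3 uint8 image of the license plate.
    radius: controls the spot's spread; intensity: peak brightness added.
    (Illustrative model -- RoLMA's actual spot rendering may differ.)
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Gaussian falloff from the spot center
    spot = intensity * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)
                              / (2.0 * radius ** 2))
    # add the same spot brightness to all three color channels, then clip
    return np.clip(img.astype(np.float32) + spot[..., None],
                   0, 255).astype(np.uint8)
```

Several such spots, placed and optimized per the loss described below, together form the light mask applied to the plate.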
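The Realistic Approximation phase can be sketched as a random combination of the three transformations named above. This is an illustrative implementation with nearest-neighbor resampling; the function names and the parameter ranges (brightness factor, scale factor, rotation angle) are our assumptions, not RoLMA's exact values.

```python
import random
import numpy as np

def adjust_brightness(img, factor):
    """Scale pixel intensities to mimic varying illumination conditions."""
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def scale(img, factor):
    """Nearest-neighbor resize to mimic varying shooting distances."""
    h, w = img.shape[:2]
    rows = (np.arange(int(h * factor)) / factor).astype(int)
    cols = (np.arange(int(w * factor)) / factor).astype(int)
    return img[rows][:, cols]

def rotate(img, angle_deg):
    """Nearest-neighbor rotation about the image center (shooting angle)."""
    theta = np.deg2rad(angle_deg)
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse-map each output pixel back to its source coordinate
    src_x = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy) + cx
    src_y = -np.sin(theta) * (xs - cx) + np.cos(theta) * (ys - cy) + cy
    sx = np.clip(np.round(src_x), 0, w - 1).astype(int)
    sy = np.clip(np.round(src_y), 0, h - 1).astype(int)
    return img[sy, sx]

def realistic_approximation(img, rng=random):
    """Apply one random brightness/scale/rotation combination.
    The sampling ranges here are illustrative assumptions."""
    img = adjust_brightness(img, rng.uniform(0.7, 1.3))
    img = scale(img, rng.uniform(0.8, 1.2))
    return rotate(img, rng.uniform(-10, 10))
```

Sampling a fresh transformation at each optimization step encourages perturbations that survive many camera positions and lighting conditions, which is what makes the attack robust in the physical world.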
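The Loss Calculation phase can be illustrated with simple per-character losses, assuming the recognizer outputs one probability vector per character position of the plate. The formulas below are a common cross-entropy-style choice for targeted and non-targeted attacks and are our assumption, not necessarily RoLMA's exact loss: the targeted loss drives the prediction toward an attacker-chosen plate string, while the non-targeted loss drives it away from the true one.

```python
import numpy as np

EPS = 1e-12  # avoid log(0)

def targeted_loss(probs, target_idx):
    """Cross-entropy of the target label: minimizing this pushes the
    prediction toward the attacker-chosen plate characters.

    probs: (num_chars, vocab_size) per-position probabilities.
    target_idx: target character index for each position.
    """
    picked = probs[np.arange(len(target_idx)), target_idx]
    return float(-np.sum(np.log(picked + EPS)))

def non_targeted_loss(probs, true_idx):
    """Negative cross-entropy of the true label: minimizing this pushes
    probability mass away from the correct plate characters."""
    picked = probs[np.arange(len(true_idx)), true_idx]
    return float(np.sum(np.log(picked + EPS)))
```

The optimizer then adjusts the illumination parameters (spot positions, radii, intensities) to reduce the chosen loss and feeds the updated image back through the transformation and testing phases.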