1. Attack: Geometry-Aware Adversarial Patches in the Physical World:
Adversarial patches are an important form of adversarial attack in the physical world. However, current adversarial patch methods mainly optimize the patch's texture to achieve attacks (we call them Texture-Aware Adversarial Patches). These physical attacks face several significant challenges. First, the attacks often take the form of irregular noise patterns, which lack naturalness and make them easy to notice in real-world scenarios. Second, when transferred from the digital space to the physical domain, the attack patterns are prone to distortions that compromise their effectiveness. In addition, environmental factors during real-world deployment, such as variations in shooting angle, lighting, and distance, introduce further distortions, making it difficult to maintain consistent and reliable attacks. These challenges highlight the complexity of designing robust physical attacks for real-world applications.
To address these challenges, our group proposes Geometry-Aware Adversarial Patches in the Physical World: by adversarially optimizing the geometric attributes of patches, such as their location, rotation, and shape, we create more natural and imperceptible adversarial patch attacks. This form of adversarial patch effectively improves the robustness of adversarial attacks in the physical world. Our representative works in this direction are as follows: (1) Location-aware adversarial stickers for face recognition models, which optimize the location and rotation of existing meaningful stickers on the object to perform physical attacks. (2) Shape-aware adversarial patches, which can be applied in the visible and infrared domains simultaneously, thus achieving cross-modal physical attacks. (3) Texture-geometry joint adversarial patches, where we show that the proposed geometry-aware adversarial patch can also be combined with the traditional texture-aware adversarial patch, achieving better attack performance.
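To illustrate the location-aware idea, the sketch below runs a toy black-box random search over a sticker's position and rotation to lower a classifier's confidence. It is a minimal sketch, not the actual algorithm from the papers above: `apply_sticker`, `attack_geometry`, and `model_score` are hypothetical names, rotation is simplified to 90-degree steps, and the image is a 2-D grayscale array.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_sticker(image, sticker, x, y, theta):
    """Paste a rotated sticker onto a copy of the image at (x, y).
    For simplicity, rotation is restricted to 90-degree steps (theta in 0..3)."""
    out = image.copy()
    s = np.rot90(sticker, k=int(theta) % 4)
    h, w = s.shape[:2]
    out[y:y + h, x:x + w] = s
    return out

def attack_geometry(image, sticker, model_score, iters=300):
    """Toy black-box random search over sticker location and rotation.
    model_score returns the target class confidence; lower means a stronger attack.
    The sticker texture itself is never modified, only its geometry."""
    H, W = image.shape[:2]
    h, w = sticker.shape[:2]
    best_params, best_score = None, float("inf")
    for _ in range(iters):
        x = int(rng.integers(0, W - w + 1))
        y = int(rng.integers(0, H - h + 1))
        theta = int(rng.integers(0, 4))
        score = model_score(apply_sticker(image, sticker, x, y, theta))
        if score < best_score:
            best_score, best_params = score, (x, y, theta)
    return best_params, best_score
```

In a real attack, `model_score` would query the face recognition model; here any scoring function works, which is what makes the geometry-only search naturally black-box.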
Selected published papers:
Xingxing Wei, Shouwei Ruan, Yinpeng Dong, Hang Su, Xiaochun Cao*, "Distributional Location-Aware Adversarial Patch for Facial Images", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024, accepted.
Xingxing Wei*, Yao Huang, Yitong Sun, Jie Yu, "Unified Adversarial Patch for Visible-Infrared Cross-modal Attacks in the Physical World", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023, accepted.
Xingxing Wei*, Ying Guo, Jie Yu, "Adversarial Sticker: A Stealthy Attack Method in the Physical World", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022, accepted.
Xingxing Wei*, Ying Guo, Jie Yu, Bo Zhang, "Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attack", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022, accepted.
Xingxing Wei*, Jie Yu, Yao Huang, "Infrared Adversarial Patches with Learnable Shapes and Locations in the Physical World", International Journal of Computer Vision (IJCV), 2023, accepted.
2. Defense: Mitigating the Trade-off between Natural Accuracy and Adversarial Robustness:
Deep neural networks (DNNs) achieve excellent performance on various complex tasks. However, recent studies have found that they face serious security risks, namely adversarial attacks: adding subtle, human-imperceptible perturbations to input data causes deep neural networks to produce wrong outputs. Current research often struggles with the trade-off between accuracy and adversarial robustness, as well as the trade-off between overall robustness and worst-class robustness (the robust fairness issue), which poses challenges for deploying DNNs in practical applications. Finding effective methods to mitigate these trade-offs is therefore an urgent and challenging issue in deep learning.
This direction aims to address the challenge of balancing the adversarial robustness of deep neural networks. The research covers the following three aspects: (1) To mitigate the trade-off from the network-architecture side, we propose a robust dynamic neural network based on a dynamic routing mechanism: the model is sensitive to the input data and adaptively selects the optimal inference path and weight parameters for each input. (2) To mitigate the trade-off between accuracy and adversarial robustness, we propose a multi-teacher adversarial knowledge distillation method: a teacher model with strong adversarial robustness and a teacher model with high accuracy guide the student's training on adversarial examples and clean examples, respectively. (3) To enhance robust fairness, we propose an anti-bias adversarial distillation method: hard and easy classes are guided by teacher labels with different degrees of smoothness, controlled by class-wise temperatures.
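The temperature-controlled soft labels in aspect (3) can be sketched as follows. This is a minimal illustration, not the method's actual implementation: `soft_labels`, `per_class_distill_targets`, and `kl_distill_loss` are hypothetical names, and the mapping from classes to temperatures is supplied by the caller.

```python
import numpy as np

def soft_labels(logits, temperature):
    """Temperature-scaled softmax: a higher temperature yields a smoother label."""
    z = logits / temperature
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def per_class_distill_targets(teacher_logits, labels, temps):
    """Build teacher soft labels whose smoothness depends on the true class:
    temps[c] is the temperature applied to samples whose label is c, so hard
    and easy classes can receive differently smoothed guidance."""
    return np.stack([soft_labels(l, temps[y]) for l, y in zip(teacher_logits, labels)])

def kl_distill_loss(student_logits, targets, temperature=1.0):
    """Mean KL divergence KL(teacher_target || student) used as the distillation loss."""
    p = np.stack([soft_labels(s, temperature) for s in student_logits])
    return float(np.mean(np.sum(
        targets * (np.log(targets + 1e-12) - np.log(p + 1e-12)), axis=1)))
```

Assigning a larger temperature to some classes flattens their teacher labels, which is the lever the anti-bias distillation uses to rebalance how strongly easy and hard classes constrain the student.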
Selected published papers:
Xingxing Wei, Caixin Kang, Yinpeng Dong, Zhengyi Wang, Shouwei Ruan, Yubo Chen, Hang Su, "Real-World Adversarial Defense against Patch Attacks based on Diffusion Model", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2025, accepted.
Xingxing Wei, Shiji Zhao, Bo Li*, "Revisiting the Trade-off between Accuracy and Robustness via Filters’ Weight Distribution", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024, accepted.
Xingxing Wei*, Songping Wang, Huanqian Yan, "Efficient Robustness Assessment via Adversarial Spatial-Temporal Focus on Videos", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023, accepted.
Shiji Zhao, Xizhe Wang, Xingxing Wei*, "Mitigating Accuracy-Robustness Trade-off via Balanced Multi-Teacher Adversarial Distillation", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024, accepted.
Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao, "Improving Fast Adversarial Training with Prior-Guided Knowledge", IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024, accepted.
Shiji Zhao, Xizhe Wang, Ranjie Duan, Xingxing Wei*, "Improving Adversarial Robust Fairness via Anti-bias Soft Label Distillation", NeurIPS 2024, accepted.