Neural Invisibility Cloak: Concealing Adversary in Images via Compromised AI-driven Image Signal Processing
Wenjun Zhu, Xiaoyu Ji, Xinfeng Li, Qihang Chen, Kun Wang, Xinyu Li, Ruoyan Xu, Wenyuan Xu
USSLAB, Zhejiang University
USENIX Security '25
What is Neural Invisibility Cloak?
The Neural Invisibility Cloak (NIC) is an attack on AI-powered camera systems. By covertly embedding a neural backdoor into AI-driven Image Signal Processing (AISP) models, NIC erases a person wearing a specially crafted cloak from photos and videos and fills the region with a realistic background, so that neither humans nor downstream AI notice anything amiss. Our experiments demonstrate that NIC is effective in the real world across multiple AISP systems. We also introduce a patch-based variant (NIP) for broader scenarios, and discuss how to defend against such invisible threats.
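The erase-and-inpaint behavior described above can be illustrated with a toy sketch. This is not the paper's method: in NIC the backdoor is learned inside the AISP model's weights, whereas here the "trigger detection" and "inpainting" are hand-coded stand-ins (`detect_trigger`, `backdoored_isp`, and the fixed trigger intensity are all hypothetical) just to show the input/output contract of a compromised ISP.

```python
import numpy as np

def detect_trigger(raw, trigger_value=0.9, thresh=0.05):
    # Hypothetical trigger detector: flags pixels close to the cloak's
    # signature intensity. In NIC the trigger response is learned by the
    # backdoored network, not hand-coded like this.
    return np.abs(raw - trigger_value) < thresh

def backdoored_isp(raw, background):
    """Toy stand-in for a compromised AISP model.

    Clean pixels receive ordinary processing (here: a simple gain and
    clip); trigger pixels are silently replaced with background content,
    mimicking the erase-and-inpaint behavior NIC trains into the model.
    """
    mask = detect_trigger(raw)
    processed = np.clip(raw * 1.2, 0.0, 1.0)  # stand-in for normal ISP work
    processed[mask] = background[mask]        # backdoor: hide the cloak region
    return processed

# Demo: a 4x4 "raw" frame with a 2x2 cloak patch (intensity 0.9)
raw = np.full((4, 4), 0.5)
raw[1:3, 1:3] = 0.9
background = np.full((4, 4), 0.3)  # assumed plausible background content

out = backdoored_isp(raw, background)
# Non-cloak pixels are processed normally; cloak pixels become background.
```

The key point the sketch conveys is that the model's output looks like a faithfully processed image everywhere except the trigger region, which is why the manipulation evades both human inspection and downstream AI.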
Interactive Comparison Demos
For more details, see the paper, code, and presentation.
BibTeX Citation
@inproceedings{zhu2025neural,
  title     = {Neural Invisibility Cloak: Concealing Adversary in Images via Compromised AI-driven Image Signal Processing},
  author    = {Zhu, Wenjun and Ji, Xiaoyu and Li, Xinfeng and Chen, Qihang and Wang, Kun and Li, Xinyu and Xu, Ruoyan and Xu, Wenyuan},
  booktitle = {34th USENIX Security Symposium (USENIX Security 25)},
  year      = {2025}
}