Speaker: Dr. Changqing Luo, University of Houston
Time: March 27th, 2026, 1:00 pm - 2:30 pm
Room: E297L, Discovery Park, UNT
Coordinator: Dr. Haihua Chen
Abstract: Developing AI (Artificial Intelligence) models requires specialized AI expertise and domain-specific knowledge, while training the models demands a large volume of high-quality training data and powerful computing resources. As a result, trained models carry significant commercial and academic value and are treated as important intellectual property by their owners. However, AI models are increasingly vulnerable to theft, as they can be easily copied, extracted, and redistributed without authorization. To address this threat, a prominent line of research focuses on adversarial-example-based model fingerprinting, which leverages the key observation that decision boundaries are highly model-specific. Despite its promise, this approach faces fundamental challenges in simultaneously achieving both robustness and uniqueness. In this talk, we aim to advance adversarial-example-based model fingerprinting by enhancing both of these essential properties.
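The core idea above can be illustrated with a toy sketch (not the speaker's method): fingerprint points are placed just across the source model's decision boundary, so an unauthorized copy reproduces the source's labels on them while an independently trained model does not. All model weights, function names, and parameters below are hypothetical, and simple linear classifiers stand in for real networks and real adversarial-example generation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifiers predicting (w . x > 0). The "source" model is the
# one being fingerprinted; "copy" shares its weights, "independent" does not.
# (All weights here are made up for illustration.)
source_w = np.array([1.0, 0.5])
copy_w = source_w.copy()
independent_w = np.array([-0.3, 1.2])

def predict(w, X):
    return (X @ w > 0).astype(int)

def fingerprint_points(w, n=200, eps=0.05):
    """Sample random inputs, then nudge each one just across w's decision
    boundary -- a crude stand-in for adversarial-example generation."""
    X = rng.normal(size=(n, 2))
    margin = X @ w  # unnormalized signed distance to the boundary
    # Step along w so each point lands eps past the boundary.
    step = -(margin + np.sign(margin) * eps) / (w @ w)
    return X + np.outer(step, w)

def match_rate(fp, w_source, w_suspect):
    """Fraction of fingerprint points where suspect and source agree."""
    return float(np.mean(predict(w_source, fp) == predict(w_suspect, fp)))

fp = fingerprint_points(source_w)
copy_score = match_rate(fp, source_w, copy_w)          # copy agrees everywhere
indep_score = match_rate(fp, source_w, independent_w)  # independent model does not
```

Because the fingerprint points sit on the model-specific part of the input space (right at the boundary), a high match rate flags a stolen copy while honest, independently trained models score much lower; the robustness and uniqueness challenges the abstract mentions correspond to keeping these two scores separated even after the copy is fine-tuned or otherwise modified.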
Bio of the speaker: Dr. Changqing Luo is currently an Assistant Professor with the Department of Information Science Technology and the Department of Electrical and Computer Engineering at the University of Houston, USA. His current research focuses on AI security. He received his Ph.D. degree in computer engineering from Case Western Reserve University, USA. He was previously an Assistant Professor with the Department of Computer Science at Virginia Commonwealth University (VCU) and a Lecturer with the School of Computer Science and Technology at Huazhong University of Science and Technology (HUST). He is a member of IEEE and ACM.