WACV2026 - Workshop on Generative, Adversarial and Presentation Attacks in Biometrics (GAPBio)
07 March 2026
Synthetic content creation has advanced rapidly in recent years due to deep learning. Newer architectures like Generative Adversarial Networks (GANs), Diffusion models, and Large Language Models (LLMs) can now produce ultra-realistic visual and textual content, from images and videos to highly coherent and persuasive text, challenging human perception and comprehension. While such realism is welcomed in sectors like entertainment and education, it also poses severe threats to secure access control systems in biometrics and to the integrity of digital information channels. Image and video manipulation attacks have evolved to defeat traditional biometric systems, with morphing attacks compromising multiple identities through a single manipulated image, and DeepFakes spreading misinformation or impersonating individuals. LLM-generated text further amplifies these risks by enabling sophisticated phishing, social engineering, and disinformation campaigns. Attack strategies now combine both traditional manipulation techniques and recent adversarial machine learning approaches (e.g., GANs, Diffusion, LLMs). To address these challenges, several governmental agencies are funding research for reliable detection and mitigation solutions.
The workshop will report advancements in the creation, evaluation, impact, and mitigation of adversarial attacks (both soft and hard attacks) on biometric systems. The workshop also targets submissions addressing the analysis and mitigation of function creep attacks. This half-day workshop is the seventh edition of the special session, previously held in conjunction with BTAS-2018, WACV-2020, WACV-2021, WACV-2022, WACV-2023, WACV-2024, and WACV-2025.
Papers are invited on the following topics, including but not limited to:
Physical attacks on biometric systems (e.g., mask, spoofing).
Image manipulation attacks in biometric verification and identification (e.g., PAD).
Video manipulation attacks affecting biometric systems.
Morphing attacks and their detection.
Generalizable and robust attack detection algorithms.
Forensic behavioral biometrics for identity verification.
Soft biometric cues for authenticity assessment of biometric data.
Multimedia forensics applied to biometric systems.
Integrity verification and authentication of digital content in biometrics.
Multimodal decision fusion for enhanced authenticity verification.
Function creep attacks compromising privacy in biometric systems.
Human perception and decision-making in biometric data authenticity verification.
Ethical, legal, and societal implications of emerging manipulations, including AI-generated content.
Case studies illustrating the above topics, including attacks leveraging GANs, Diffusion models, or LLMs.
Submission Guidelines:
ACKNOWLEDGEMENT: The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.
Papers presented at the WACV workshops will be published as part of the "WACV Workshops Proceedings" and should, therefore, follow the same presentation guidelines as the main conference. Workshop papers will be included in IEEE Xplore, but will be indexed separately from the main conference papers. Paper submission guidelines of WACV can be accessed through this link.
For review, a complete paper should be submitted using the review format and the guidelines provided in the author kit. All reviews are double-blind, so please be careful not to include any identifying information, including the authors’ names or affiliations.
Accepted papers will be allocated 8 pages in the proceedings. Please note that References/Bibliography at the end of the paper will NOT count toward the aforementioned page limit. That is, a paper can be up to 8 pages + the references.
The submission template can be downloaded (Overleaf template, ZIP Archive).
Please submit your papers under this CMT link.
Camera-ready Submission Guidelines:
To be announced by email
Important Dates
Workshop: The workshop will take place at WACV 2026 on 07 March 2026.
Full Paper Submission: Dec 15, 2025 (23:59 PST), extended from Nov 30, 2025 (Completed)
Acceptance Notification: Dec 30, 2025 (23:59 PST) (Completed)
Camera-Ready Paper: January 10, 2026 (23:59 PST)
Program (March 7, local time):
Location: Arizona Ballroom Salon 3-4
0900-0905
Welcome – Kiran Raja
0905-0950
Keynote-1 – Dr. Shiqi Yu
Title: Gait Recognition with Large Vision Models and Privacy-Preserving by Gait Editing
Abstract: This talk explores cutting-edge directions in gait recognition. First, we introduce how large vision models (LVMs) are revolutionizing the field. We will cover BigGait, which learns clean gait representations from LVMs by denoising irrelevant information such as clothing and background, and its successor BiggerGait, which further improves performance by effectively utilizing features from different layers of various LVMs. Second, we address the critical privacy concerns raised by high-accuracy gait recognition. We present SAFE, a framework that anonymizes pedestrians in videos by rewriting their biometric features. Instead of simple swapping or blurring, SAFE edits both the intrinsic body semantics and the extrinsic appearance in a continuous latent space. This talk connects the advancements in making recognition more powerful with the essential tools for making it privacy-preserving.
Bio: Dr. Shiqi Yu is an Associate Professor in the Department of Computer Science and Engineering at the Southern University of Science and Technology (SUSTech). His main research area is gait recognition, which he has worked on for more than 20 years. He created the CASIA-B gait database, which is widely used in gait recognition, and the OpenGait open-source project, which has become a major algorithm evaluation framework. He has published more than 100 papers on gait recognition in venues such as IEEE TPAMI, IEEE TIFS, IEEE TBIOM, PR, CVPR, ECCV, and IJCB.
0950-1010
StructFormer: Structure-Consistent Face De-Identification under Strong Privacy Constraints
Haini Zhu (Dalian University of Technology); Deepak Kumar Jain (Dalian University of Technology); Xudong Zhao (Dalian University of Technology); Muyu Li (Dalian University of Technology); Vitomir Struc (University of Ljubljana)*; Sumarga Kumar Sah Tyagi (Florida Agricultural and Mechanical University)
1010-1030
When Humans Judge Irises: Pupil Size Normalization as an Aid and Synthetic Irises as a Challenge
Mahsa Mitcheff (University of Notre Dame)*; Adam Czajka (University of Notre Dame)
1030-1100
Coffee Break
1100-1120
Robustness of Presentation Attack Detection in Remote Identity Validation Scenarios
Richard Plesh (IDSL)*; John Howard (IDSL); Yevgeniy Sirotin (IDSL); Jerry Tipton (IDSL); Arun Vemury (DHS S&T)
1120-1140
Moving Masks: A Preliminary Study on Face Presentation Attack On-The-Move
Raghavendra Ramachandra (Norwegian University of Science and Technology); Narayan Vetrekar (Goa University)*; Krishna Patel (Goa University); Marissa Ataide (Goa University); Sushma Venkatesh (Norwegian University of Science and Technology); Rajendra Gad (Goa University)
1140-1225
Keynote-2 - Marija Ivanovska
Title: Learning Beyond Known Attacks: Robust and Interpretable Face Morphing Detection
Abstract: Face morphing attacks compromise biometric verification by blending multiple identities into a single image, allowing different individuals to authenticate under the same biometric reference. Their scarcity, diversity, and continual evolution limit access to representative training data, making supervised detectors brittle and prone to overfitting to specific artifact patterns rather than learning generalizable manipulation cues.
This talk reframes morph detection as a representation modeling problem, centered on learning the bona fide face manifold and detecting deviations from it. I will discuss one-class learning for modeling authentic faces, self-supervised artifact simulation to improve cross-dataset generalization under privacy constraints, and the use of foundation and multimodal language models for semantically grounded, interpretable, and zero-shot forensic analysis.
Bio: Marija Ivanovska is a Research Assistant at the Faculty of Electrical Engineering, University of Ljubljana, Slovenia, and a member of the Laboratory for Machine Intelligence. Her research focuses on computer vision, anomaly detection, biometric security, data privacy, and foundation models for applied AI. She has held visiting positions as a Visiting Scholar at Johns Hopkins University and as a Visiting Researcher at Queensland University of Technology, fostering international research collaborations. Marija has authored numerous publications in leading peer-reviewed journals and top-tier conferences in computer vision and biometrics. She actively contributes to the research community by serving on program committees and organizing workshops and special sessions at venues such as WACV, BMVC, and IJCB. She is a member of the IEEE Information Forensics and Security Technical Committee and serves on the IEEE Biometrics Council Webinar Committee.