WACV2025 - Workshop on Manipulation, Generative, Adversarial, and Presentation Attacks in Biometrics
4 March 2025
Synthetic content creation has advanced rapidly in recent years with new developments in deep learning. Architectures such as Generative Adversarial Networks (GANs) can now produce ultra-realistic content with perceptually pleasing geometry and surface texture, challenging human perception. While such realistic content is welcome in the entertainment sector, it can pose severe threats to secure access control in biometric applications. Image and video manipulation attacks have evolved to the point where they can defeat biometric systems. Another simple yet effective attack is the morphing attack, which can compromise multiple identities using a single manipulated image. At the same time, approaches such as Face2Face, Neural Textures, and DeepFakes can have a large negative impact on digital information channels. These attacks exploit both traditional manipulation techniques and recent adversarial machine learning approaches (e.g., GANs). Several governmental agencies have started to seek reliable solutions to combat this challenge by funding projects such as DARPA MediFor SAVI, DARPA ODIN BATL, and EU H2020 iMARS.
This Workshop on Manipulation, Generative, Adversarial, and Presentation Attacks in Biometrics at WACV 2025 is being organized to report advances in the creation, evaluation, impact, and mitigation of adversarial attacks on biometric systems. The workshop also welcomes submissions addressing the analysis and mitigation of function creep attacks. In this year's edition, the scope additionally covers the rising impact of synthetic realities, the mechanisms by which they can affect biometric systems, and protection strategies to mitigate their adversarial use. This half-day workshop is the seventh edition of the special session, previously held in conjunction with BTAS 2018 (Los Angeles, CA, USA), BTAS 2019 (Tampa, FL, USA), WACV 2020, WACV 2022, WACV 2023, and WACV 2024.
Papers are invited on the following topics, including but not limited to:
Physical attacks on biometric systems.
Image manipulation attacks in biometric verification and identification (e.g., PAD).
Video manipulation attacks.
Morphing attacks and detection.
Generalizable attack detection algorithms.
Forensic behavioral biometrics.
Soft biometric cues for authenticity verification of biometric data.
Multimedia forensics in biometrics.
Integrity verification and authentication of digital content in biometrics.
Combination of multimodal decisions for authenticity verification in biometrics.
Function creep attacks affecting the privacy of biometric systems.
Human perception and decisions in biometric data authenticity verification.
Ethical and societal implications of emerging manipulations.
Case studies on the aforementioned topics.
Submission Guidelines:
Papers presented at the WACV workshops will be published as part of the WACV Workshops Proceedings and should therefore follow the same presentation guidelines as the main conference. Workshop papers will be included in IEEE Xplore but will be indexed separately from the main conference papers. The WACV paper submission guidelines can be accessed through this link.
For review, a complete paper should be submitted using the review format and the guidelines provided in the author kit. Reviewing is double-blind, so please be careful not to include any identifying information, such as the authors' names or affiliations.
Accepted papers will be allocated 8 pages in the proceedings. Please note that the references/bibliography at the end of the paper do NOT count toward this page limit; that is, a paper may be up to 8 pages plus references.
The submission template can be downloaded (Overleaf template, ZIP Archive).
Please submit your papers under this CMT link.
Camera-ready Submission Guidelines:
To be announced by email
Important Dates
Workshop: The workshop will take place at WACV 2025 on 4 March 2025
Full Paper Submission: December 16, 2024 (23:59 PST) (EXTENDED - no further extensions)
Acceptance Notice: January 6, 2025 (23:59 PST)
Camera-Ready Paper: January 8, 2025 (23:59 PST)
Program:
4 March 2025 in Salon J.
Tentative program, subject to change - the final program will be published on 8 February.
Chair: Raghavendra Ramachandra (Norwegian University of Science and Technology - NTNU)
13:00 - 13:50
🔹 Keynote Speech: Siwei Lyu: Rubber Hits the Road: Lessons Learned from DeepFake Detection in the Real World
🟢 Session 1: Deepfake Detection & Security (Chair: Raghavendra Ramachandra)
14:00 - 15:00
14:00 - 14:15: Extracting Local Information from Global Representations for Interpretable Deepfake Detection
Elahe Soltandoost, Richard Plesh, Stephanie Schuckers, Peter Peer, Vitomir Štruc
14:15 - 14:30: Wavelet-Driven Generalizable Framework for Deepfake Face Forgery Detection
Lalith Baru, Rohit Boddeda, Shilhora Akshay, Mohan Gajapaka
14:30 - 14:45: Face Detection and Recognition Under Real-World Scenarios – Dealing with Deepfake Incidents and Malicious Data Distortions
Ewelina Bartuzi-Trokielewicz, Alicja Martinek, Adrian Kordas
14:45 - 15:00: Transferable Adversarial Attacks on Audio Deepfake Detection
Muhammad Umar Farooq, Awais Khan, Kutub Uddin, Khalid Malik
☕ Coffee Break: 15:00 - 15:45
🔵 Session 2: Face Morphing & Attack Detection (Chair: Raghavendra Ramachandra)
15:45 - 16:30
15:45 - 16:00: Exploring ChatGPT for Face Presentation Attack Detection in Zero and Few-Shot In-Context Learning
Alain Komaty, Hatef Otroshi, Anjith George, Sébastien Marcel
16:00 - 16:15: MADation: Face Morphing Attack Detection with Foundation Models
Eduarda Caldeira, Guray Ozgur, Tahar Chettaoui, Marija Ivanovska, Peter Peer, Fadi Boutros, Vitomir Struc, Naser Damer
16:15 - 16:30: Metric for Evaluating Performance of Reference-Free Demorphing Methods
Nitish Shukla, Arun Ross
🔚 16:30 - Workshop Concludes
Invited speaker: Siwei Lyu, Ph.D.
Title: Rubber Hits the Road: Lessons Learned from DeepFake Detection in the Real World
Abstract: DeepFake detection is a growing field that has received much attention in the research community. We built DeepFake-o-meter, a platform that bridges deepfake detection methods from research with real-world use. Operating this platform has given us a wealth of information about how detection methods are used and perceived by users, exposing their limitations and suggesting future directions.
About the speaker: Siwei Lyu is a SUNY Empire Innovation Professor of Computer Science and Engineering and Co-Director of the Center for Information Integrity at the University at Buffalo. His main research interests are multimedia forensics, AI, and social cybersecurity. He obtained his Ph.D. from Dartmouth College under the supervision of Prof. Hany Farid. He is a Fellow of the IEEE and the IAPR.