Workshop on Manipulation, Adversarial and Presentation Attacks in Biometrics (MAP-A)
WACV 2022 - January 4, 2022 (Virtual)
Synthetic content creation has advanced rapidly in recent years with new developments in deep learning. Architectures such as Generative Adversarial Networks (GANs) can now produce ultra-realistic content with perceptually convincing geometry and surface texture, challenging human perception. While such realistic content is welcome in the entertainment sector, it can pose severe threats to secure access control in biometric applications. Image and video manipulation attacks have now matured to the point where they can defeat biometric systems. Another simple yet effective attack is the morphing attack, which can compromise multiple identities using a single manipulated image. At the same time, approaches such as Face2Face, Neural Textures and DeepFakes can have a large negative impact on digital information channels. These attacks make use of both traditional manipulation approaches and recent adversarial machine learning approaches (e.g., GANs). Several governmental agencies have started to seek reliable solutions to combat this challenge by funding projects such as DARPA MediFor (SAVI) and IARPA Odin (BATL).
This Workshop on Manipulation, Adversarial and Presentation Attacks in Biometrics (MAP-A) at WACV 2022 is being organized to report advances in the creation, evaluation, impact and mitigation of adversarial attacks on biometric systems. The workshop also targets submissions addressing the analysis and mitigation of function creep attacks. This half-day workshop is the fourth edition of the event, previously held in conjunction with BTAS 2018 (Los Angeles, CA, USA), BTAS 2019 (Tampa, FL, USA) and WACV 2020 (Snowmass, CO, USA).
Papers are invited on, but not limited to, the following topics:
Physical attacks on biometric systems.
Image manipulation attacks in biometric verification and identification (e.g., PAD).
Video manipulation attacks.
Morphing attacks and detection.
Generalizable attack detection algorithms.
Forensic behavioral biometrics.
Soft biometric cues for authenticity verification of biometric data.
Multimedia forensics in biometrics.
Integrity verification and authentication of digital content in biometrics.
Combination of multimodal decisions for authenticity verification in biometrics.
Function creep attacks affecting the privacy of biometric systems.
Human perception and decisions in biometric data authenticity verification.
Ethical and societal implications of emerging manipulations.
Case studies based on the aforementioned topics.
Papers presented at WACV workshops will be published as part of the "WACV Workshop Proceedings" and should therefore follow the same guidelines as the main conference. Workshop papers will be included in IEEE Xplore, but will be indexed separately from the main conference papers. The paper submission guidelines of WACV can be accessed through this link.
For review, a complete paper should be submitted using the for_review format and the guidelines provided in the author kit. All reviews are double-blind, so please be careful not to include any identifying information, such as the authors' names or affiliations.
Accepted papers will be allocated 8 pages in the proceedings. Please note that References/Bibliography at the end of the paper will NOT count toward the aforementioned page limit. That is, a paper can be up to 8 pages + the references.
Please submit your papers through this CMT link.
Camera-ready Submission Guidelines:
To be announced by email.
Workshop: The workshop will take place at WACV 2022 on January 4, 2022.
Full Paper Submission: 11th November, 2021 (23:59 PST) (extended from 3rd November; no further extension)
17th November, 2021 (23:59 PST) (extended from 12th November)
19th November, 2021 (23:59 PST) (extended from 15th November; no further extension)
The workshop will be held on January 4, 2022.
Because of the worldwide COVID-19 situation, we made the difficult decision to run the workshop remotely. The MAP-A workshop at WACV 2022 will therefore be held virtually.
All program times are in HST (Hawaii Standard Time).
9:00 AM Opening session
9:10 AM Keynote Talk: Wael AbdAlmageed (USC Information Sciences Institute)
Title: Biometrics Under Attack – Where Do We Go From Here?
10:10 AM Oral Session I
10:10 AM - Synthesizing Face Images from Match Scores. Authors: Thomas Swearingen (Michigan State University); Arun Ross (Michigan State University)
10:30 AM - Powerful Physical Adversarial Examples Against Practical Face Recognition Systems. Authors: Inderjeet Singh (NEC); Toshinori Araki (NEC); Kazuya Kakizaki (NEC Corporation)
10:50 AM - Morph Detection Enhanced by Structured Group Sparsity. Authors: Poorya Aghdaie (West Virginia University); Baaria A Chaudhary (West Virginia University); Sobhan Soleymani (West Virginia University); Jeremy Dawson (West Virginia University); Nasser Nasrabadi (West Virginia University)
11:10 AM Coffee break
11:20 AM Keynote Talk: Luisa Verdoliva (University Federico II of Naples, Italy)
Title: Deepfake detection: state-of-the-art and future directions
12:20 PM Oral Session II
12:20 PM - OTB-morph: One-Time Biometrics via Morphing applied to Face Templates. Authors: Mahdi Ghafourian (Universidad Autónoma de Madrid); Julian Fierrez (Universidad Autónoma de Madrid); Ruben Vera-Rodriguez (Universidad Autónoma de Madrid); Ignacio Serna (Universidad Autónoma de Madrid); Aythami Morales (Universidad Autónoma de Madrid)
12:40 PM - Saliency-Guided Textured Contact Lens-Aware Iris Recognition. Authors: Lucas Parzianello (University of Notre Dame); Adam Czajka (University of Notre Dame)
1:00 PM - A Personalized Benchmark for Face Anti-spoofing. Authors: Davide Belli (Qualcomm AI Research); Debasmit Das (Qualcomm); Bence Major (Qualcomm AI Research); Fatih Porikli (Qualcomm AI Research)
1:20 PM Closing Session
1:30 PM End of workshop
Deepfake detection: state-of-the-art and future directions
Talk abstract: In recent years there have been astonishing advances in AI-based synthetic media generation. Thanks to deep learning-based approaches it is now possible to generate data with a high level of realism. While this opens up new opportunities for the entertainment industry, it simultaneously undermines the reliability of multimedia content and supports the spread of false or manipulated information on the Internet. This is especially true for human faces: it is now easy to create new identities or to change only specific attributes of a real face in a video, producing so-called deepfakes. In this context, it is important to develop automated tools to detect manipulated media in a reliable and timely manner. This talk will describe the most reliable deep learning-based approaches for detecting deepfakes, with a focus on those that enable domain generalization. Results will be presented on challenging datasets with reference to realistic scenarios, such as the dissemination of manipulated images and videos on social networks. Finally, possible new directions will be outlined.
Bio: Luisa Verdoliva is an Associate Professor at University Federico II of Naples, Italy, where she leads the Multimedia Forensics Lab. In 2018 she was a visiting professor at Friedrich-Alexander-University (FAU), and in 2019-2020 she was a visiting scientist at Google AI in San Francisco. Her scientific interests are in the field of image and video processing, with main contributions in the area of multimedia forensics. She has published over 120 academic papers, including 45 journal papers. She is the PI for University Federico II of Naples in the DISCOVER (a Data-driven Integrated Approach for Semantic Inconsistencies Verification) project funded by DARPA under the SEMAFOR program (2020-2024). She has actively contributed to the academic community, serving as Technical Chair of the 2019 IEEE Workshop on Information Forensics and Security and of the 2021 ACM Workshop on Information Hiding and Multimedia Security, and as Area Chair of IEEE ICIP since 2017. She was also co-Chair of the IEEE CVPR Media Forensics Workshop in both 2020 and 2021. She is on the Editorial Board of IEEE Transactions on Information Forensics and Security and IEEE Signal Processing Letters, and has been Guest Editor for the IEEE Journal of Selected Topics in Signal Processing. Dr. Verdoliva is Chair of the IEEE Information Forensics and Security Technical Committee and vice-Chair of the EURASIP Signal and Data Analytics for Machine Learning Special Area Team. She is the recipient of a 2018 Google Faculty Award for Machine Perception and a TUM-IAS Hans Fischer Senior Fellowship (2020-2023). She was elevated to the grade of IEEE Fellow in January 2021.
Biometrics Under Attack – Where Do We Go From Here?
Talk abstract: Biometrics (face, iris and fingerprint) have recently become the ubiquitous authentication and access control method, all the way from personal devices, such as smartphones and laptops, to airport security (e.g., CLEAR and IDEMIA). However, these biometric systems present a wide attack surface, with multiple opportunities for adversaries to use physical presentation attacks as well as digital attacks exploiting synthetic and manipulated biometrics, such as deepfakes. In the first part of the talk, I will briefly discuss the vulnerabilities of biometric systems and the challenges of ensuring their security in the age of deep learning. I will then discuss state-of-the-art approaches for detecting physical and digital attacks, including unknown or yet-to-be-seen attacks. In the second part of the talk, I will wrap up with conclusions and thoughts on future directions for biometric security and ethical artificial intelligence in general.
Bio: Dr. AbdAlmageed is a Research Associate Professor at the Department of Electrical and Computer Engineering and a Research Director at the Information Sciences Institute, both units of USC Viterbi School of Engineering. From 2004 to 2013, he was a research scientist with the University of Maryland at College Park. He earned his B.S. in electrical engineering in 1994 and his M.S. in computer engineering in 1997 from Mansoura University in Egypt. He also earned a graduate software engineering diploma in 1997 from the Information Technology Institute in Egypt via a scholarship granted to distinguished graduates from Egyptian universities. In 2003, he earned his Ph.D. with Distinction in computer engineering from the University of New Mexico, where he was also awarded the Outstanding Graduate Student award. His research interests include representation learning, debiasing and fair representations, multimedia forensics and visual misinformation identification (such as deepfake and image manipulation detection) and face recognition and biometric anti-spoofing. Dr. AbdAlmageed leads several multi-institution research efforts, including DARPA’s MediFor, GARD and LwLL and IARPA’s Janus, Odin and BRIAR. He has over 90 publications in top computer vision, machine learning and biometrics conferences and journals, including CVPR, NeurIPS, ICCV, ECCV, ACM MM, PAMI, TBIOM and ICB. Dr. AbdAlmageed is the recipient of the 2019 USC Information Sciences Institute Achievement Award. His research has also been featured in Forbes, Glamour, Fox News, Time For Kids and PCMag.