Welcome to the Face Anti-spoofing Workshop and Challenge@CVPR2024

Morning of June 17th, 2024, at CVPR


5th ChaLearn Face Anti-spoofing Workshop and Challenge@CVPR2024

Introduction

In recent years, the security of face recognition systems has been increasingly threatened. Face Anti-spoofing (FAS) is essential for protecting face recognition systems against a variety of attacks. To attract researchers and push forward the state of the art in Face Presentation Attack Detection (PAD), we organized four editions of the Face Anti-spoofing Workshop and Competition at CVPR 2019, CVPR 2020, ICCV 2021, and CVPR 2023, which together attracted more than 1,200 teams from academia and industry and greatly advanced algorithms for many challenging problems. In addition to physical presentation attacks (PAs), such as print, replay, and 3D mask attacks, digital face forgery attacks (FAs) remain a serious threat to the security of face recognition systems. FAs manipulate faces through pixel-level digital editing, such as identity transformation, facial expression transformation, attribute editing, and face synthesis. At present, detection algorithms for these two types of attacks, "Face Anti-spoofing (FAS)" and "Deep Fake/Forgery Detection (DeepFake)", are still studied as independent computer vision tasks, so no single unified detection model can respond to both types of attacks simultaneously. To give continuity to our efforts on these relevant problems, we are organizing the 5th Face Anti-Spoofing Workshop@CVPR 2024. We regard the difference in attack clues as the main reason for the incompatibility between the two detection tasks: spoofing clues from physical presentation attacks are usually caused by color distortion, screen moiré patterns, and production traces, whereas forgery clues from digital editing attacks are usually changes in pixel values. The fifth competition aims to encourage the exploration of characteristics common to both types of attack clues and to promote research on unified detection algorithms. With these difficulties and challenges in mind, we have collected a Unified physical-digital Attack dataset, named UniAttackData, for this fifth edition to support algorithm design and the competition; it contains 1,800 participants with 2 physical and 12 digital attack types, respectively, for a total of 28,706 videos. For more information about the UniAttackData dataset, please refer to [1].


Challenge website (Track 1: Unified Physical-Digital Face Attack Detection): [Link] 

Challenge website (Track 2: Snapshot Spectral Imaging Face Anti-spoofing): [Link]

Workshop Paper Submission: Author Guidelines (8-page CVPR format, [Link]) apply to workshop papers; submit via [Link].

Ref:

[1] Hao Fang, Ajian Liu, Haocheng Yuan, Junze Zheng, Dingheng Zeng, Yanhong Liu, Jiankang Deng, Sergio Escalera, Xiaoming Liu, Jun Wan, and Zhen Lei. Unified Physical-Digital Face Attack Detection. 2024. [Link]

Workshop Schedule (Pending)


Important Competition Dates (Pending):

Keynote Speakers:

Hong-Kai Xiong is a Cheung Kong Distinguished Professor at Shanghai Jiao Tong University (SJTU). In 2014, he received the National Science Fund for Distinguished Young Scholars from the National Natural Science Foundation of China (NSFC). In 2017, he received the Science and Technology Innovative Leader Talent Award in the Ten Thousand Talents Program. Currently, he is with both the Department of Electronic Engineering and the Department of Computer Science and Engineering, and he has served as the Vice Dean of Zhiyuan College at SJTU. He received his Ph.D. degree from SJTU in 2003 and has been with the Department of Electronic Engineering at SJTU since then. From 2007 to 2008, he was a Research Scholar in the Department of Electrical and Computer Engineering at Carnegie Mellon University (CMU), Pittsburgh, PA, USA. During 2011-2012, he was a Scientist with the Division of Biomedical Informatics at the University of California, San Diego (UCSD), CA, USA. His research interests mainly span multimedia signal processing, multimedia communication and networking, image and video coding, computer vision, biomedical informatics, and machine learning. He has published over 300 refereed journal and conference papers, including about 90 IEEE/ACM Transactions/Journal papers and 110 papers at high-ranked conferences such as ICML, ICLR, and CVPR. He holds nearly 70 patents granted in China and the United States. He was a co-author of the Top Paper Award winner at ACM Multimedia (ACM MM 2022). He has received numerous awards, including the Shanghai Youth Science and Technology Distinguished Accomplishment Award, the Shanghai Academic Research Leader Talent Award, the First Prize of the Shanghai Science and Technology Progress Award, the First Prize of Natural Science of the Chinese Institute of Electronics, and the First Prize of the Shanghai Technological Innovation Award. He has served as an Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT), and as an Area Chair and technical program committee member for many prestigious academic conferences, such as IEEE CVPR, ACM MM, ICASSP, ISCAS, ICCV, and ICPR.

Karthik Nandakumar is an Associate Professor in the Computer Vision department at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). Prior to joining MBZUAI, he was a Research Staff Member at IBM Research – Singapore from 2014 to 2020 and a Scientist at the Institute for Infocomm Research, A*STAR, Singapore from 2008 to 2014. He received his B.E. degree (2002) from Anna University, Chennai, India; M.S. degrees in Computer Science (2005) and Statistics (2007) and a Ph.D. degree in Computer Science (2008) from Michigan State University; and an M.Sc. degree in Management of Technology (2012) from the National University of Singapore. His primary research interests include computer vision, machine learning, biometric recognition, applied cryptography, and blockchain. Specifically, he is interested in deep learning algorithms for biometrics and video surveillance applications, as well as security, privacy, and trust-related issues in machine learning. He has co-authored two books, Introduction to Biometrics (Springer, 2011) and Handbook of Multibiometrics (Springer, 2006).

Karthik Nandakumar has received a number of awards, including the 2008 Fitch H. Beach Outstanding Graduate Research Award from the College of Engineering at Michigan State University, the Best Paper Award from the Pattern Recognition journal (2005), the Best Scientific Paper Award (Biometrics Track) at ICPR 2008, and the 2010 IEEE Signal Processing Society Young Author Best Paper Award. He is a Senior Area Editor of IEEE Transactions on Information Forensics and Security (T-IFS); he was an Associate Editor of T-IFS from 2015 to 2019 and received its 2019 Outstanding Editorial Board Member award. He is also an Associate Editor for the Elsevier journal Pattern Recognition and a Distinguished Industry Speaker for the IEEE Signal Processing Society. In the recent past, he has served as Vice President for Education of the IEEE Biometrics Council and as an elected member of the IEEE Signal Processing Society Technical Committee on Information Forensics and Security. He is a Senior Member of the IEEE.

Xiang Xu is a Senior Applied Scientist at AWS AI Labs, specializing in identity and security research, with a focus on leading the product development of liveness detection technologies. Before joining Amazon, he obtained his Ph.D. from the University of Houston in 2019 under the mentorship of Professor Ioannis A. Kakadiaris. With over a decade of experience in biometrics research, Dr. Xu has contributed to advancements in detection, alignment, 3D face reconstruction, liveness detection, and recognition. His expertise extends to multi-modal computer vision, encompassing image and text retrieval and domain adaptation. Currently, his research interests have expanded to include multi-modal foundation models, generative models, the security of foundation models, and responsible AI. He and his team are committed to developing robust systems that counter presentation, deepfake, and adversarial attacks in order to protect digital identity and maintain trust. He has also been a reviewer for prestigious computer vision conferences, such as CVPR, ECCV, and ICCV, for many years.

Organizers:

Jun Wan (万军, Primary Contact), Institute of Automation, Chinese Academy of Sciences (CASIA), China, jun.wan@ia.ac.cn

Ajian Liu, Institute of Automation, Chinese Academy of Sciences (CASIA), China, ajian.liu@ia.ac.cn

Jiankang Deng, Insightface, jiankangdeng@gmail.com

Shengjin Wang, Tsinghua University, Beijing, wgsgj@tsinghua.edu.cn

Ya-Li Li, Tsinghua University, Beijing, liyali13@tsinghua.edu.cn

Sergio Escalera, Computer Vision Center (UAB) and University of Barcelona, Spain, sergio@maia.ub.es 

Hugo Jair Escalante, INAOE, ChaLearn, Mexico, hugojair@inaoep.mx

Isabelle Guyon, Université Paris-Saclay, France and ChaLearn, Berkeley, California, USA, guyon@chalearn.org

Zhen Lei, Institute of Automation, Chinese Academy of Sciences (CASIA), China, zlei@nlpr.ia.ac.cn

Organizers' Institutions: