Workshop on Unmasking (Truly) Deepfakes: Not Only Video Deepfakes, at the IEEE Conference on AI 2025
Title: Face Recognition Progression: Synthetic Images to Vulnerabilities – Half Day
Organizers: Akshay Agarwal (IISER Bhopal) and Chaitanya Roygaga (Lehigh University)
Abstract: Face recognition is one of the most widely deployed technologies in the world: it locates faces in images or videos and establishes their identities by comparing them against an already enrolled database of faces. The distributions of the face images used to train recognition models can differ substantially from one another, especially for images with a degree of noise (corrupted) or images not captured in the physical environment (synthetic). This added complexity, relative to clean physical-world images, hinders effective feature learning. At a broad level, face swapping, morphing, and deepfake generation perform similar operations on face images, yet they yield drastically different attack success rates, both in fooling face recognition models and in evading attack detection algorithms. Further, the quality and the size of face images can significantly affect the features learned by the intermediate layers of a deep network. A model's final performance, i.e., its softmax output, can be viewed as the outcome of a combination of classifications (or misclassifications) across its various layers. This tutorial aims to shed light on these intriguing phenomena, which may improve model explainability, by observing and describing classification performance through the network's layers. As data noise increases, deeper networks are generally preferred for a more robust model. We present a framework that visualizes the features of input faces at multiple levels of a selected model and describes their correct (or incorrect) classification.
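The layer-wise analysis the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration only: a toy NumPy MLP stands in for a deep face model, and a nearest-centroid probe classifies each input at every depth. The actual framework, model architecture, and data used in the tutorial are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer feature extractor with random weights (a stand-in for a deep face model).
layers = [rng.standard_normal((64, 32)),
          rng.standard_normal((32, 16)),
          rng.standard_normal((16, 8))]

def forward_with_activations(x):
    """Return the activation of every layer for input x (ReLU MLP)."""
    acts, h = [], x
    for w in layers:
        h = np.maximum(h @ w, 0.0)  # ReLU
        acts.append(h)
    return acts

def layerwise_prediction(probe_centroids, x):
    """Classify x at every depth via the nearest class centroid in that layer's feature space."""
    preds = []
    for act, centroids in zip(forward_with_activations(x), probe_centroids):
        dists = np.linalg.norm(centroids - act, axis=1)
        preds.append(int(np.argmin(dists)))
    return preds

# Build per-layer class centroids from a small "gallery" of two identities.
gallery = {0: rng.standard_normal((5, 64)) + 1.0,
           1: rng.standard_normal((5, 64)) - 1.0}
probe_centroids = []
for li in range(len(layers)):
    cents = []
    for cls in (0, 1):
        acts = np.stack([forward_with_activations(x)[li] for x in gallery[cls]])
        cents.append(acts.mean(axis=0))
    probe_centroids.append(np.stack(cents))

# Compare a clean probe image of identity 0 with a noise-corrupted copy,
# to see where along the depth the classification (potentially) flips.
clean = rng.standard_normal(64) + 1.0
noisy = clean + 0.8 * rng.standard_normal(64)
print("clean per-layer predictions:", layerwise_prediction(probe_centroids, clean))
print("noisy per-layer predictions:", layerwise_prediction(probe_centroids, noisy))
```

In a real setting the random MLP would be replaced by a trained recognition network (e.g., via forward hooks in a deep learning framework), and the per-layer predictions reveal at which depth noise or synthetic artifacts begin to corrupt the identity decision.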
Special Session in IJCB 2024: Face Recognition in the Era of Synthetic Images and Its Boundless Vulnerabilities (SIBV-SS)
The vulnerabilities of face recognition algorithms are numerous; hence, this special session covers a wide range of topics highlighting both the positive and the negative factors affecting face recognition. The topics include deepfakes, the use of synthetic media for privacy-preserving learning, facial attribute anonymization, adversarial attacks, morphing, and presentation attacks. Face recognition has proven to be one of the most effective technologies for establishing identity; however, malicious intruders and advances in automated generation have produced several anomalies that can trick these systems. The literature rarely treats these different anomalies under one roof, which limits our understanding of how they function, and of features that are not inherently adversarial but act as adversaries due to poor network learning. This session aims to provide a comprehensive understanding of face recognition algorithms and of how different factors contribute to their success (e.g., synthetic images) or failure (e.g., adversarial attacks). Because the session draws on significantly interdisciplinary concepts, we believe it can foster a high-level understanding of face recognition. For example, the generation of deepfakes and of adversarial attacks differs significantly, yet in the end both manipulate the deep-level features of deep face recognition networks. Understanding how these factors operate can help in developing universally robust deep face recognition. The proposed special session is critical and highly relevant to the audience of the main conference; we therefore invite the community to actively participate and submit high-quality papers that help understand and protect the integrity of deep face recognition networks.
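To make concrete the point that attacks as different as deepfakes and adversarial perturbations both ultimately act on learned features, here is a minimal FGSM-style sketch against a toy logistic classifier in NumPy. This is a hypothetical illustration under stated assumptions: no real face model, dataset, or attack configuration from the session is implied, and the weights and inputs are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "identity" classifier: logistic regression over a 32-d feature vector.
w = rng.standard_normal(32)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Score in (0, 1): probability the input matches the claimed identity."""
    return sigmoid(x @ w + b)

def fgsm(x, y, eps):
    """One FGSM step: move x by eps in the sign of the loss gradient w.r.t. x.

    For binary cross-entropy with a logistic model, the gradient
    w.r.t. the input is (p - y) * w.
    """
    p = predict(x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# A sample confidently accepted as the genuine identity (class 1).
x = 0.2 * np.sign(w)          # aligned with w, so the score is high
y = 1.0
x_adv = fgsm(x, y, eps=0.25)  # a small, bounded perturbation

print("clean score:", round(float(predict(x)), 3))
print("adversarial score:", round(float(predict(x_adv)), 3))
```

The perturbation is small per feature, yet it sharply lowers the match score; against a deep network the same principle applies, with the gradient flowing through the learned feature hierarchy rather than a single linear layer.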
Website: https://sites.google.com/iiserb.ac.in/ijcb24-sibv-ss/home?authuser=0