1st International Workshop on Synthetic Data for Face and Gesture Analysis
Held in conjunction with IEEE FG 2024
Istanbul, Turkey, 27 May 2024
About the workshop
The SD-FGA workshop is organized within the scope of IEEE FG 2024, and all accepted and presented papers will be published in the main "IEEE FG 2024 Proceedings" on IEEE Xplore. The workshop will be held in Istanbul, Turkey, on 27 May 2024 (morning session), and is co-organized by the ARIS-funded project "DeepFake DAD".
Motivation
Recent advancements in generative models within the realms of computer vision and artificial intelligence have revolutionized the way researchers approach data-driven tasks. The advent of sophisticated generative models, such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and, more recently, diffusion models, has empowered practitioners to create synthetic data that closely mirrors real-world scenarios. These models enable the generation of high-fidelity images and sequences, laying the foundation for groundbreaking applications in face and gesture analysis. The significance of these generative models lies in their ability to produce remarkably realistic synthetic data, thereby mitigating challenges associated with data scarcity and privacy concerns. As a result, the use of synthetic data has become increasingly prevalent across research domains, offering a versatile and ethical alternative for training and testing machine learning algorithms.
This workshop aims to delve into the diverse applications of synthetic data in the realm of face and gesture analysis. Participants will explore how synthetic datasets have been instrumental in training facial recognition systems, enhancing emotion detection models, and refining gesture recognition algorithms. The workshop will showcase exemplary use cases where the integration of synthetic data has not only overcome data limitations but has also fostered the development of more robust and accurate models. As researchers increasingly recognize the potential of synthetic datasets in shaping the future of computer vision and machine learning, there arises a demand for a collaborative platform where ideas can be exchanged, methodologies shared, and challenges addressed. This workshop aims to bridge the gap between theoretical knowledge and practical implementation, fostering a community of experts and enthusiasts dedicated to advancing the frontiers of synthetic data in face and gesture analysis.
Topics of interest include, but are not limited to:
• Novel generative models for face and gesture synthesis
• Label generation for synthetic data
• Information leakage in synthetic data
• Data factories for training biometric (detection, landmarking, recognition) models
• Synthetic data for data augmentation
• Data synthesis for bias mitigation and fairness
• Quality assessment for synthetic data
• Synthetic data for privacy protection
• Novel applications of synthetic data
• New synthetic datasets and performance benchmarks
• Applications of synthetic data, e.g., deepfakes, virtual try-on, face and gesture editing
Paper submission
Submissions to SD-FGA can be up to 8 pages long, with an unlimited number of references, as for the main conference. For paper formatting, please follow the instructions posted on the main IEEE FG 2024 website. For convenience, the paper templates are posted here as well:
Kindly note that we are using a CMT instance for paper collection that is distinct from the CMT instance of the main conference. To submit your paper, use the following URL:
https://cmt3.research.microsoft.com/SDFGA2024
The reviewing process will be “double blind”, so submitted papers should be appropriately anonymized and must not reveal the authors or the authors’ institutions. Final decisions will be rendered by the workshop organizers and will take into account the review content as well as the decision recommendations made by the Technical Program Committee members. Borderline papers will be discussed by the organizers.
Important Dates
Paper submission (extended to): March 25, 2024, 11:59pm PST
Notifications: April 15, 2024
Camera ready: April 22, 2024 (same as main conference)
Workshop dates: 27 May 2024 (morning workshop)
Invited Speakers
The first invited keynote talk for the workshop will be given by Prof. Rama Chellappa, Johns Hopkins University, USA. The second invited talk is a rising star keynote, given by Hatef Otroshi Shahreza from École Polytechnique Fédérale de Lausanne (EPFL).
Prof. Rama Chellappa is a Bloomberg Distinguished Professor of Computer Vision and Artificial Intelligence in the Departments of Electrical and Computer Engineering and Biomedical Engineering at Johns Hopkins University (JHU). His research interests are in artificial intelligence, computer vision, machine learning and pattern recognition. He received the 2012 K. S. Fu Prize from the International Association of Pattern Recognition (IAPR). He is a recipient of the Society, Technical Achievement, and Meritorious Service Awards from the IEEE Signal Processing Society, the Technical Achievement and Meritorious Service Awards from the IEEE Computer Society, and the Inaugural Leadership Award from the IEEE Biometrics Council. He received the 2020 IEEE Jack S. Kilby Medal for Signal Processing, the 2023 IEEE Computer Society Pattern Analysis and Machine Intelligence Distinguished Researcher Award, and the 2024 Edwin H. Land Medal from Optica (formerly Optical Society of America). He is an elected member of the National Academy of Engineering. He has been recognized as a Distinguished Alumni by the ECE department at Purdue University and the Indian Institute of Science. He is a Fellow of AAAI, AAAS, ACM, AIMBE, IAPR, IEEE, NAI, and OSA, and holds nine patents.
Keynote talk: The Promises and Perils of Relying on Synthetic Data for Face and Gesture Analysis
Talk summary: While synthetic data grounded in physics and geometry has been used in computer vision applications for more than three decades, synthetic data from generative models is increasingly being used to train computer vision systems. In this talk, I will discuss the promises (unlimited training data, privacy, the ability to handle domain shifts, etc.) and perils (hallucination, adversarial attacks, the mad cow phenomenon) of using synthetic data for computer vision tasks in general, and for face and gesture recognition in particular.
Hatef Otroshi Shahreza is currently working toward the PhD degree at the École Polytechnique Fédérale de Lausanne (EPFL) and is a research assistant with the Biometrics Security and Privacy Group, Idiap Research Institute, Switzerland, where he received the H2020 Marie Skłodowska-Curie Fellowship (TReSPAsS-ETN) for his doctoral program. During his PhD, he also spent six months as a visiting scholar with the Biometrics and Internet Security Research Group, Hochschule Darmstadt, Germany. He is the winner of the European Association for Biometrics (EAB) Research Award 2023. His research interests include deep learning, computer vision, generative models, and biometrics. He is a member (coordinator) of the organising team of the Synthetic Data for Face Recognition (SDFR) Competition at FG 2024, and has actively contributed as a reviewer for various conferences and journals (e.g., ICML, ECCV, IEEE-TIFS).
Keynote talk: Synthetic Data for Face Recognition
Talk summary: State-of-the-art face recognition models are trained on large-scale datasets, collected by crawling the Internet and without individuals' consent, raising legal, ethical, and privacy concerns. Recently, the use of synthetic data to complement or replace real data for the training of face recognition models has become a promising solution. In particular, the recent advancement in generative models provides powerful tools to generate face images. However, generating face recognition datasets with sufficient inter-class and intra-class variations is still a challenging task. In this talk, I review the recent works on generating synthetic datasets and different approaches for training face recognition models based on synthetic data. Furthermore, I discuss the challenges in the existing methods and outline potential future directions.
Workshop Program
9:00 - 9:05: Opening Session
9:05 - 10:00: Keynote talk
Title: The Promises and Perils of Relying on Synthetic Data for Face and Gesture Analysis
Speaker: Prof. Rama Chellappa
Session Chair: Vitomir Štruc
10:00 - 10:45: Session 1 - Applications of Synthetic Data
(Presentation format: 15 minutes per paper, including Q&A; Session Chair: Naser Damer)
A Study of Video-based Human Representation for American Sign Language Alphabet Generation; Fei Xu; Lipisha Chaudhary; Lu Dong; Srirangaraj Setlur; Venu Govindaraju; Ifeoma Nwogu
Training Against Disguises: Addressing and Mitigating Bias in Facial Emotion Recognition with Synthetic Data; Aadith Sukumar; Aditya Desai; Peeyush Singhal; Sai Gokhale; Deepak Kumar Jain; Rahee Walambe; Ketan V Kotecha
DiCTI: Diffusion-based Clothing Designer via Text-guided Input; Ajda Lampe; Julija Stopar; Deepak Kumar Jain; Shinichiro Omachi; Peter Peer; Vitomir Štruc
10:45 - 11:00: Coffee Break
11:00 - 12:00: Keynote talk
Title: Synthetic Data for Face Recognition
Speaker: Hatef Otroshi Shahreza
Session Chair: Vitomir Štruc
12:00 - 13:15: Session 2 - Generation and Detection of Synthetic Data
(Presentation format: 15 minutes per paper, including Q&A; Session Chair: Peter Rot)
Towards Inclusive Face Recognition Through Synthetic Ethnicity Alteration; Praveen Kumar Chandaliya; Kiran Raja; Raghavendra Ramachandra; Zahid Akhtar; Christoph Busch
Massively Annotated Datasets for Assessment of Synthetic and Real Data in Face Recognition; Pedro C. Neto; Rafael M Mamede; Carolina Albuquerque; Tiago FS Gonçalves; Ana F. Sequeira
Analyzing the Feature Extractor Networks for Face Image Synthesis; Erdi Sarıtaş; Hazim Kemal Ekenel
INDIFACE: Illuminating India’s Deepfake Landscape with a Comprehensive Dataset; Kartik Kuckreja; Ximi Hoque; Nishit Nilesh Poddar; Shukesh G Reddy; Abhinav Dhall; Abhijit Das
Real, fake and synthetic faces - does the coin have three sides?; Shahzeb Naeem; Ramzi Al-Sharawi; Muhammad Riyyan Khan; Usman Tariq; Abhinav Dhall; Hasan Al-Nashash
13:15 - 13:20 Closing session
Supported by
ARIS Research project DeepFake DAD.
Organizers
Fadi Boutros, Fraunhofer IGD, Germany
Naser Damer, Fraunhofer IGD, Germany
Deepak Kumar Jain, Dalian University of Technology, Dalian, China
Pourya Shamsolmoali, East China Normal University, Shanghai, China
Vitomir Štruc, University of Ljubljana, Slovenia
Technical Program Committee
Adam Czajka, University of Notre Dame, USA
Akshi Kumar, Goldsmiths University of London, UK
Anderson Rocha, University of Campinas, Brazil
Andreas Uhl, Salzburg University, Austria
Bo Peng, Institute of Automation, Chinese Academy of Sciences, China
Chenquan Gan, Chongqing University of Posts and Telecommunications, China
Darian Tomašević, University of Ljubljana, Slovenia
Jie Zhang, ICT, Chinese Academy of Sciences, China
Mirko Marras, University of Cagliari, Italy
Patrick Flynn, University of Notre Dame, USA
Peter Peer, University of Ljubljana, Slovenia
Rama Chellappa, Johns Hopkins University, USA
Ruben Vera-Rodriguez, Universidad Autónoma de Madrid, Spain
Sandipan Banerjee, Samsung Research America, USA
Victor Sanchez, University of Warwick, UK
Zhang Zhang, Institute of Automation, Chinese Academy of Sciences, China
Zhen Jia, Institute of Automation, Chinese Academy of Sciences, China