2nd International Workshop on Synthetic Data for Face and Gesture Analysis
Held in conjunction with IEEE FG 2025
Clearwater, Florida, USA, 26 May 2025 (afternoon)
2nd International Workshop on
Synthetic Data for Face and Gesture Analysis (SD-FGA)
About the workshop
The SD-FGA workshop is organized within the scope of IEEE FG 2025, and all accepted and presented papers will be published in the main "IEEE FG 2025 Proceedings", which will appear on IEEE Xplore. The workshop will be held in Clearwater, Florida, USA, on 26 May 2025, in the afternoon. The workshop is co-organized by the ARIS-funded project "DeepFake DAD".
Motivation
Novel generative techniques for producing realistic face and gesture data
Innovative approaches for labeling and annotating synthetic data
Methods for preventing data leakage in synthetic datasets
Development of synthetic data pipelines for biometrics
Techniques for using synthetic data to enrich and augment existing datasets
Synthetic data as a tool for bias reduction and promoting fairness in face and gesture analysis
Criteria and methodologies for assessing the quality of synthetic datasets
Privacy-focused synthetic data generation for sensitive applications
New applications for synthetic data in areas like augmented reality, animation, and virtual environments
Comparative performance benchmarks and quality assessments of synthetic datasets
Paper submission
Submissions to SD-FGA can be up to 8 pages long, with an unlimited number of references, similar to the main conference. For paper formatting, please follow the instructions posted on the main IEEE FG 2025 website. For convenience, the paper templates are also posted here:
Kindly note that paper submission is handled through a CMT instance distinct from that of the main conference. To submit your paper, use the following URL:
https://cmt3.research.microsoft.com/SDFGA2025
The reviewing process will be "double blind", and submitted papers should therefore be appropriately anonymized so as not to reveal the authors or their institutions. The final decisions will be rendered by the workshop organizers and will take into account the review content as well as the decision recommendations made by the Technical Program Committee members. Borderline papers will be discussed by the organizers.
Important dates
Paper submission (extended to): April 17, 2025, 11:59pm PST (firm)
Notifications: April 29, 2025
Camera ready: May 2, 2025 (same as main conference)
Workshop date: 26 May 2025 (afternoon)
Keynote speakers
Dr. Adam Czajka is an Associate Professor in the Department of Computer Science and Engineering at the University of Notre Dame and the Director of the AI Trust and Reliability (AITAR) Lab, with over 25 years of professional experience in biometrics, security, and machine learning. His primary research interests focus on the reliability of biometric recognition, with recent emphasis on modern artificial intelligence methods. Czajka is broadly fascinated by research in computer vision and machine learning, as well as by its non-obvious intersections with psychology, the medical sciences, and art, which often involve high-risk but high-reward work. His research has been funded by the US National Science Foundation (NSF CAREER award), the US Department of Defense, the US National Institute of Justice, the FBI Biometric Center of Excellence, NIST, IARPA, the US Army, the European Commission, the Polish Ministry of Higher Education, and various companies.
Dr. Lijun Yin is a SUNY Distinguished Professor of Computer Science, Director of the Center for Imaging, Acoustics, and Perception Science (CIAPS), Director of the Graphics and Image Computing Laboratory, and Co-Director of the Seymour Kunis Media Core at the Thomas J. Watson College of Engineering and Applied Science, Binghamton University, State University of New York. His research contributes to the development of computational methods in computer vision, graphics, and human-computer interaction for human behavior modeling, analysis, and understanding, with over 170 publications and 10 patents. His 2D/3D/4D facial expression databases and multimodal data have been widely used in both academia and industry. Dr. Yin received the Lois B. DeFleur Faculty Prize for Academic Achievement, the James Watson Investigator Award of NYSTAR, and the SUNY Chancellor's Award for Excellence in Scholarship & Creative Activities. He has served as General Co-Chair of FG 2025, Program Co-Chair of FG 2013 and FG 2018, and on the editorial boards of the Image and Vision Computing and Pattern Recognition Letters journals. He is a Fellow of the IEEE.
Dr. Chen Chen is an Associate Professor at the Center for Research in Computer Vision at the University of Central Florida (UCF). He earned his Ph.D. in Electrical Engineering from the University of Texas at Dallas in 2016, where he was honored with the David Daniel Fellowship for the Best Doctoral Dissertation. His research interests span computer vision, efficient deep learning, multimodal learning, and federated learning. Actively involved in NSF and industry-sponsored research projects, Dr. Chen focuses on developing efficient, resource-aware machine vision algorithms and systems for extensive camera networks. His work is also supported by notable agencies including IARPA and NIFA. Dr. Chen serves as an Associate Editor for the IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT), the Journal of Real-Time Image Processing, and the IEEE Journal on Miniaturization for Air and Space Systems. He has also taken on the role of area chair for several conferences, such as ECCV 2022, CVPR 2022-2024, ICCV 2025, and ACM Multimedia from 2019 to 2024. His scholarly contributions are highlighted by his Google Scholar metrics, with more than 24,000 citations and an h-index of 75. For more information, please visit https://www.crcv.ucf.edu/chenchen/
Program
14:00 - 14:05: Opening Session
14:10 - 15:00: Keynote talk 1
Title: Augment or Leak? Exploring the Dual Nature of Synthetic Biometric Data
Speaker: Adam Czajka, University of Notre Dame
15:00 - 16:00: Oral Session - Synthetic Data for Facial Analysis
(Presentation format: 20 minutes per paper, including Q&A)
Synthetic Faces, Real Gains: Improving Age and Gender Classification through Generative Data; Nuno Freitas, Andreia Costa, João Tremoço, Miguel Lourenço
Towards ML-based Assessment of Synthetic Character Heads; Igor Borovikov, Karine Levonyan, Panda Elliott, Etienne Danvoye
My Emotion on Your Face: The Use of Facial Keypoint Detection to Preserve Emotions in Latent Space Editing; Jingrui He, Stephen McGough
16:00 - 16:30: Coffee Break
16:30 - 17:15: Keynote talk 2
Title: Enhancing Face Analysis in Low-Quality Images for Identification, Restoration, and 3D Multiview Generation
Speaker: Lijun Yin, Binghamton University
17:15 - 18:00: Keynote talk 3
Title: Enhancing Controllable Image Generation with Efficient Consistency Feedback
Speaker: Chen Chen, University of Central Florida
18:00 - 18:05: Closing Session
Organizers
Fadi Boutros, Fraunhofer IGD, Germany
Xilin Chen, Institute of Computing, Chinese Academy of Sciences, Beijing, China
Naser Damer, Fraunhofer IGD, Germany
Deepak Kumar Jain, Dalian University of Technology, Dalian, China
Vitomir Štruc, University of Ljubljana, Slovenia
Program Committee
Bo Peng, Institute of Automation, Chinese Academy of Sciences
Chenquan Gan, Chongqing University of Posts and Telecommunications
Darian Tomašević, University of Ljubljana
Da-Wen Huang, Sichuan Normal University
Hu Han, Institute of Computing Technology, Chinese Academy of Sciences
Marco Huber, Fraunhofer IGD
Mirko Marras, University of Cagliari
Muyu Li, Dalian University of Technology
Patrick Flynn, University of Notre Dame
Peter Peer, University of Ljubljana
Rama Chellappa, Johns Hopkins University
Sandipan Banerjee, Pipio
Srirangaraj Setlur, University at Buffalo, SUNY
Sumarga Sah Tyagi, University of South Florida
Victor Sanchez, University of Warwick
Zhen Jia, Institute of Automation, Chinese Academy of Sciences
Disclaimer
The Microsoft CMT service was used for managing the peer-reviewing process for this workshop. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.