SynRDinBAS: Synthetic Realities and Data in Biometric Analysis and Security
March, Tucson, Arizona, US
Recent advances in generative models such as GANs, VAEs, and diffusion models have transformed data-driven tasks in computer vision and AI by enabling the creation of highly realistic synthetic data. These models address data-scarcity challenges, offering versatile and ethical alternatives for training and testing machine learning algorithms. However, their realism also raises concerns: because synthetic data can be indistinguishable from real data, it carries risks of misuse, manipulation, and harm when deployed unethically.
The Synthetic Realities and Data in Biometric Analysis and Security (SynRDinBAS) Workshop & Challenge is organized at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2026.
Workshop
The workshop aims to explore the diverse applications of synthetic realities and data in biometric analysis, while also addressing critical security issues such as data privacy and ethical concerns of data manipulation.
Topics of interest include, but are not limited to:
Novel generative models for synthesis of biometric data
Synthetic realities in immersive media for biometric interaction and analysis
Synthetic realities for behavioral biometric data collection and analysis
Label generation for synthetic data
Information leakage in synthetic data
Data factories for training biometric (detection, landmarking, recognition) models
Synthetic data for data augmentation
Data synthesis for bias mitigation and fairness
Quality assessment for synthetic data
Synthetic data for privacy protection
Novel applications of synthetic data
New synthetic datasets and performance benchmarks
Applications of synthetic data, e.g., deepfakes, virtual try-on, face and gesture editing
Partially or fully synthetically generated attacks on biometric systems for identification and verification
Detection of manipulated and synthetic content
Forensic analysis of synthetic data
Find details about paper submission HERE.
Challenge
The challenge will focus on bridging the research gap associated with the detection of partially synthetic data, as localized changes (such as adding or removing objects or subtly altering faces) are more difficult to detect and more likely to deceive viewers.
Target tasks include, but are not limited to:
Detection of images generated with state-of-the-art (SOTA) models
Detection and localization of object additions/removals
Identification of in-painted or altered regions
Image classification as fully synthetic, partially synthetic, or pristine
By anchoring the challenge in more subtle forms of manipulation, we aim to stimulate new methods that move beyond binary classification toward a fine-grained understanding of image authenticity. New datasets will be generated specifically for this effort, with a substantial portion synthetically manipulated at varying levels of granularity, and annotated for both detection and localization.
A dedicated submission platform will host the competition. Submissions will be assessed using a combination of standard detection metrics (accuracy, balanced accuracy, AUC) and localization-specific metrics (such as IoU and pixel-level F1 scores).
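As a point of reference for participants, the two localization metrics named above can be sketched as follows. This is an illustrative NumPy implementation, not the official challenge evaluation code; the function names and the convention for empty masks (scoring two empty masks as perfect agreement) are assumptions.

```python
# Illustrative sketch of IoU and pixel-level F1 on binary manipulation masks.
# Not the official evaluation code; names and edge-case handling are assumed.
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:  # both masks empty: treat as perfect agreement (assumption)
        return 1.0
    return np.logical_and(pred, gt).sum() / union

def pixel_f1(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pixel-level F1: harmonic mean of per-pixel precision and recall."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    if tp == 0:
        return 1.0 if (fp == 0 and fn == 0) else 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: the prediction covers half of a 2x2 manipulated region.
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True      # 4-pixel region
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:2] = True  # 2 pixels, all correct
print(round(iou(pred, gt), 2))       # 0.5  (intersection 2 / union 4)
print(round(pixel_f1(pred, gt), 2))  # 0.67 (tp=2, fp=0, fn=2)
```

Note that pixel-level F1 rewards recall on the manipulated region more directly than IoU, which is why the two are typically reported together.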
Note: To reward innovation, top performers may be eligible for research grants of up to $250,000, and travel stipends will be provided for invited teams.