Track: Improving Fairness Generalization in AI Face Detection
Real-world Scenario: The AI-Face dataset (https://arxiv.org/abs/2406.00783) consists of laboratory-generated AI faces, since most of its images were created for research purposes such as designing novel detectors or benchmarking existing ones. However, relying solely on this dataset limits the ability to model real-world deepfakes. Real-world deepfakes are often generated by unknown actors using a single generative model or a mixture of several, resulting in forgery types that may not have been observed during training. Additionally, the resolution and quality of faces encountered in the test set may differ significantly from those in the training set, further challenging the generalization capabilities of the developed models.
Justify the Problem: In this case, a detector trained on the provided training set, even if designed with fairness constraints in mind, may fail to maintain its fairness properties when deployed in unseen scenarios. Although this is a significant challenge, it has not yet been adequately addressed. We are currently in an era of rapid advancement in generative AI, with powerful models such as Google Veo being released continually. These models can readily be misused by deepfake creators to produce realistic AI-generated faces, exacerbating societal risks. The dynamic nature of generative technologies creates a persistent cat-and-mouse game between deepfake generation and detection, highlighting the urgent need for fairness solutions that generalize effectively to novel threats.
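To make concrete what "maintaining fairness properties" can mean here, the sketch below computes one possible fairness measure, the largest gap in false positive rates across demographic groups, on a split drawn from seen generators and on a split from unseen generators. This is only an illustration under assumptions: the metric (`fpr_gap`), the group labels, and the synthetic data are hypothetical and are not part of the track's official evaluation protocol.

```python
import numpy as np

def fpr_gap(y_true, y_pred, groups):
    """Largest difference in false positive rate across demographic groups.

    Illustrative fairness measure only; the track's official metric may differ.
    Label convention assumed here: 0 = real face, 1 = AI-generated face.
    """
    fprs = []
    for g in np.unique(groups):
        real_in_group = (groups == g) & (y_true == 0)     # real faces from group g
        if real_in_group.sum() == 0:
            continue
        fprs.append((y_pred[real_in_group] == 1).mean())  # real faces wrongly flagged as fake
    return max(fprs) - min(fprs)

rng = np.random.default_rng(0)
n = 1000
groups_seen = rng.integers(0, 4, n)          # 4 hypothetical demographic groups
y_true_seen = rng.integers(0, 2, n)
y_pred_seen = y_true_seen.copy()             # near-perfect, balanced behavior on seen generators

groups_unseen = rng.integers(0, 4, n)
y_true_unseen = rng.integers(0, 2, n)
# Hypothetical failure mode on unseen generators: errors concentrate in some groups.
flip_prob = np.array([0.05, 0.10, 0.25, 0.40])[groups_unseen]
flips = rng.random(n) < flip_prob
y_pred_unseen = np.where(flips, 1 - y_true_unseen, y_true_unseen)

print("FPR gap on seen generators:  ", fpr_gap(y_true_seen, y_pred_seen, groups_seen))
print("FPR gap on unseen generators:", fpr_gap(y_true_unseen, y_pred_unseen, groups_unseen))
```

In this toy setup the detector looks fair on the seen-generator split but shows a much larger group gap on the unseen-generator split, which is exactly the kind of behavior this track asks participants to prevent.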
Therefore, this track encourages participants to develop more advanced, generalizable, and fair deepfake detectors, with the goal of providing practical solutions that mitigate these emerging challenges.