The rise of synthetic data generation has heightened the threat of deepfakes, which can be used unethically to manipulate public opinion, escalate geopolitical tensions, defame individuals, and commit identity theft. This threat is particularly concerning during this significant election season, as deepfake images, videos, and audio are already being circulated for political advantage. The continuous development of new generation techniques makes combating deepfakes increasingly difficult, creating an 'arms race' between attackers and defenders. To address this challenge, the Deepfake Face Detection In The Wild Competition (DFWild-Cup) aims to enhance the generalization of deepfake detectors by having participants develop systems that can distinguish between real and computer-generated facial images in diverse, real-world scenarios.
The challenge partially utilizes a collection of eight publicly available datasets from DeepfakeBench [1]. To test generalization, the test dataset may also include samples from newly generated datasets, with anonymized file names to prevent source identification.
The DFWild-Cup focuses on identifying whether images are real or fake, with real images being natural photos or snapshots from real videos, and fake images generated by various synthesis or forgery techniques or taken from computer-generated videos. Participants will receive training and test datasets, with the task of training a detector to assign a score to each test image, where a higher score indicates a real image and a lower score indicates a fake image. A training set comprising real and fake images from multiple datasets will be provided, and participants must justify any subset selection. After registration, participants will have access to the training, validation (development), and test (evaluation) sets, and they must submit final scores for the test images for challenge evaluation and ranking.
Latest! The evaluation phase of the competition is complete, and the top three teams have been selected for the final presentation at the 2025 Signal Processing Cup Event at ICASSP 2025 in Hyderabad. The finalist teams have already been notified via email.
Participants are requested to submit the evaluation set scores by January 27, 2025 (AoE).
December 15: Participants are requested to submit both the validation set scores and the technical report. Only those who provide a valid validation score and a complete technical report will receive access to the evaluation set.
July 1: The website is now live!
Challenge announcement/Registration starts: 5 July 2024
Release of training and validation set: 22 July 2024
Team Registration Deadline: 30 September 2024
Release of evaluation set: 15 December 2024
Final submission due: 15 January 2025
Finalists announcement: 31 January 2025
Presentation of final results at ICASSP 2025: April 6-11, 2025
(The challenge registration is closed!)
Thank you for your interest! Follow these two quick steps to register and get started with the challenge.
The baseline system for the challenge uses a variant of the MesoNet [2] architecture. The system achieves an equal error rate (EER) of 15.64% on the validation set for the competition.
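For reference, the EER reported above is the operating point at which the false acceptance rate (fakes scored as real) equals the false rejection rate (reals scored as fake). A minimal sketch of how it can be estimated from per-image scores, assuming the challenge convention that higher scores indicate real images and that ground-truth labels use 1 for real and 0 for fake (the labeling scheme here is illustrative, not specified by the organizers):

```python
import numpy as np

def compute_eer(scores, labels):
    """Estimate the equal error rate (EER) of a detector.

    scores: detector outputs; higher values indicate 'real'
            (per the challenge convention).
    labels: 1 for real images, 0 for fake images (assumed encoding).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    # Sweep every observed score as a threshold: predict 'real' if
    # score >= threshold, and find where FAR and FRR are closest.
    eer, best_gap = 1.0, np.inf
    for t in np.unique(scores):
        pred_real = scores >= t
        far = np.mean(pred_real[labels == 0])    # fakes accepted as real
        frr = np.mean(~pred_real[labels == 1])   # reals rejected as fake
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer
```

For example, perfectly separated scores such as `compute_eer([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])` yield an EER of 0.0, while fully overlapping score distributions approach 0.5.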
Competition Organizers (technical, competition-specific inquiries):
Md Sahidullah
Email: md.sahidullah@tcgcrest.org
SPS Staff (Terms & Conditions, Travel Grants, Prizes):
Jaqueline Rash, SPS Membership Program and Events Administrator
Email: Jaqueline.Rash@ieee.org
SPS Student Services Committee
Angshul Majumdar, Chair
Email: angshul@iiitd.ac.in
Md Sahidullah, TCG CREST, India
Ajinkya Kulkarni, IDIAP, Switzerland
Nauman Dawalatabad, MIT, USA
Ville Hautamäki, UEF, Finland
Tomi Kinnunen, UEF, Finland
Junichi Yamagishi, NII, Japan
[1] Yan, Z., Zhang, Y., Yuan, X., Lyu, S. and Wu, B., 2023, December. DeepfakeBench: a comprehensive benchmark of deepfake detection. In Proceedings of the 37th International Conference on Neural Information Processing Systems (pp. 4534-4565).
[2] Afchar, D., Nozick, V., Yamagishi, J. and Echizen, I., 2018, December. MesoNet: a compact facial video forgery detection network. In 2018 IEEE international workshop on information forensics and security (WIFS) (pp. 1-7). IEEE.
This competition is sponsored by the IEEE Signal Processing Society and MathWorks: