Submission Requirements
Only one member from each team is required to register and fill out a Google Form before submitting. The demographic labels will be sent to registered participants after 7/2/2025.
The competition is hosted on Codabench. Details of each track can be found on the Track page. Please follow the guidelines and submit your results on Codabench.
Data & Baseline & Code
The training and validation sets are drawn from the AI-Face dataset, which comprises a total of 1,245,660 AI-generated face images produced by 37 different generation methods, along with 400,885 real face images. Each image is annotated with demographic attributes, all of which were inferred using automated annotation tools. Further details are available at https://arxiv.org/pdf/2406.00783, and the dataset can be found at https://github.com/Purdue-M2/AI-Face-FairnessBench. The face images can be downloaded from here. To access the corresponding annotations, participants are required to fill out a Google Form with a signed End-User License Agreement (EULA); the signed EULA must be uploaded via the Google Form along with the required participant information. Upon approval, the annotation download link will be sent to the requester. Note that the demographic labels will be sent to participants after 7/2/2025.
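The annotations are typically distributed as CSV files that map each image path to its real/fake label and demographic attributes. As a rough sketch only, assuming hypothetical file and column names (the authoritative schema is defined by the data-loading code in the AI-Face-FairnessBench repository), the demographic balance of the training split could be inspected as follows:

    import pandas as pd

    # Hypothetical file and column names; consult the repository's data-loading
    # code for the actual annotation schema.
    df = pd.read_csv("train_annotations.csv")

    # Distribution of real (0) vs. AI-generated (1) images (assumed convention).
    print(df["target"].value_counts())

    # Per-demographic-group counts, e.g. by gender.
    print(df.groupby(["gender", "target"]).size())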
As demonstrated in our benchmark study (https://arxiv.org/pdf/2406.00783), we evaluated 12 baseline models; for this competition, the Xception model serves as the baseline. Preliminary results on the AI-Face dataset, including the performance of PG-FDD, are presented in Table 4 of the paper (https://arxiv.org/pdf/2406.00783). You are free to use other baseline models. Checkpoints for all 12 baseline models are available at https://github.com/Purdue-M2/AI-Face-FairnessBench.
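Because the benchmark evaluates both detection performance and fairness across demographic groups, an evaluation script typically reports an overall metric together with per-group metrics and their gap. The following is a minimal illustrative sketch using scikit-learn; the official metrics and their exact definitions are those in the benchmark paper and the Codabench evaluation protocol:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def overall_and_group_auc(y_true, y_score, groups):
        """Overall AUC, per-demographic-group AUC, and the max-min gap.

        y_true, y_score, and groups are expected to be 1-D numpy arrays of
        equal length. This only illustrates the general idea of fairness-aware
        evaluation; it is not the official scoring code.
        """
        overall = roc_auc_score(y_true, y_score)
        per_group = {}
        for g in np.unique(groups):
            mask = groups == g
            # AUC is undefined when a group contains only one class.
            if len(np.unique(y_true[mask])) == 2:
                per_group[g] = roc_auc_score(y_true[mask], y_score[mask])
        gap = max(per_group.values()) - min(per_group.values())
        return overall, per_group, gap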
The baseline code is included in https://github.com/Purdue-M2/AI-Face-FairnessBench. The data-loading tools necessary for handling the datasets are provided within the same repository.
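If you prefer to write your own loader rather than reuse the repository's tools, a minimal PyTorch Dataset might look like the sketch below. The column names and label convention are assumptions for illustration; the repository's data-loading code defines the actual format:

    import pandas as pd
    from PIL import Image
    from torch.utils.data import Dataset

    class AIFaceDataset(Dataset):
        """Minimal sketch; prefer the loaders shipped with AI-Face-FairnessBench."""

        def __init__(self, annotation_csv, transform=None):
            self.df = pd.read_csv(annotation_csv)
            self.transform = transform

        def __len__(self):
            return len(self.df)

        def __getitem__(self, idx):
            row = self.df.iloc[idx]
            image = Image.open(row["image_path"]).convert("RGB")
            if self.transform is not None:
                image = self.transform(image)
            # Assumed label convention: 1 = AI-generated, 0 = real.
            return image, int(row["target"])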
To better understand the training dataset and baseline model, please read the CVPR’25 paper https://arxiv.org/pdf/2406.00783.