We use CodaBench for results submission. Specifically, we will open two CodaBench links, one for Track 1 and one for Track 2. Participants are expected to compute and submit text files containing the sample IDs and the corresponding confidence scores. Note that during the evaluation phase, each team is allowed to submit at most three results per track; this limit is imposed to ensure fairness and to prevent excessive tuning based on test-set feedback. After the competition concludes, we will release the metadata of the test sets to promote reproducibility and future research in environmental sound deepfake detection. (For the links, please refer to the first page.)
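For illustration only, a per-track score file might be produced with a few lines of Python; the field layout, file name, and score convention below are merely an example, and the exact format specified on the CodaBench pages takes precedence:

    # Hypothetical sketch of writing a submission file for one track.
    # Layout assumed here: one "<sample id> <confidence score>" pair per line.
    scores = {
        "eval_0001.wav": 0.93,  # assumed convention: higher score = more likely fake
        "eval_0002.wav": 0.12,
    }
    with open("Track1_scores.txt", "w") as f:  # file name is illustrative only
        for sample_id, score in scores.items():
            f.write(f"{sample_id}\t{score:.6f}\n")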
Finally, each team should submit:
Inference results: via CodaBench.
Configuration files: We kindly request that participants submit the configuration file(s) for their final system(s), following the provided template, to the challenge organizer (hanyin@kaist.ac.kr). Submit one configuration file per submitted system. For each track, each team can submit at most three systems.
For example, if you have submitted two systems for Track 1 and two systems for Track 2, you should submit four configuration files: Track1_system1.yaml, Track1_system2.yaml, Track2_system1.yaml, and Track2_system2.yaml.
The template for the configuration file will be shared in the Challenge Google Group.
Participants are not allowed to use the evaluation set or test set for training purposes.
The use of publicly available pre-trained models is allowed. However, any pre-trained models used must be clearly stated in the team's technical report.
Participants are encouraged to disclose their model parameters and training hardware in their technical reports.
Data augmentation is allowed. For example, open-source audio generation models can be used to synthesize fake data for training. However, the use of any internal generation models for any purpose is strictly prohibited.
The use of the generators included in the evaluation and test sets (i.e., G05~G07) for training purposes is strictly prohibited.
Participants must ensure that all training data originates from publicly available and ethically sourced datasets. Any use of proprietary, confidential, or unreleased data not explicitly permitted by the challenge organizers is strictly prohibited.
Launch of the challenge: September 01, 2025
Registration deadline: October 15, 2025
Progress phase: September 01, 2025 ~ November 16, 2025
Test sets release: November 17, 2025
Ranking phase: November 17, 2025 ~ November 26, 2025
Leaderboard release: November 27, 2025
Two-page paper due (by invitation only): December 07, 2025
Two-page paper acceptance notification: January 11, 2026
Camera-ready two-page papers due: January 18, 2026