This competition is based on two different datasets: VAMONOT Real and VAMONOT Synthetic.
The competition is hosted on Codabench; below, you can find some useful information.
For each subject in the VAMONOT Synthetic dataset, three sequences are available.
Both datasets have been collected by the organizers specifically for the V-MAD task and have never been used in the literature.
Both datasets are provided with annotations to be used for a supervised evaluation.
Only a minor portion of the two datasets will be publicly released; therefore, participants should use (any) public datasets to train their algorithms. We encourage participants to use data from the FEI Morph and ChiMo datasets.
VAMONOT Real:
- Each subject is acquired in two different indoor scenarios.
- For each scenario, the subject is acquired in two different settings:
  - Gaze into the camera
  - Different head poses and gaze not in the camera
You can download the training split of this dataset here.
VAMONOT Synthetic:
- Created through an AI Video Generator.
- Starting identities are synthetic and are taken from the ONOT dataset.
- Each subject is acquired in two different scenarios:
  - Gaze into the camera
  - Different head poses and gaze not in the camera
You can download the training split of this dataset here.
Figure: Example of the VAMONOT Real dataset.
Participants are requested to send their trained model through the Codabench platform. Models will be tested on sequestered data, i.e., data not available or visible at training time. Submissions must include both the model's code and its weights. The submission platform will execute inference on the sequestered test set and present the corresponding scores on the leaderboard.
Bona fide Sample Classification Error Rate (BSCER): the proportion of genuine (bona fide) images incorrectly rejected as morphed.
Morphing Attack Classification Error Rate (MACER): the proportion of morphed images that are incorrectly accepted as bona fide.
From these two metrics, we also compute the Equal Error Rate (EER), defined as the operating point at which BSCER and MACER are equal, yielding a single scalar value that provides a rough estimate of the system's overall performance.
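The three metrics above can be sketched in a few lines. This is a minimal illustration, not the organizers' evaluation code: it assumes higher detection scores indicate "morphed", and it approximates the EER by sweeping thresholds and taking the point where BSCER and MACER are closest.

```python
import numpy as np

def bscer(bona_fide_scores, threshold):
    # Bona fide samples scoring at or above the threshold are wrongly rejected as morphs.
    return float(np.mean(np.asarray(bona_fide_scores) >= threshold))

def macer(morph_scores, threshold):
    # Morphed samples scoring below the threshold are wrongly accepted as bona fide.
    return float(np.mean(np.asarray(morph_scores) < threshold))

def eer(bona_fide_scores, morph_scores):
    # Sweep candidate thresholds and return the error at the point where
    # BSCER and MACER are (approximately) equal.
    thresholds = np.unique(np.concatenate([bona_fide_scores, morph_scores]))
    best_t, best_gap = thresholds[0], np.inf
    for t in thresholds:
        gap = abs(bscer(bona_fide_scores, t) - macer(morph_scores, t))
        if gap < best_gap:
            best_gap, best_t = gap, t
    return (bscer(bona_fide_scores, best_t) + macer(morph_scores, best_t)) / 2
```

With perfectly separated scores the EER is zero; with overlapping score distributions it rises accordingly.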
More in detail, for each submission we determine, in addition to the EER, the BSCER values at several fixed MACER operating points. These correspond to the minimum BSCER achievable under a fixed MACER constraint and represent standard operating points for Morphing Attack Detection systems.
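The operating-point computation can be illustrated as follows. This is a sketch under the same score convention as above (higher score means "morphed"); the specific MACER targets used by the competition are not stated here, so the target is a parameter.

```python
import numpy as np

def bscer_at_macer(bona_fide_scores, morph_scores, macer_target):
    # Minimum BSCER over all thresholds whose MACER does not exceed the
    # given target (e.g. a hypothetical target of 0.05).
    bona = np.asarray(bona_fide_scores)
    morph = np.asarray(morph_scores)
    best = 1.0
    for t in np.unique(np.concatenate([bona, morph])):
        m = np.mean(morph < t)          # morphs accepted as bona fide
        if m <= macer_target:
            best = min(best, float(np.mean(bona >= t)))  # bona fide rejected
    return best
```

Tightening the MACER target generally forces a higher (worse) BSCER, which is why reporting several operating points gives a fuller picture than the EER alone.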
Finally, to provide a more balanced result across all metrics and to establish the final ranking of submissions in the leaderboard, we define a custom aggregate measure, the Weighted Average Error across Datasets (WAED). This metric combines the above-mentioned performance indicators for both datasets into a single scalar value.
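As a rough illustration of how such an aggregate could combine per-dataset error metrics into one scalar, consider the sketch below. The official WAED weights and constituent metrics are defined by the organizers and are not reproduced here; the uniform weighting is purely an assumption for illustration.

```python
def waed(metrics_per_dataset, weights=None):
    """Illustrative weighted average of error metrics across datasets.

    metrics_per_dataset maps a dataset name to its list of error values
    (e.g. EER and BSCER values at fixed MACER operating points).
    The uniform default weights are an assumption, not the official ones.
    """
    if weights is None:
        weights = {name: 1.0 for name in metrics_per_dataset}
    total_w = sum(weights.values())
    return sum(
        weights[name] * (sum(errs) / len(errs))
        for name, errs in metrics_per_dataset.items()
    ) / total_w
```

A lower WAED would then indicate better overall performance across both datasets.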
The best teams of each sub-track will be invited to contribute as co-authors in the summary paper of the V-MAD Competition.
This paper will be published in the proceedings of the IJCB 2026 conference.
There is no limit on the number of submissions; however, only the last one will count towards the final ranking.
Submissions must be fully reproducible: the full code used to generate the proposed solution for each track must be included in the final submission package, along with clear instructions on how to run it. Note that the code is not required in the other (non-final) submission packages.
The proposed algorithms must be fully automatic, without any human intervention.
Submissions that are not reproducible or do not follow the required structure will be disqualified and will not be considered for the final evaluation.
A team is composed of at most 5 participants.
The team's composition may change throughout the submission period; after it closes, no further substitutions can take place.
Other rules are available on the Codabench website.