SUBMISSION AND RESULT GENERATION

Each Task includes both the random and the skilled impostor scenarios.

Each team is free to choose which Tasks to focus on based on its expertise in the corresponding modalities (or combinations of modalities); in other words, teams may use the information provided as they like. For example, a team developing a system based on background sensors could participate in all Tasks, whereas a team whose system is based on keystroke dynamics can participate only in Task 1. Participants are encouraged to exploit the multimodal nature of the available data by fusing the different biometric modalities available within each Task, although this is not mandatory.

Two of the four genuine acquisition sessions will be used for user enrollment; the remaining two genuine sessions and the two impostor sessions will be used for user verification.

A valid submission for CodaLab is a zip-compressed file containing the .txt files with the score predictions for each task you want to participate in (i.e., one .txt file per task). We expect to receive scores close to 1 for genuine comparisons and close to 0 for impostor comparisons.

The session comparison files (one .txt file per task) provided together with the evaluation dataset must be used to obtain the score predictions.
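As a minimal sketch of this step, the code below reads a session comparison file and writes one score per line to a prediction file. The scoring function and the exact comparison-file format are assumptions for illustration; substitute your own verification system.

```python
# Sketch: produce a prediction file from a session comparison file.
# score_comparison is a hypothetical placeholder for a real system.

def score_comparison(comparison: str) -> float:
    """Placeholder: return a similarity score in [0, 1] for one comparison."""
    return 0.5  # replace with your system's output

def write_predictions(comparisons_path: str, predictions_path: str) -> None:
    # One comparison per line is assumed in the comparison file.
    with open(comparisons_path) as f:
        comparisons = [line.strip() for line in f if line.strip()]
    with open(predictions_path, "w") as out:
        for comp in comparisons:
            out.write(f"{score_comparison(comp)}\n")  # one score per row
```

This keeps the prediction file aligned row-by-row with the comparison file, which is what the scoring server expects.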

The labels of the Validation Set comparisons are provided together with the data. The labels of the Evaluation Set are not included in the database. It is necessary to submit the scores on CodaLab to get the evaluation results.

Note that even if you upload multiple submissions, only your latest submission is displayed on the leaderboard.

Submitted .txt files included in a zip-compressed file must have the following nomenclature:

  • Task 1: “task1_predictions.txt”

  • Task 2: “task2_predictions.txt”

  • Task 3: “task3_predictions.txt”

  • Task 4: “task4_predictions.txt”
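Packaging the prediction files into a valid submission can be sketched as follows; the function name is an assumption for illustration, and you include only the files for the tasks you participate in.

```python
# Sketch: bundle prediction .txt files into a submission zip.
import os
import zipfile

def make_submission(prediction_files, zip_path="submission.zip"):
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in prediction_files:
            # arcname keeps each file at the top level of the archive,
            # preserving the required task*_predictions.txt names.
            zf.write(path, arcname=os.path.basename(path))
```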

If you want to participate in only one task (e.g., Task 1), submit a zip-compressed file including only the .txt file associated with that task (e.g., task1_predictions.txt). The result for that specific task will be updated on the leaderboard, whereas the value 0.0 will appear for the other tasks, indicating that no results have been submitted.

Finally, each prediction .txt file must contain one prediction per row (a single column), with the same number of rows as comparisons in the corresponding session comparison file provided (one .txt file per task).
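A simple pre-submission sanity check, sketched below under the assumption that both files contain one entry per line, verifies that each prediction file has exactly as many rows as the corresponding comparison file and that every row is a single numeric score.

```python
# Sketch: validate a prediction file against its session comparison file.

def _is_float(s: str) -> bool:
    try:
        float(s)
        return True
    except ValueError:
        return False

def check_predictions(predictions_path: str, comparisons_path: str) -> bool:
    with open(predictions_path) as f:
        preds = [line.strip() for line in f if line.strip()]
    with open(comparisons_path) as f:
        comps = [line.strip() for line in f if line.strip()]
    # Row counts must match: one score per comparison.
    if len(preds) != len(comps):
        return False
    # Every row must be a single numeric score.
    return all(_is_float(p) for p in preds)
```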

We hope that this protocol will encourage participation from both academia and industry, as participants will not have to share their systems.