EVALUATION CRITERIA

The Area Under the Curve (AUC), a standard and widely used metric in the field, will be used to evaluate the performance of the proposed systems. At the end of the competition, there will be one winner per Task. Participants will provide their results, which encompass both impostor cases (random and skilled). A separate AUC value will be computed for each impostor scenario (“AUC Random Case”, “AUC Skilled Case”, “AUC Mixed Case”), but the final ranking will be based on “AUC Mixed Case” only, which includes both types of impostor distributions. Scores are interpreted as dissimilarity values: we expect to receive scores close to 1 for impostor comparisons and close to 0 for genuine comparisons.
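As an illustration, the sketch below shows one way the three AUC values could be computed from submitted scores. This is not the organizers’ official evaluation script: the function name auc_per_scenario is hypothetical, and the use of scikit-learn’s roc_auc_score is an assumption made for the example. Because scores are dissimilarities (genuine near 0, impostor near 1), the impostor class is treated as the positive label.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_per_scenario(genuine_scores, random_imp_scores, skilled_imp_scores):
    """Compute AUC per impostor scenario (illustrative sketch only).

    Assumes dissimilarity scores: close to 0 for genuine comparisons,
    close to 1 for impostor comparisons, so impostors are labeled 1.
    """
    genuine_scores = np.asarray(genuine_scores, dtype=float)
    scenarios = {
        "AUC Random Case": np.asarray(random_imp_scores, dtype=float),
        "AUC Skilled Case": np.asarray(skilled_imp_scores, dtype=float),
        # Mixed case pools both impostor distributions, as used for ranking.
        "AUC Mixed Case": np.concatenate(
            [np.asarray(random_imp_scores, dtype=float),
             np.asarray(skilled_imp_scores, dtype=float)]
        ),
    }
    results = {}
    for name, impostor_scores in scenarios.items():
        # Labels: 0 = genuine, 1 = impostor (the positive class).
        y_true = np.concatenate(
            [np.zeros(len(genuine_scores)), np.ones(len(impostor_scores))]
        )
        y_score = np.concatenate([genuine_scores, impostor_scores])
        results[name] = roc_auc_score(y_true, y_score)
    return results

# Toy usage with synthetic scores (illustrative values only):
rng = np.random.default_rng(0)
genuine = rng.normal(0.2, 0.1, 500).clip(0, 1)
random_imp = rng.normal(0.9, 0.1, 500).clip(0, 1)
skilled_imp = rng.normal(0.7, 0.15, 500).clip(0, 1)
print(auc_per_scenario(genuine, random_imp, skilled_imp))
```

Under this convention, a well-performing system yields AUC values close to 1, and skilled forgeries typically produce a lower AUC than random impostors because their scores overlap more with the genuine distribution.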