For this second version, with the support of Facephi, HDA, and Fraunhofer, we will use a custom platform, based on or similar to EvalAI, to handle submissions and evaluations online directly for each participating team.
All teams must submit a Docker image exposing an API that receives an image file in JPEG, PNG, or TIFF format as input and returns a continuous float value in the range [0.0, 1.0], where 0.0 represents total confidence that the image is a presentation attack and 1.0 represents total confidence that the image is bona fide.
Each team's Docker image must include all required processing steps, such as segmentation and alignment.
We will provide a GitHub repository with an example and instructions on building the image and preparing it for submission. [Link GitHub].
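As an illustration only, the sketch below shows one possible shape for such a scoring service. The endpoint name, the request field, and the predict_bonafide() call are hypothetical placeholders; the official interface is the one defined in the example repository.

```python
# Illustrative sketch only: endpoint name, field name, and predict_bonafide()
# are hypothetical placeholders, not the official competition interface.
import io

from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)


def predict_bonafide(image: Image.Image) -> float:
    """Placeholder for the team's PAD model; must return a score in [0.0, 1.0],
    where 0.0 = presentation attack and 1.0 = bona fide."""
    # Any pre-processing (segmentation, alignment, ...) also belongs here.
    return 0.5


@app.route("/predict", methods=["POST"])
def predict():
    # Accepts a JPEG, PNG, or TIFF file and returns a continuous score.
    file = request.files["image"]
    image = Image.open(io.BytesIO(file.read())).convert("RGB")
    return jsonify({"score": float(predict_bonafide(image))})


if __name__ == "__main__":
    # The container only serves locally; no external connectivity is assumed.
    app.run(host="0.0.0.0", port=8080)
```

Such a container could then be queried with a standard multipart POST, e.g. curl -F "image=@sample.png" http://localhost:8080/predict.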
Each submission will be evaluated in an offline environment, so the Docker image must include all necessary packages and data (weights, configuration files, etc.) to run without external connectivity.
All participants must submit the registration form. Once accepted, they will receive user credentials for the evaluation platform. [Link Platform].
For the performance evaluation, we will follow the metrics recommended by ISO/IEC 30107-3, which will be used to assess all submissions: the Equal Error Rate (EER), the Bona Fide Presentation Classification Error Rate (BPCER), and the Attack Presentation Classification Error Rate (APCER).
We will report the three operating points recommended by ISO/IEC 30107-3: the BPCER value when the APCER is fixed at 10% (BPCER10), 5% (BPCER20), and 1% (BPCER100).
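For clarity, the sketch below shows one way these error rates and operating points can be computed from the submitted scores, assuming higher scores mean bona fide (as in the submission format above); the function names are illustrative and are not the evaluation code itself.

```python
# Hedged sketch of the ISO/IEC 30107-3 error rates, assuming scores in [0, 1]
# where higher means "more bona fide".
import numpy as np


def apcer_bpcer(attack_scores, bonafide_scores, threshold):
    # APCER: fraction of attack samples accepted as bona fide (score >= threshold).
    apcer = np.mean(np.asarray(attack_scores) >= threshold)
    # BPCER: fraction of bona fide samples rejected as attacks (score < threshold).
    bpcer = np.mean(np.asarray(bonafide_scores) < threshold)
    return apcer, bpcer


def bpcer_at_apcer(attack_scores, bonafide_scores, target_apcer):
    # BPCER10/20/100: pick the threshold whose APCER is closest to the target
    # (10%, 5%, or 1%) and report the BPCER at that operating point.
    thresholds = np.unique(np.concatenate([attack_scores, bonafide_scores]))
    rates = np.array([apcer_bpcer(attack_scores, bonafide_scores, t) for t in thresholds])
    idx = np.argmin(np.abs(rates[:, 0] - target_apcer))
    return rates[idx, 1]


def eer(attack_scores, bonafide_scores):
    # EER: operating point where APCER and BPCER are (approximately) equal.
    thresholds = np.unique(np.concatenate([attack_scores, bonafide_scores]))
    rates = np.array([apcer_bpcer(attack_scores, bonafide_scores, t) for t in thresholds])
    idx = np.argmin(np.abs(rates[:, 0] - rates[:, 1]))
    return rates[idx].mean()
```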
To determine the winning team, an average ranking (AVRank) will be computed as follows:
AVRank = 0.2 * BPCER10 + 0.3 * BPCER20 + 0.5 * BPCER100
The team with the lowest AVRank will win the competition.
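For illustration, a minimal computation of the ranking with hypothetical error rates (in %):

```python
def av_rank(bpcer10, bpcer20, bpcer100):
    # Weighted average defined above; lower is better.
    return 0.2 * bpcer10 + 0.3 * bpcer20 + 0.5 * bpcer100

# Hypothetical example: BPCER10 = 4%, BPCER20 = 8%, BPCER100 = 20%
print(av_rank(4.0, 8.0, 20.0))  # 0.8 + 2.4 + 10.0 = 13.2
```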
The three top-ranked teams will be invited to participate as co-authors of the competition report paper*.
(*) This number may be extended in case of a large number of competitive participants.
Evaluation platform:
The evaluation platform is provided by Facephi solely for the purpose of the competition.
In the competition, the test set will not be pre-processed; it is up to each team to implement any pre-processing in their Docker image.
The test will be performed under stand-alone conditions, i.e., without an internet connection.