Guidelines for submitting your methods

The instructions on how your program should output results might be revised slightly in this second edition for the final submission, but they will remain unaltered for the intermediate submissions.


Submissions should be made to:  a.f.p.sequeira@reading.ac.uk

Algorithm submission

1.     Each team is invited to submit two executable programs:

"irisenroll_TeamName.exe" / “perienroll_TeamName.exe”  and "irismatch_TeamName.exe" / “perimatch_TeamName.exe(for iris recognition  and  for periocular recognition, respectively ) in the form of Windows console applications.

A brief description of the submitted algorithm is required for the competition report.


2.     The enrolment program

"irisenroll_TeamName.exe"/“perienroll_TeamName.exe”

is expected to read one image (near infrared or visible range) and produce the feature template of that image.


3.     The template matching program

"irismatch_TeamName.exe"/"perimatch_TeamName.exe"

reads two templates and outputs a matching score between 0 and 1, where 0 means no similarity and 1 means maximum similarity.


4.     Command-line instructions for the executables:

4.1.  > irisenroll_TeamName imagefile templatefile outputfile

        > perienroll_TeamName imagefile templatefile outputfile

where "imagefile" denotes the pathname of an iris image (e.g. "u000_nir_000.bmp"), "templatefile" denotes the pathname of the enrolled feature template (e.g. " u000_nir_000.dat"), and "outputfile" denotes the pathname of a text file (e.g. " u000_nir_000_enrol.txt"). If the content of "u000_nir_000_enrol.txt" is "1", it denotes a successful enrolment. If the content of "u000_nir_000_enrol.txt" is "0", it denotes a failure enrolment.

4.2.  > irismatch_TeamName templatefile1 templatefile2 outputfile  

        > perimatch_TeamName templatefile1 templatefile2 outputfile

where "templatefile1" and "templatefile2" denote the pathname of two iris feature templates, and "outputfile”. The “outputfile” denotes the pathname of a text indicating the matching score ranging from 0 to 1. Matching score is a floating point value ranging from 0 to 1 which indicates the similarity between the template1 and the template2. The score 0 means no similarity and 1 is the maximum similarity. If the algorithm fails to match, the output should be "-1".

 

Algorithm evaluation

1. The evaluation is made using the error metrics GFAR and GFRR, which consider both the failure-to-enrol (FTE) and failure-to-acquire (FTA) rates, in compliance with the international standard ISO/IEC 19795-1:2006.

    Generalized false accept rate and generalized false reject rate:

    GFAR = FMR * (1 – FTA) * (1 – FTE)

    GFRR = FTE + (1 – FTE) * FTA + (1 – FTE) * (1 – FTA) * FNMR

    (where FTA is the failure-to-acquire rate, FTE is the failure-to-enrol rate, FMR is the false match rate, and FNMR is the false non-match rate).
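As a worked illustration of the two formulas, the short C++ snippet below computes GFAR and GFRR from example values of FMR, FNMR, FTA and FTE; the numbers are invented for illustration and are not competition results.

    // Worked example of the GFAR/GFRR formulas above (example rates only).
    #include <cstdio>

    int main() {
        const double FMR  = 0.02;  // false match rate (example value)
        const double FNMR = 0.05;  // false non-match rate (example value)
        const double FTA  = 0.01;  // failure-to-acquire rate (example value)
        const double FTE  = 0.03;  // failure-to-enrol rate (example value)

        const double GFAR = FMR * (1.0 - FTA) * (1.0 - FTE);
        const double GFRR = FTE + (1.0 - FTE) * FTA
                                + (1.0 - FTE) * (1.0 - FTA) * FNMR;

        std::printf("GFAR = %.6f\nGFRR = %.6f\n", GFAR, GFRR);
        return 0;
    }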

   

The following performance indicators will be taken into account for algorithm comparison:

·       Generalized False Acceptance Rate (GFAR) and Generalized False Reject Rate (GFRR)

·       Failure-to-Enroll Rate (FTE) and Failure-to-Acquire Rate (FTA) 

·       Equal Error Rate (EER)

·       Average enrollment processing time and average template matching processing time

·       Maximum memory allocated for enrollment and matching

·       Average and maximum template size

2.     All possible cross-sensor intra-class comparisons are performed to evaluate the false non-match rate (FNMR). Regarding the inter-class comparisons, 3 samples are selected from each iris class for each sensor to evaluate the false match rate (FMR).

3.    Popular iris recognition performance metrics such as GFAR, GFRR and EER, together with ROC and DET curves, will be reported, and the "generalized F2" (GFRR @ GFAR = 0.01) will be used as the performance indicator to rank the submitted algorithms.
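For illustration only, the sketch below shows one possible way of reading the ranking indicator GFRR @ GFAR = 0.01 off a DET curve, by linear interpolation between (GFAR, GFRR) operating points obtained while sweeping the decision threshold. The operating points used here are invented example values, and the interpolation is an assumption rather than the official evaluation procedure.

    // Sketch: reading GFRR at a target GFAR off a DET curve by linear
    // interpolation (assumed procedure, illustrative data only).
    #include <cstddef>
    #include <cstdio>
    #include <utility>
    #include <vector>

    // Expects (GFAR, GFRR) operating points sorted by increasing GFAR.
    static double gfrr_at_gfar(const std::vector<std::pair<double, double>>& det,
                               double target_gfar) {
        if (det.empty()) return 1.0;
        if (det.front().first >= target_gfar) return det.front().second;
        for (std::size_t i = 1; i < det.size(); ++i) {
            if (det[i].first >= target_gfar) {
                const double x0 = det[i - 1].first, y0 = det[i - 1].second;
                const double x1 = det[i].first,     y1 = det[i].second;
                const double t  = (x1 == x0) ? 0.0 : (target_gfar - x0) / (x1 - x0);
                return y0 + t * (y1 - y0);
            }
        }
        return det.back().second;  // target lies beyond the measured range
    }

    int main() {
        // Hypothetical (GFAR, GFRR) operating points, not competition data.
        const std::vector<std::pair<double, double>> det = {
            {0.001, 0.20}, {0.005, 0.12}, {0.020, 0.06}, {0.100, 0.02}};
        std::printf("GFRR @ GFAR = 0.01: %.4f\n", gfrr_at_gfar(det, 0.01));
        return 0;
    }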

4.     During the competition, there will be two types of submissions: the intermediate submissions and the final submission.

·       The intermediate submissions (limited to two per team) are meant to stimulate interaction with the event, to give the participants feedback about their performance compared with the other participants, and to allow refinement of the algorithms. A ranked list will be published illustrating the relative position of the methods.

·       The final submission will be used to obtain the final results. With the final submission, participants must send a brief description of their method, which will be included in the publication related to the competition with due credit to the participants.

5.     The ranked list of intermediate results will be updated after the intermediate submissions.

6.     The evaluation of the proposed algorithms, in each intermediate submission, will be performed on a subset of the test set. The evaluation of the final submission will be made on the whole test set.

7.     The evaluation of each team for the final results will be done using the last algorithm submitted, unless the participant requests otherwise.

8.     The participants must send a brief description of their method, which will be included in the publication related to the competition. The brief must include an explanation of the method and the appropriate references. It must be submitted in order for the team to be included in the final results publication.