Efficient Face Recognition Competition

at IJCB 2023

Facial recognition is part of our daily lives and is used in many applications, including authorizing financial transfers on our personal devices, logging into different devices and accounts, performing automatic border control, and even personalizing our driving experience. Developing efficient biometric solutions is essential to minimize the required computational cost, especially when deployed on embedded and low-end devices. However, most current face recognition systems use deep neural networks with an extremely large number of trainable parameters. The high computational effort required limits the applicability of these systems, whether because deployment devices operate in a shared-resource environment, in which computing capacity and battery are shared with other smart applications, or because users require instant decisions. Therefore, there is a great need for face recognition systems that offer high performance at the lowest possible computational cost.

The aim of the EFaR-2023 competition is to attract and showcase the latest innovations in efficient and lightweight face recognition and to motivate the development of novel techniques.

The final competition paper will be submitted to the IEEE/IAPR International Joint Conference on Biometrics 2023.

Schedule

Registration

To register for participation please send an email with the title "IJCB-2023-EFaR" to jan.niklas.kolf@igd.fraunhofer.de.
Your registration should contain the following:

Competition Organizers

Jan Niklas Kolf

Fraunhofer IGD, Germany
TU Darmstadt, Germany

Fadi Boutros

Fraunhofer IGD, Germany

Naser Damer

Fraunhofer IGD, Germany
TU Darmstadt, Germany

Competition Details

Training Data

Participants are free to choose their training data. However, these databases must be publicly accessible, and the authors must have a license to use them (when required by the data creators). The participants take full responsibility for ensuring the proper legal use of the data. An example of such a database is WebFace42M.

It is not allowed to use common evaluation benchmarks for training: 

Testing Data

All models will be evaluated by the competition organizers.

Image alignment and preprocessing:  

The organizers will provide two options for the participants:

1) Option 1: By default, all evaluation images are aligned and cropped by the organizers. 

This includes the following: 

The fixed landmark points have the following values: 

Left eye point (x,y): (38.2946, 51.6963)

Right eye point (x,y): (73.5318, 51.5014)

Nose point (x,y): (56.0252, 71.7366)

Left mouth point (x,y): (41.5493, 92.3655)

Right mouth point (x,y): (70.7299, 92.2041)
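Detected landmarks can be mapped onto these fixed points with a similarity transform. The sketch below estimates such a transform with Umeyama's method; it is one common choice, not necessarily the organizers' implementation, and the 112x112 target crop size is an assumption (these reference coordinates are typically used with 112x112 crops).

```python
import numpy as np

# The five reference landmark positions from the competition description.
# Assumption: they refer to a 112x112 aligned crop.
TEMPLATE = np.array([
    [38.2946, 51.6963],   # left eye
    [73.5318, 51.5014],   # right eye
    [56.0252, 71.7366],   # nose
    [41.5493, 92.3655],   # left mouth corner
    [70.7299, 92.2041],   # right mouth corner
])

def similarity_transform(src, dst):
    """Estimate the 2x3 similarity transform (Umeyama) mapping src -> dst.

    src, dst: (N, 2) arrays of corresponding landmark points.
    Returns M = [s*R | t] such that dst ~= src @ (s*R).T + t.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    src_c = src - src_mean
    dst_c = dst - dst_mean
    # Cross-covariance between the centered point sets
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Reflection guard: force a proper rotation (det = +1)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t[:, None]])
```

The returned 2x3 matrix can then be passed to a warping routine such as OpenCV's cv2.warpAffine with a (112, 112) output size to produce the aligned crop.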


2) Option 2: Participants can perform the alignment using their own solution. In this case, the organizers will provide raw images (without alignment and cropping), a bounding box, and five landmark points for each of the input images. 

If the participants opt to use this solution, then they should provide a separate script/executable to perform the alignment. 

This script should take two parameters: 1) an input text file that contains, in each line, the image path, bounding box, and five landmark points; 2) an output saving directory.

Example of the required script:

team1_alignment.sh images_with_landmarks.txt /data/db/team1/

All preprocessed images should be saved in the output saving directory with the exact input image names and file extensions.

The bounding box is defined by its top-left corner point (x,y), width, and height.

The five landmark points are: left eye point (x,y), right eye point (x,y), nose point (x,y), left mouth point (x,y), and right mouth point (x,y).

The input text file has the following format (a single space is used as a separator):

/input/image1.png boundingbox_corner_x boundingbox_corner_y boundingbox_width boundingbox_height left_eye_x left_eye_y right_eye_x right_eye_y nose_x nose_y mouth_left_x mouth_left_y mouth_right_x mouth_right_y
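A minimal sketch of parsing one such line (the function name parse_landmark_line is hypothetical; it assumes image paths contain no spaces):

```python
def parse_landmark_line(line):
    """Parse one line of the landmarks file into path, bbox, and landmarks.

    Field layout, per the competition description (space-separated):
    image path, bbox x/y/width/height, then five (x, y) landmark pairs.
    """
    fields = line.split()
    path = fields[0]
    bbox = tuple(float(v) for v in fields[1:5])        # x, y, width, height
    coords = [float(v) for v in fields[5:15]]
    landmarks = list(zip(coords[0::2], coords[1::2]))  # five (x, y) pairs
    return path, bbox, landmarks
```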

Template Extraction:

The submitted solution should read a list of images from a local folder and output a template, i.e., a feature representation, for each input image. The submitted solution should take two parameters: 1. image_list.txt, listing the images to read from a local folder; 2. the output folder to save the extracted feature representations.

For example:

team1.sh image_list.txt /data/EFAR/output/fx/team1
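A minimal sketch of the extraction loop behind such a script (the names extract_templates and embed_fn are hypothetical placeholders for the team's own code, and the .npy output format is an assumption, as the required feature file format is not specified here):

```python
import os
import numpy as np

def extract_templates(image_list_path, output_dir, embed_fn):
    """Read image paths from image_list_path and save one feature vector
    per image to output_dir, named after the image (as .npy files).

    embed_fn: the team's own inference function, mapping an image path
    to a feature vector (placeholder here).
    """
    os.makedirs(output_dir, exist_ok=True)
    with open(image_list_path) as f:
        for line in f:
            image_path = line.strip()
            if not image_path:
                continue
            feature = np.asarray(embed_fn(image_path), dtype=np.float32)
            name = os.path.splitext(os.path.basename(image_path))[0]
            np.save(os.path.join(output_dir, name + ".npy"), feature)
```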

The following instructions should be strictly followed:

Template Matching
The competition organizers will run the pairwise comparisons using the previously extracted features and calculate the required performance metrics. Cosine similarity is used as the similarity metric. Participants who wish to use a different metric should provide an additional Python script for its calculation.
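For reference, cosine similarity between two extracted templates can be computed as:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```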


Evaluation Criteria and Ranking

Since the focus of the competition is on efficient face recognition models, the main criteria by which the submissions are ranked are accuracy, FLOPs, and model size in MB. The models will be ranked by Borda count, with 70% of the weighting attributed to accuracy, 15% to FLOPs, and 15% to model size in MB. Pruning, quantization, or any other model compression techniques for reducing the model size and FLOPs are allowed.
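A sketch of how such a weighted Borda count could work (the exact point scheme used by the organizers is not specified here; this assumes the classic scheme in which the best of n entries receives n-1 points and the worst 0, and ties are broken arbitrarily):

```python
def weighted_borda_rank(entries, weights, higher_is_better):
    """Rank entries by a weighted Borda count.

    entries:          {team: {criterion: value}}
    weights:          {criterion: weight}, e.g.
                      {"accuracy": 0.70, "flops": 0.15, "size_mb": 0.15}
    higher_is_better: {criterion: bool} (True for accuracy,
                      False for FLOPs and model size)
    Returns the team names ordered from best to worst.
    """
    teams = list(entries)
    n = len(teams)
    totals = {t: 0.0 for t in teams}
    for crit, w in weights.items():
        ordered = sorted(teams, key=lambda t: entries[t][crit],
                         reverse=higher_is_better[crit])
        # Best entry gets n-1 points, worst gets 0, scaled by the weight.
        for points, team in zip(range(n - 1, -1, -1), ordered):
            totals[team] += w * points
    return sorted(teams, key=totals.get, reverse=True)
```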

There are two categories of models, each of which gets its own ranking:

Models with more than 5 million parameters are not considered efficient enough for this competition.

Sample code for calculating the FLOPs, number of parameters, and model size is available for PyTorch in this repository:

https://github.com/jankolf/pytorch_flops

The top-3 ranked teams in each category will be invited as co-authors of the competition report paper (this number will be extended in case of a large number of competitive participants).

Submission

Each team may submit two entries per category. The best model will be included in the team's ranking, but both will be evaluated and reported. The models must be executable independently, without further setup and without internet access. If Python is used, one option for creating an executable file is PyInstaller. The executable or .sh file should accept a landmarks file and the output directory as arguments, for example: team1_alignment.sh images_with_landmarks.txt /data/db/team1/


The executable file must be runnable on Ubuntu 22.04.

The teams can upload their data as a ZIP file to a cloud provider of their choice; the file must be accessible from Germany without account registration.

The deadline for submitting models is April 30th.
Each team also needs to submit information about the training data used, including their license to use the data, as well as the training protocol and setup, by April 30th. The top performing models will be retrained and reevaluated by the competition organizers.

Contact

In case of questions or clarifications please contact

Jan Niklas Kolf
jan.niklas.kolf@igd.fraunhofer.de

or any of the organizers.