Masked Face Recognition Competition

IJCB-MFR-2021

Given the current COVID-19 pandemic, it is essential to enable contactless and smooth-running operations, especially in contact-sensitive facilities like airports. Face recognition is preferred as a contactless means of verifying identities. However, wearing masks is now essential to prevent the spread of contagious diseases and is currently enforced in public places in many countries. The performance of, and thus the trust in, contactless identity verification through face recognition can be impacted by the presence of a mask. The effect of wearing a mask on face recognition in a collaborative environment is therefore currently a sensitive issue.

This competition is the first to attract and present technical solutions that enhance the accuracy of masked face recognition, evaluated on real face masks in a collaborative face verification scenario.

All participants who achieve competitive results will be invited as co-authors of the MFR summary paper. The paper will be submitted to IJCB 2021.

Schedule:

  • 15 February - Call for participation, website release, and registration open

  • 5 May (extended) - Deadline for algorithm submission

  • 25 May (extended) - Announcement of results to participants

  • 31 May - Submission of the competition paper to IJCB 2021


The competition results and paper are released at: https://arxiv.org/abs/2106.15288

Many thanks to all the participants for the innovative solutions!

Congrats to the winners!

Database

Evaluation dataset

The evaluation data (MFR-Data) simulates a collaborative, yet varying, scenario, such as automatic border control gates or unlocking personal devices with face recognition, where the mask, illumination, and background can change. The database was collected at the hosting institute and is not publicly available. The data was collected on three different, not necessarily consecutive, days; we consider each of these days as one session. On each day, each participant collected three videos, each of a minimum length of 5 seconds (used as single image frames). The first session is considered the reference session, while the other two are considered probe sessions. Each day contained three types of captures: no mask, masked with natural illumination, and masked with additional illumination. The database participants were asked to remove eyeglasses only when the frame was considered very thick. To simulate realistic scenarios, no other restrictions were imposed, e.g. on the background or on the mask type and its consistency over days. An initial version of the database is described under this link.

Training dataset

Participants are free to use any data for training. Examples of training databases are VGGFace2, MS-Celeb-1M, and CASIA-WebFace. Participants may use synthetically generated masks, i.e. drawing a masked face onto the face image, in case they deem this essential for their training process. The mask generation method is described in the NIST report, and the implementation of mask generation is available under this link (a simplified illustration is sketched after the example images below).

An example of an unmasked face image

An example of a masked face image
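To illustrate the idea of synthetic mask generation, the sketch below draws a simple filled polygon over the lower face using the five landmark points provided in the landmarks file. This is a deliberate simplification for orientation only; the actual method from the NIST report and the linked implementation uses more elaborate mask shapes, colours, and coverage levels. The function name draw_simple_mask and the landmark dictionary layout are illustrative assumptions.

    # Illustrative simplification of synthetic mask generation (Python, OpenCV).
    # Not the method from the NIST report: it only fills a rough polygon over the lower face.
    import cv2
    import numpy as np

    def draw_simple_mask(image, landmarks, color=(210, 220, 230)):
        # image: BGR face image (numpy array)
        # landmarks: dict with 'left_eye', 'right_eye', 'nose',
        #            'mouth_left', 'mouth_right' -> (x, y) tuples
        lx, ly = landmarks["left_eye"]
        rx, ry = landmarks["right_eye"]
        nx, ny = landmarks["nose"]
        mlx, mly = landmarks["mouth_left"]
        mrx, mry = landmarks["mouth_right"]

        eye_y = (ly + ry) / 2.0
        top_y = int(ny - 0.4 * (ny - eye_y))                            # just below the eyes
        chin_y = min(image.shape[0] - 1, int(mly + 0.6 * (mly - ny)))   # below the mouth
        polygon = np.array([
            [mlx - 30, top_y],           # left cheek
            [mrx + 30, top_y],           # right cheek
            [mrx + 25, mry],             # right mouth corner area
            [(mlx + mrx) // 2, chin_y],  # chin
            [mlx - 25, mly],             # left mouth corner area
        ], dtype=np.int32)

        masked = image.copy()
        cv2.fillPoly(masked, [polygon], color)
        return masked

    # Example usage (paths and landmark values are placeholders):
    # img = cv2.imread("/path/reference_1.jpg")
    # lm = {"left_eye": (477, 254), "right_eye": (571, 253), "nose": (520, 286),
    #       "mouth_left": (484, 345), "mouth_right": (556, 344)}
    # cv2.imwrite("/path/reference_1_masked.jpg", draw_simple_mask(img, lm))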

Competition organizers

  • Fadi Boutros - Fraunhofer Institute for Computer Graphics Research IGD, Germany, and TU Darmstadt, Germany.

  • Dr. Naser Damer - Fraunhofer Institute for Computer Graphics Research IGD, Germany.

  • Dr. Kiran Raja - Norwegian University of Science and Technology, Norway.

  • Prof. Raghavendra Ramachandra - Norwegian University of Science and Technology, Norway.

  • Prof. Dr. Arjan Kuijper - Fraunhofer Institute for Computer Graphics Research IGD, Germany, and TU Darmstadt, Germany.

Details on the experimental protocol and result generation/submission procedure

  1. The participants will provide executables of their relevant algorithms. The submitted algorithm should be a Win32 or Linux console application.

  2. The executable file should accept three parameters: evaluation_list.txt, landmarks.txt, and output_path. For example,

    • team1.exe evaluation_list.txt landmarks.txt /output_dir/scores.txt

  3. The evaluation_list.txt text file contains pairs of paths to the reference and probe images together with mask labels, structured as follows:

    • /path/reference_1.jpg /path/probe_1.jpg label_r1 label_p1

    • where "/path/referene_1.jpg" and "/path/probe_1.jpg" are the path for reference and probe images, respectively. "label_r1" and "label_p1" are the mask labels for reference and probe images i.e. 0 is unmasked and 1 is masked.

    • For example:

    • /home/data/Reference/ID1540_d1_i1_30.jpg /home/data/Probe/ID1540_d2_i1_30.jpg 0 1

  4. The facial landmark points and bounding box for each image are available in the landmarks.txt text file. The file contains N rows. The first value in each row is the image identifier (full path), followed by the bounding box and the facial landmark points. The values are separated by a single space. The values in each row are structured as follows:

    • /path/reference_1.jpg bb_x1 bb_y1 bb_x2 bb_y2 left_eye_x left_eye_y right_eye_x right_eye_y nose_x nose_y mouth_left_x mouth_left_y mouth_right_x mouth_right_y

    • where bb_x1 and bb_y1 are the (x,y) coordinates of the top-left point of the bounding box, and bb_x2 and bb_y2 are the (x,y) coordinates of the opposite (bottom-right) point.

    • left_eye_x and left_eye_y are the (x,y) coordinates of the left eye landmark point.

    • right_eye_x and right_eye_y are the (x,y) coordinates of the right eye landmark point.

    • nose_x and nose_y are the (x,y) coordinates of the nose landmark point.

    • mouth_left_x and mouth_left_y are the (x,y) coordinates of the left mouth-corner landmark point.

    • mouth_right_x and mouth_right_y are the (x,y) coordinates of the right mouth-corner landmark point.

    • The corresponding landmark points and bounding boxes for an example reference-probe pair are:

    • /home/data/Reference/ID1540_d2_i1_30.jpg 428 172 611 396 477 254 571 253 520 286 484 345 556 344

    • /home/data/Probe/ID1540_d2_i2_30.jpg 407 162 619 418 471 268 557 259 531 307 483 353 564 355

  5. These examples are visualized in Figure 1 and Figure 2 below.

  6. The output of the participant's executable is a text file saved under the path "/output_dir/scores.txt". The scores.txt file should contain a comparison score for each pair in evaluation_list.txt. A minimal sketch of such a script is given after Figure 2 below.

  7. The main competition will be based on verification between non-masked references and masked probes. Additionally, the verification performance between masked references and masked probes will be evaluated and reported; however, it will not be considered for ranking the participants in the competition. Along with the submission, the participants will be requested to answer a set of questions regarding their submitted algorithms.

  8. Each team can submit up to two different solutions. If two models are submitted per team, the solution that achieves the better performance will be considered.


Figure 1: An example of an unmasked image with facial landmark points and bounding box

Figure 2: An example of a masked image with facial landmark points and bounding box
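The sketch below illustrates, for orientation only, the input/output behaviour expected from a submission: it reads evaluation_list.txt and landmarks.txt in the formats described above, crops each face with the provided bounding box, and writes one comparison score per pair to the given output path. The call "python team1.py ..." is an assumed Python equivalent of the "team1.exe ..." example, extract_embedding is a placeholder a team would replace with its own trained model, and writing one score per line to scores.txt is an assumption about the exact output layout.

    # Minimal sketch of a submission script following the described protocol:
    #   python team1.py evaluation_list.txt landmarks.txt /output_dir/scores.txt
    # extract_embedding() is a placeholder for the team's own face recognition model.
    import sys
    import cv2
    import numpy as np

    def load_landmarks(path):
        # Map image path -> (bounding box, 5 landmark points), as in landmarks.txt.
        entries = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if not parts:
                    continue
                values = list(map(float, parts[1:15]))
                bbox = values[0:4]                              # bb_x1, bb_y1, bb_x2, bb_y2
                points = np.array(values[4:14]).reshape(5, 2)   # eyes, nose, mouth corners
                entries[parts[0]] = (bbox, points)
        return entries

    def extract_embedding(image, bbox, points):
        # Placeholder: crop with the bounding box and return a feature vector.
        # A real submission would align the face (e.g. using the landmarks)
        # and run its trained model here.
        x1, y1, x2, y2 = map(int, bbox)
        crop = cv2.resize(image[y1:y2, x1:x2], (112, 112))
        return crop.astype(np.float32).flatten()

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-10))

    def main(evaluation_list, landmarks_file, output_file):
        landmarks = load_landmarks(landmarks_file)
        with open(evaluation_list) as pairs, open(output_file, "w") as out:
            for line in pairs:
                parts = line.split()
                if len(parts) < 4:
                    continue
                ref_path, probe_path = parts[0], parts[1]        # mask labels are in parts[2:4]
                ref_img, probe_img = cv2.imread(ref_path), cv2.imread(probe_path)
                ref_emb = extract_embedding(ref_img, *landmarks[ref_path])
                probe_emb = extract_embedding(probe_img, *landmarks[probe_path])
                out.write("%f\n" % cosine_similarity(ref_emb, probe_emb))

    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2], sys.argv[3])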

Description of the evaluation criteria (performance metrics) and available baseline implementations/code

The baseline performance evaluation will be based on the open-source implementation of the ArcFace model. The considered model architecture is LResNet100E-IR trained on the ms1m-refine-v2 database with the ArcFace loss function. The pre-trained model is available in the official ArcFace GitHub repository.

The evaluation of the algorithms will be based on both the verification performance and the compactness of the used model/models. The verification evaluation will be based on the verification performance of masked vs. not-masked verification pairs, as this is the common scenario where the reference is not masked while the probe is masked, e.g. at the entry to a secure access area. However, the performance of masked vs. masked verification pairs will also be reported in the competition paper.

The verification performance will be evaluated and reported as the false non-match rate (FNMR) at different operating points, FMR100, FMR1000, and ZeroFMR, which are the lowest FNMR for a false match rate (FMR) of at most 1.0%, at most 0.1%, and exactly 0%, respectively. The verification performance evaluation of the submitted algorithms will be based on FMR100. If different submitted algorithms achieve the same FMR100, FMR1000 will be considered; if still tied, the ranking will move to the separability between genuine and impostor comparisons, measured by the Fisher Discriminant Ratio (FDR). All submitted algorithms that outperform the baseline (ArcFace) verification performance at FMR100 will be considered competitive solutions.
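As a reference for the definitions above, the sketch below computes the three operating points and the FDR from lists of genuine and impostor comparison scores (higher score means more similar). It is not the official evaluation script, and the FDR is assumed to take its common form: the squared difference of the genuine and impostor score means divided by the sum of their variances.

    # Sketch of the verification metrics from genuine/impostor comparison scores
    # (higher score = more similar). Not the official evaluation code.
    import numpy as np

    def fnmr_at_fmr(genuine, impostor, max_fmr):
        # Lowest FNMR over all thresholds t whose FMR (fraction of impostor
        # scores >= t) does not exceed max_fmr. Simple O(N^2) scan for clarity.
        genuine = np.asarray(genuine, dtype=float)
        impostor = np.asarray(impostor, dtype=float)
        thresholds = np.unique(np.concatenate([genuine, impostor]))
        thresholds = np.append(thresholds, thresholds[-1] + 1.0)   # reject-all fallback
        for t in thresholds:                                       # ascending order
            if np.mean(impostor >= t) <= max_fmr:
                return float(np.mean(genuine < t))
        return 1.0

    def fdr(genuine, impostor):
        # Fisher Discriminant Ratio (assumed form): squared mean difference
        # over the sum of the variances of the two score distributions.
        g = np.asarray(genuine, dtype=float)
        i = np.asarray(impostor, dtype=float)
        return float((g.mean() - i.mean()) ** 2 / (g.var() + i.var()))

    # Example usage with placeholder score lists:
    # fmr100  = fnmr_at_fmr(genuine_scores, impostor_scores, 0.01)   # FMR100
    # fmr1000 = fnmr_at_fmr(genuine_scores, impostor_scores, 0.001)  # FMR1000
    # zerofmr = fnmr_at_fmr(genuine_scores, impostor_scores, 0.0)    # ZeroFMR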


To consider the deployability of the participating solutions, we will also take the compactness of the model (represented by the number of trainable parameters) into account in the final ranking. The participants may be asked to report the number of trainable parameters and to provide their solutions to validate this number.


The final team ranking will be based on a weighted Borda count, where the participants are ranked by (a) the verification metric described above (rank-a) and (b) the number of trainable parameters in their model/models (rank-b). Rank-a has a 75% weight and rank-b has a 25% weight. The Borda count of a team corresponds to the number of participants minus its rank. For example, if Team X is ranked first out of 10 participants in verification performance (rank-a Borda count = 9) and third out of 10 participants in model compactness (rank-b Borda count = 7), then the weighted Borda count = 0.75x9 + 0.25x7 = 8.5. Therefore, the final score of Team X is 8.5, and a higher score indicates a better solution. The participants may be asked to submit their model code to validate the number of trainable parameters of the submitted model.
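The worked example above can be reproduced with a short sketch; the ranks below match the example, and the function name is illustrative.

    # Weighted Borda count as in the example above (function name is illustrative).
    def weighted_borda(rank_a, rank_b, num_participants, w_a=0.75, w_b=0.25):
        # Borda count = number of participants - rank; a higher final score is better.
        return w_a * (num_participants - rank_a) + w_b * (num_participants - rank_b)

    # Team X: 1st of 10 in verification (rank-a), 3rd of 10 in compactness (rank-b)
    print(weighted_borda(rank_a=1, rank_b=3, num_participants=10))  # 0.75*9 + 0.25*7 = 8.5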

Registration

Registration for the competition can be done by email. If you would like to register, please send an email with the subject line "IJCB-MFR-2021" to fadi.boutros@igd.fraunhofer.de

Email should contain:

  1. Team members and affiliations.

  2. Main contact person and contact details (email, phone number, mailing address)

  3. Short biography of the main contact person.

Contact

For any inquiries, please do not hesitate to contact Fadi Boutros (fadi.boutros@igd.fraunhofer.de) or any of the organizers.