Face Morphing Attack Detection Based on Privacy-aware Synthetic Training Data

Competition at International Joint Conference on Biometrics (IJCB) 2022:

IJCB-SYN-MAD-2022

A face morphing attack aims at creating images that can be matched, automatically or by human experts, to the faces of more than one individual. When morphing attacks are used in travel or identity documents, they allow multiple subjects to be verified against one document. This faulty subject link can enable a wide range of illegal activities and has led to the development of morphing attack detection (MAD) algorithms. However, most existing MAD solutions are based on bona fide and attack images of real individuals, which raises various privacy concerns and limits the amount of publicly available data for research.

This competition is the first to attract and present technical solutions that enhance the accuracy of morphing attack detection, and it is the first presentation attack detection competition to be restricted to synthetic training data.

All participants who achieve competitive results will be invited as co-authors of the competition summary paper, which will be submitted to the International Joint Conference on Biometrics (IJCB) 2022.

News & Updates

09. May 2022: Submission and Registration Deadline Extended! Following the extension of the IJCB paper deadline, we extend the submission deadline (17. May) and, subsequently, the registration deadline (15. May).

06. May 2022: We added an alternative version of the requirements.txt file for Python 3.8. Both versions can be used for the competition.

03. May 2022: We added torchvision and torchaudio to the requirements.txt. We also clarify the CUDA version used:


nvcc: NVIDIA (R) Cuda compiler driver

Copyright (c) 2005-2021 NVIDIA Corporation

Built on Sun_Aug_15_21:14:11_PDT_2021

Cuda compilation tools, release 11.4, V11.4.120

Build cuda_11.4.r11.4/compiler.30300941_0

25. April 2022: Submission and Registration Deadline Extended! Following some of your requests, we extend the submission deadline (9. May) and, subsequently, the registration deadline (6. May).

25. April 2022: The requirements.txt is now correctly updated.

22. April 2022: Upon request, the insightface library (https://pypi.org/project/insightface/) has been added to the requirements.txt. We would like to emphasize that no pre-trained weights or models may be used, except for preprocessing (e.g. landmark detection).

19. April 2022: Upon request, the MATLAB Runtime versions R2017b and R2022a are now also allowed. Additional instructions regarding retraining may need to be provided.

24. March 2022: The GPUs used during re-training have been specified. Furthermore, two .txt files have been added containing the bounding boxes and facial landmark points for the training data (bbox_morph.txt, bbox_bf.txt). The same script will be used for the testing data. More details are presented below.

Schedule

  • 16. March 2022 Call for participation, website release, and registration open.

  • 15. May 2022 Deadline for registration

  • 17. May 2022 Deadline for algorithm submission

  • 25. May 2022 Announcement of the results to the participants

  • 31. May 2022 Submission of competition summary paper to IJCB 2022

Competition Organizers

  • Marco Huber - Fraunhofer Institute for Computer Graphics Research IGD, Germany and TU Darmstadt, Germany

  • Fadi Boutros - Fraunhofer Institute for Computer Graphics Research IGD, Germany and TU Darmstadt, Germany

  • Kiran Raja - Norwegian University of Science and Technology, Norway

  • Raghavendra Ramachandra - Norwegian University of Science and Technology, Norway

  • Naser Damer - Fraunhofer Institute for Computer Graphics Research IGD, Germany and TU Darmstadt, Germany

Synthetic Training Dataset

The dataset contains 15 000 face morphing attacks and 25 000 bona fide images. All morphed and bona fide images are based on synthetically generated face images created from randomly drawn latent vectors. As the dataset is based on synthetic identities, it does not pose any privacy concerns [2]. Some examples of face morphing attacks and bona fide images are shown to the right.

The training data is exclusively limited to the provided synthetic data (no pre-trained weights are allowed). The use of any other data is prohibited and will result in exclusion from the competition. To verify this, the training procedure must also be submitted as an executable file so that the organizers can retrain the solution.

The training dataset will be provided at registration.

Authentic Evaluation Dataset

The evaluation data will contain bona fide and attack samples and will focus on measuring the generalizability of the submitted MAD solutions over multiple variations in the attacks.

The evaluation data will not be released to the participants and aims at reflecting realistic scenarios, e.g. identity and travel documents.


Registration

Registration for the competition can be done by e-mail. The training data will be provided at registration.

If you would like to register, please send an e-mail with subject "IJCB-SYN-MAD-2022" to marco.huber@igd.fraunhofer.de

The e-mail should also contain:

  1. team name

  2. list of team members and affiliations

  3. main contact person including contact details (e-mail, phone number, mailing address)

  4. Short biography of the main contact person

*To ensure competition fairness, participating teams are not allowed to include team members affiliated with any of the organizing institutes (i.e. Fraunhofer IGD, TU Darmstadt, and NTNU).


Submission Procedure

The submission includes two files: the evaluation executable and the training executable. Both should be Win32 or Linux console applications.

Each team can submit up to two solutions. For the re-training of the models, we limit the training to 10 GPU hours per submission. (Used GPUs: GeForce RTX 2080 Ti)

We provide a requirements.txt (v3, 2022-05-03) file that should be used to set up a virtual environment. The same file will be used during the re-training of the models. An alternative requirements.txt (v4, 2022-05-06) for Python 3.8 is also available.

If other libraries are used or required, this must be approved in advance; in this case, please write an e-mail to marco.huber@igd.fraunhofer.de. In case of approval, the requirements.txt will be extended.

Note: the requirements.txt file might get extended during the submission process! Current Version: v3, 2022-05-03

Update 1: The MATLAB Runtime Version R2017b and R2022a are also allowed to be used (Linux preferred). (2022-04-19)

Update 2: The insightface library has been added to the requirements.txt (2022-04-22)

Update 3: torchvision and torchaudio have been added to the requirements.txt (2022-05-03)

Landmarks

(New) We also provide two .txt files containing the bounding boxes and facial landmark points: bbox_morph.txt and bbox_bf.txt. Not all provided faces were detected correctly, which is why some images do not appear in the .txt files. Each line of the files contains one sample, e.g.:

./m15k_t/morphed_img000184_img282038.png 43 197 41 240 97 123 161 123 133 156 95 182 158 183

The values in each row are structured as follows:

/path/reference_1.jpg bb_x1 bb_x2 bb_y1 bb_y2 left_eye_x left_eye_y right_eye_x right_eye_y nose_x nose_y mouth_left_x mouth_left_y mouth_right_x mouth_right_y

where x1, x2, y1, and y2 are the bounding box points, and all other numbers refer to the respective coordinates of the facial landmarks (left eye, right eye, nose, mouth left, mouth right).
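Under the field order described above, one line of these files can be parsed as follows. This is a minimal sketch; the function and field names are illustrative and not part of the competition interface:

```python
def parse_annotation_line(line):
    """Parse one annotation line: image path, four bounding-box values,
    then five (x, y) landmark pairs, in the order described above."""
    parts = line.split()
    path = parts[0]
    values = [int(v) for v in parts[1:]]
    bbox = {"x1": values[0], "x2": values[1], "y1": values[2], "y2": values[3]}
    names = ["left_eye", "right_eye", "nose", "mouth_left", "mouth_right"]
    landmarks = {n: (values[4 + 2 * i], values[5 + 2 * i])
                 for i, n in enumerate(names)}
    return path, bbox, landmarks
```

Applied to the example line above, this yields the bounding box (43, 197, 41, 240) and, for instance, (97, 123) as the left-eye coordinates.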


1. Evaluation Executable

  • The evaluation executable should accept four parameters:

    1. the path to the model (--model)

    2. the path to the evaluation_list.txt (--evaluation)

    3. the path to the bounding_box.txt (--bbox)

    4. the output path of the scores (--output)

Example: eval_team23.exe --model "model/model_23" --evaluation "evaluation_list.txt" --bbox "bounding_box.txt" --output "/output/scores.txt"

  • The model should be loaded from the given path so that the organizers can load the model they have retrained (if necessary)

  • The evaluation_list.txt contains the path to one testing image per line

Example: ./test/xyz.png

./test/abc.png

  • The bounding_box.txt contains, in each line, the image path followed by the bounding box and facial landmark points of one testing image. The structure is as described above.

Example: ./test/xyz.png 43 197 41 240 97 123 161 123 133 156 95 182 158 183

./test/abc.png 43 199 38 253 95 126 161 125 136 166 95 189 158 188

  • The output path specifies the location of the scores file. Each line of the score file should report an evaluated image and its score. The score file has to be saved in .txt format.

Example: ./test/xyz.png 0.833

./test/abc.png 0.238


2. Training Executable

  • The training executable should accept five parameters:

  1. the batch size (--batch_size)

  2. the used GPUs (--gpus)

  3. the path to the training data (--train)

  4. the path to the bounding_box.txt (--bbox)

  5. the output paths of the trained model(s) (--output)

Example: train_team23.exe --batch_size 128 --gpus "0,1,2,3" --train "/data_train/" --bbox "/bounding_box.txt" --output "/model/team23/"

  • The structure of the training data/folder should not be altered.
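The command-line interface of the training executable could be sketched as follows. The flag names match the list above; `train` is only a placeholder for a team's actual training loop, and the helper names are assumptions:

```python
import argparse

def parse_args(argv=None):
    # The five required flags of the training executable.
    parser = argparse.ArgumentParser()
    parser.add_argument("--batch_size", type=int, required=True)
    parser.add_argument("--gpus", required=True)    # e.g. "0,1,2,3"
    parser.add_argument("--train", required=True)   # training data folder
    parser.add_argument("--bbox", required=True)    # bounding_box.txt
    parser.add_argument("--output", required=True)  # where to save model(s)
    return parser.parse_args(argv)

def train(args):
    """Placeholder for a team's actual training loop; it must stay
    within the 10-GPU-hour budget and save the trained model(s)
    under args.output without altering the training data folder."""
    gpu_ids = [int(g) for g in args.gpus.split(",")]
    return gpu_ids
```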


Description of the Evaluation Criteria

The evaluation will be based on the morphing attack detection performance following the ISO/IEC 30107-3 standard. It will be based on the Bona fide Presentation Classification Error Rate (BPCER) at a fixed Attack Presentation Classification Error Rate (APCER). To cover different operating points, the BPCER at four different fixed APCER values will be reported: 0.1%, 1.0%, 10%, and 20%. The final ranking of the submissions will be based on the BPCER at 20% APCER. In case of equal performance, the lower APCER operating points will be considered.
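As a concrete illustration of this metric, the sketch below computes the BPCER at a fixed APCER from two score lists. It assumes higher scores indicate attacks; that orientation, and the function name, are assumptions made for the example rather than part of the standard's notation:

```python
def bpcer_at_apcer(bona_fide_scores, attack_scores, apcer_target):
    """Return the BPCER at the largest decision threshold whose APCER
    does not exceed apcer_target (higher score = more likely attack)."""
    attack = sorted(attack_scores)
    # APCER at threshold t = fraction of attacks scoring below t
    # (attacks misclassified as bona fide). Allowing floor(target * N)
    # misses gives the largest admissible threshold.
    n_allowed = min(int(apcer_target * len(attack)), len(attack) - 1)
    threshold = attack[n_allowed]
    # BPCER = fraction of bona fide samples classified as attacks.
    misclassified = sum(1 for s in bona_fide_scores if s >= threshold)
    return misclassified / len(bona_fide_scores)
```

For example, with attack scores [0.3, 0.5, 0.7, 0.8, 0.9] and bona fide scores [0.1, 0.2, 0.4, 0.6], the BPCER at 20% APCER is 0.25 (one of four bona fide samples exceeds the threshold 0.5).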

The organizers also provide a baseline approach against which the competitors can compete. The baseline approach utilizes MixFaceNet-MAD [1,2]. More details on the baseline model (incl. the pre-trained model) can be obtained from this paper/repository.

Contact

For any inquiries, please do not hesitate to contact Marco Huber (marco.huber@igd.fraunhofer.de) or any of the other organizers.

[1] Fadi Boutros, Naser Damer, Meiling Fang, Florian Kirchbuchner, Arjan Kuijper. MixFaceNets: Extremely Efficient Face Recognition Networks. IJCB 2021

[2] Naser Damer, César Augusto Fontanillo López, Meiling Fang, Noémie Spiller, Minh Vu Pham, Fadi Boutros. Privacy-friendly Synthetic Data for the Development of Face Morphing Attack Detectors. CVPR Workshops 2022