Welcome to the IJCB 2017 competition on generalized face presentation attack detection in mobile authentication scenarios

It has been a while since the successful first and second competitions on countermeasures to 2D facial spoofing attacks were organized, so it is time to evaluate the state of the art in face presentation attack detection (PAD) and to drive the research community to come up with novel ideas for tackling the problem of spoofing. The focus of this competition is on assessing the generalization abilities of different face PAD approaches in mobile authentication scenarios across different mobile devices, illumination conditions and presentation attack instruments (PAI). The competition will be part of the International Joint Conference on Biometrics (IJCB 2017), which will be organized in Denver, Colorado, USA.

Motivation

The vulnerabilities of face-based biometric systems to presentation attacks have finally been recognized, yet we still lack generalized software-based face PAD methods that perform robustly in practical mobile authentication scenarios. In recent years, many face PAD methods have been proposed and impressive results have been reported on the existing benchmark datasets (including the databases used in the previous competitions). Recent studies have shown, however, that most of these methods are unable to generalize well in more realistic application scenarios, thus face PAD is still an unsolved problem in unconstrained and unsupervised operating conditions. The aim of this competition is to compare and evaluate the generalization abilities of face PAD methods under real-world variations in acquisition device, PAI and illumination.

Database description

The Oulu-NPU face anti-spoofing database consists of 4950 real and fake face videos. These videos were recorded using the front cameras of six mobile devices (Samsung Galaxy S6 edge, HTC Desire EYE, MEIZU X5, ASUS Zenfone Selfie, Sony XPERIA C5 Ultra Dual and OPPO N3) in three sessions with different illumination conditions (Session 1, Session 2 and Session 3). The presentation attacks were created using four PAIs: two printers (Printer 1 and Printer 2) and two display devices (Display 1 and Display 2). Figure 1 shows some samples of real and attack face images captured with the Samsung Galaxy S6 edge phone. The videos of the 55 subjects were divided into three subject-disjoint subsets for training, development and testing. The following table gives a detailed overview of the partitioning of this database.

Figure 1: Sample images of real and attack videos (Real, Print 1, Print 2, Replay 1, Replay 2) captured with the Samsung Galaxy S6 edge phone.

 Subset        Users   Real access   Print attacks   Video attacks   Total
 Training      20      360           720             720             1800
 Development   15      270           540             540             1350
 Test          20      360           720             720             1800

Evaluation protocol

For the evaluation of the generalization capability of the face PAD methods, four protocols are used.

Protocol I:

The first protocol is designed to evaluate the generalization of the face PAD methods under different illumination conditions. As the database is recorded in three sessions with different illumination conditions, the train, development and evaluation sets are constructed using video recordings taken in different sessions.

Protocol II:

The second protocol is designed to evaluate the effect of PAI variation on the performance of the face PAD methods, since different PAIs produce different artifacts. The effect of PAI variation is assessed by introducing previously unseen PAIs in the test set.

Protocol III:

One of the critical issues in face anti-spoofing, and in image classification in general, is generalization across different camera devices. To study the effect of this variation, we design a Leave One Camera Out (LOCO) protocol. In each iteration, we use the real and the attack videos recorded with five camera devices to train and tune the countermeasure model. Then, we evaluate the performance using the videos recorded with the remaining camera.
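To make the rotation concrete, the following Python sketch illustrates the LOCO loop. The video record structure and the train/evaluation functions are hypothetical placeholders, not part of the competition kit:

# Sketch of the Leave One Camera Out (LOCO) rotation used in Protocol III.
# train_fn and eval_fn are placeholders for a PAD training and evaluation
# routine; they are NOT part of the competition toolkit.
CAMERAS = ["Samsung Galaxy S6 edge", "HTC Desire EYE", "MEIZU X5",
           "ASUS Zenfone Selfie", "Sony XPERIA C5 Ultra Dual", "OPPO N3"]

def loco_evaluation(videos, train_fn, eval_fn):
    """videos: list of dicts with at least a 'camera' field and a label."""
    results = {}
    for held_out in CAMERAS:
        train_videos = [v for v in videos if v["camera"] != held_out]  # five cameras
        test_videos = [v for v in videos if v["camera"] == held_out]   # remaining camera
        model = train_fn(train_videos)
        results[held_out] = eval_fn(model, test_videos)
    return results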

Protocol IV:

In the last and most challenging protocol, all three factors are considered simultaneously, and the generalization of the face PAD methods is evaluated across previously unseen illumination conditions, PAIs and input sensors.

The following table gives detailed information about the video recordings used in the train, development and test sets of each protocol. P refers to printer and D refers to display.

Submission of the results

Each competitor should submit two score files: one for the development set and one for the anonymized test set. The score files should be organized in two columns as follows:

 File_name_1   score_1
 File_name_2   score_2
 ...
 File_name_n   score_n

where File_name_i is the name of the video file and score_i is its corresponding score.
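As an illustration, a score file in this two-column format could be written with a few lines of Python; the video file names and score values below are hypothetical examples, not taken from the database:

# Minimal sketch: write PAD scores in the required two-column format.
# File names and score values are hypothetical examples.
scores = {
    "video_001.avi": 0.93,   # e.g. a bona fide presentation
    "video_002.avi": 0.07,   # e.g. a presentation attack
}

with open("dev_scores.txt", "w") as score_file:
    for file_name, score in scores.items():
        score_file.write("{} {:.6f}\n".format(file_name, score))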

Evaluation

For the performance evaluation, we selected the recently standardized ISO/IEC 30107-3 metrics: Attack Presentation Classification Error Rate (APCER) and Bona Fide Presentation Classification Error Rate (BPCER). In principle, these two metrics correspond to the False Acceptance Rate (FAR) and False Rejection Rate (FRR) commonly used in the PAD literature. However, unlike the FAR and FRR, the APCER and BPCER take the attack potential into account in terms of an attacker's expertise, resources and motivation in the "worst case scenario". To be more specific, the APCER is computed separately for each PAI (e.g. print or display), and the overall PAD performance corresponds to the attack with the highest APCER, i.e. the most successful PAI. This indicates how easily a biometric system can be fooled by exploiting its most vulnerable point. To summarize the overall system performance in a single value, we use the Average Classification Error Rate (ACER), which is the average of the APCER and the BPCER.
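For illustration, the three metrics can be computed from per-video scores along the following lines. This is a minimal sketch under the assumption that higher scores indicate bona fide presentations; it is not the official evaluation script:

import numpy as np

def pad_metrics(scores, labels, pai_types, threshold):
    """Compute APCER, BPCER and ACER at a given decision threshold.

    scores:    PAD scores, higher = more likely bona fide (assumption)
    labels:    1 for bona fide presentations, 0 for attacks
    pai_types: PAI identifier per sample (e.g. 'print' or 'display')
    threshold: scores >= threshold are accepted as bona fide
    """
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    pai_types = np.asarray(pai_types)
    accepted = scores >= threshold

    bona_fide = labels == 1
    # BPCER: proportion of bona fide presentations incorrectly rejected.
    bpcer = np.mean(~accepted[bona_fide])

    # APCER: computed separately per PAI; the overall value is the worst case.
    apcer_per_pai = {pai: np.mean(accepted[(~bona_fide) & (pai_types == pai)])
                     for pai in np.unique(pai_types[~bona_fide])}
    apcer = max(apcer_per_pai.values())

    # ACER: average of the worst-case APCER and the BPCER.
    acer = (apcer + bpcer) / 2.0
    return apcer, bpcer, acer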

A Matlab implementation of a baseline face PAD method will be made available when the training and development sets are released.

How to participate

  1.  Register in the competition.
  2.  Download and sign the End User License Agreement (EULA).
  3.  Send the signed EULA to: zboulken(at)ee.oulu.fi .
  4.  Download the training and the development sets (now available).
  5.  Download the evaluation set (now available).
  6.  Send your results before 15/04/2017 (11:59 PM PDT).

Important dates

  • 01/02/2017:  Release of the training and the development sets.
  • 01/04/2017:  Release of the evaluation set.
  • 15/04/2017:  Submission of the results.
  • 22/04/2017:  Submission of the algorithm descriptions.

Organizers

  • Zinelabidine Boulkenafet, University of Oulu, Finland, Email: zboulken(at)ee.oulu.fi
  • Jukka Komulainen, University of Oulu, Finland, Email: jukmaatt(at)ee.oulu.fi
  • Zahid Akhtar, INRS-EMT, University of Quebec, Canada, Email: zahid.eltc(at)gmail.com
  • Abdenour Hadid, University of Oulu, Finland, Email: hadid(at)ee.oulu.fi