ICPR 2022 Workshop on Fairness in Biometric Systems
August 21, 2022
In recent years, biometric systems have spread worldwide and are increasingly involved in critical decision-making processes, such as in finance, public security, and forensics. Despite their growing effect on everybody's daily life, many biometric solutions perform markedly differently on different groups of individuals, as previous works have shown. Consequently, the recognition performance of these systems depends strongly on demographic and non-demographic attributes of their users. This results in discriminatory and unfair treatment of the users of these systems.
At the same time, several political regulations emphasize the importance of the right to non-discrimination. These include Article 14 of the European Convention on Human Rights, Article 7 of the Universal Declaration of Human Rights, and Recital 71 of the General Data Protection Regulation (GDPR). These political efforts underline the strong need for analyzing and mitigating equability concerns in biometric systems.
Current work on this topic focuses on demographic fairness in face recognition systems. However, given the growing effect of biometrics on everybody's daily life and the increased social interest in this topic, research on fairness in biometric solutions more broadly is urgently needed. The workshop therefore aims at:
Developing and analyzing biometric datasets.
Proposing metrics related to equability in biometrics.
Analyzing demographic and non-demographic factors in biometric systems.
Investigating and mitigating equability concerns in biometric algorithms, including:
Identity verification and identification
Soft-biometric attribute estimation
Presentation attack detection
Biometric image generation
Topics (not limited to):
Datasets designed for the evaluation and development of fair biometric solutions.
Demographic and non-demographic fairness concerns.
Differential performance and outcome in biometric systems.
Estimation of equability in biometric systems.
Explainability and transparency in biometrics.
Explainability-aware and equability-mitigating biometric solutions.
Evaluating and mitigating equability issues in biometric solutions, including identity recognition, soft-biometric attribute estimation, presentation attack detection, and quality assessment.
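To make the "differential performance" topic above concrete, the following minimal Python sketch compares false non-match rates (FNMR) across demographic groups and reports their max/min ratio, a simple summary of performance differentials. The function names and toy scores are our own illustrative assumptions, not an official workshop metric:

```python
# Hedged sketch: function names and toy data are illustrative assumptions.

def fnmr_per_group(genuine_scores, threshold):
    """False non-match rate (FNMR) per demographic group.

    genuine_scores maps each group label to the similarity scores of its
    genuine (same-identity) comparisons; a genuine pair is a false
    non-match when its score falls below the decision threshold.
    """
    return {
        group: sum(s < threshold for s in scores) / len(scores)
        for group, scores in genuine_scores.items()
    }

def max_min_ratio(rates):
    """Max/min ratio of per-group error rates; 1.0 means identical rates."""
    values = list(rates.values())
    return max(values) / min(values)

# Toy data: two hypothetical demographic groups A and B.
scores = {"A": [0.9, 0.8, 0.4, 0.3], "B": [0.9, 0.7, 0.4, 0.8]}
rates = fnmr_per_group(scores, threshold=0.5)   # {"A": 0.5, "B": 0.25}
print(max_min_ratio(rates))                      # prints 2.0
```

A ratio near 1.0 indicates similar genuine-comparison error rates across groups; larger values indicate a performance differential that a fairness analysis would need to explain or mitigate.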
This workshop is part of the 26th International Conference on Pattern Recognition (ICPR 2022). Accepted articles will appear in the ICPR workshop proceedings.
Yevgeniy B. Sirotin (Maryland Test Facility / IDSL / SAIC)
Demographic Differentials in Face Recognition Systems: Applied Research Challenges
Widespread deployments of face recognition systems have raised public concerns regarding the fairness of these technologies. Some have claimed that these systems are biased and do not work for individuals belonging to specific groups protected by United States and European Union laws. Others have claimed that "bias" is a solved problem in leading commercial systems. This talk will present evidence showing that neither of these positions is entirely correct. Drawing on insight gained over eight years of large-scale scenario testing by the Identity and Data Sciences Lab at the Maryland Test Facility, we'll show how face recognition can perform well for people of different races, genders, and skin colors under some conditions. However, when conditions change, demographic effects in commercial face recognition can impact fairness. The talk will challenge commonly presumed fairness goals for face recognition systems and identify four applied research challenges (sample acquisition, demographic labeling, measuring differentials, and algorithm development) that should be addressed to improve the fairness of face recognition in applied settings.
Ignacio Serna (Autonomous University of Madrid / California Institute of Technology)
Where are we on measuring bias?
Bias is a topic of substantial interest in the field of ML, and in face recognition specifically. Deep learning has driven great improvements in face recognition systems, but there is a lack of understanding of how biases affect them beyond their performance. We'll explore how biases are encoded in deep networks, how they impact network activations, and how they can be detected. We will illustrate this with a toy example using the MNIST database and with an application in face biometrics. We will also look at future challenges.