In-person: Tower 1, Presidency University, Newtown Campus
Online: https://meet.google.com/str-yutx-rqb
In recent years, biometric systems have spread worldwide and are increasingly involved in critical decision-making processes, such as in finance, public security, and forensics. Despite their growing effect on everybody’s daily life, previous works have shown that many biometric solutions perform very differently across different groups of individuals. Consequently, the recognition performance of these systems strongly depends on the demographic and non-demographic attributes of their users. This results in discriminatory and unfair treatment of the users of these systems.
At the same time, several political regulations point out the importance of the right to non-discrimination. These include Article 14 of the European Convention on Human Rights, Article 7 of the Universal Declaration of Human Rights, and Recital 71 of the General Data Protection Regulation (GDPR). These political efforts show the strong need for analyzing and mitigating equability concerns in biometric systems.
Current works on this topic focus on demographic fairness in face recognition systems. However, given the growing effect of biometrics on everybody’s daily life and the increased social interest in this topic, research on fairness in biometric solutions in general is urgently needed.
This includes:
Developing and analyzing biometric datasets.
Proposing metrics related to equability in biometrics.
Analyzing demographic and non-demographic factors in biometric systems.
Investigating and mitigating equability concerns in biometric algorithms, including:
Identity verification and identification
Soft-biometric attribute estimation
Presentation attack detection
Template protection
Biometric image generation
Quality assessment
Topics of interest include (but are not limited to):
Datasets designed for the evaluation and development of fair biometric solutions.
Demographic and non-demographic fairness concerns.
Differential performance and outcome in biometric systems.
Estimation of equability in biometric systems.
Explainability and transparency in biometrics.
Explainability-aware and equability-mitigating biometric algorithms.
Evaluating and mitigating equability issues in biometric solutions, including identity recognition, soft-biometric attribute estimation, presentation attack detection, and quality assessment.
This workshop is held in conjunction with the 27th International Conference on Pattern Recognition (ICPR 2024). Accepted articles will appear in the ICPR 2024 workshop proceedings.
Title: Unravelling the Layers: Understanding the Impact of Bias on Machine Learning at Multiple Levels
Abstract: As machine learning (ML) systems become increasingly embedded in critical decision-making processes, the imperative to identify and mitigate bias has never been more urgent. Bias can infiltrate ML models at multiple stages—ranging from data collection and preprocessing to model training, evaluation, and deployment—compromising fairness, accuracy, and trustworthiness. This keynote delves into the multifaceted nature of bias within the ML pipeline, drawing upon extensive research and practical experiences in the field. We begin by examining common sources of bias in datasets, such as sampling bias, measurement errors, and historical prejudices, and discuss strategies for detection and correction during data preprocessing. The talk then explores algorithmic biases that arise from model selection and training procedures, highlighting how certain algorithms may inadvertently perpetuate existing disparities. We also address evaluation bias, emphasizing the role of performance metrics and validation techniques in providing a holistic assessment of model fairness. Moving beyond the traditional input-outcome analysis, we consider the often-overlooked biases that emerge during the training process of popular neural networks. Through case studies and recent advancements in bias detection methodologies, we illustrate practical approaches to identifying and mitigating bias at each stage of the ML process.
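The pipeline stages described above are largely conceptual, but the point about evaluation bias has a very concrete core: aggregate metrics can hide group-level disparities. As a minimal illustrative sketch (not material from the talk; the data and names below are hypothetical), the following Python snippet disaggregates accuracy by a group attribute, which is the basic check behind such fairness evaluations:

import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Return per-group accuracy so disparities stay visible,
    instead of a single aggregate score that can hide them."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
        for g in np.unique(groups)
    }

# Toy example: aggregate accuracy is 0.75, but the per-group view
# reveals that group B performs much worse than group A.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
groups = np.array(["A", "A", "A", "B", "B", "A", "B", "B"])
print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.5}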
Bio: Aythami Morales Moreno received his M.Sc. degree in Electrical Engineering in 2006 from the Universidad de Las Palmas de Gran Canaria, where he also received his Ph.D. degree in Artificial Intelligence in 2011. He conducts his research at the BiDA Lab – Biometric and Data Pattern Analytics Laboratory at the Universidad Autónoma de Madrid, where he is currently an Associate Professor (CAM Lecturer Excellence Program). He is a member of the ELLIS Society (European Laboratory for Learning and Intelligent Systems) and is included in the Stanford/Elsevier Top 2% Scientists List 2024.
Title: Face Recognition Gender Bias: From Causes to Mitigation
Abstract: Face recognition technology has recently become a topic of controversy due to concerns about possible bias across demographics. However, media coverage has not always been concerned with accurately understanding face recognition technology and the underlying causes of the observed bias. In this talk, we present an investigation of face recognition bias across genders, where our results show a consistently higher false match rate (FMR) and higher false non-match rate (FNMR) for women (the “gender gap”), agreeing with the literature. We also investigate various speculated causes of the gender gap and find face visibility to be the main cause of females’ higher FNMR; we speculate that differences in face shape similarity are the main cause of females’ higher FMR. We also show the results of extensive experiments on the correlation between gender balance in training datasets and accuracy on test sets. Moreover, we use face segmentation methods to show that face regions have different effects across demographics, suggesting that matchers should weigh face regions differently. Finally, we investigate the effect that the separation margin during model training has on gender bias. We train models with different margins for each gender and analyze the effect that this has on training and testing accuracy. When margins are the same for both genders, the gender gap is present in both training and test datasets; however, when margins differ during training, with females given a larger margin than males, the gender gap in test accuracy is reduced.
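Because the talk centers on per-gender FMR and FNMR, a short hedged sketch may help make these metrics concrete. The Python snippet below (illustrative only; the scores, threshold, and group labels are invented) computes both error rates at a fixed decision threshold, separately for each demographic group:

import numpy as np

def fmr_fnmr(genuine_scores, impostor_scores, threshold):
    """FMR: fraction of impostor comparisons accepted at the threshold.
    FNMR: fraction of genuine comparisons rejected at the threshold."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    fmr = float(np.mean(impostor >= threshold))   # false matches
    fnmr = float(np.mean(genuine < threshold))    # false non-matches
    return fmr, fnmr

# Hypothetical similarity scores, kept separate per demographic group;
# pooling them would hide any differential outcome.
scores_by_group = {
    "female": {"genuine": [0.72, 0.45, 0.81], "impostor": [0.40, 0.52]},
    "male":   {"genuine": [0.78, 0.69, 0.83], "impostor": [0.21, 0.33]},
}
for group, s in scores_by_group.items():
    print(group, fmr_fnmr(s["genuine"], s["impostor"], threshold=0.5))

With these toy numbers, the female group shows both a higher FMR (0.5 vs. 0.0) and a higher FNMR (about 0.33 vs. 0.0), mirroring the gender-gap pattern described in the abstract.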
Bio: Vitor is a Research Scientist at Meta working on Responsible AI. He received his PhD from the University of Notre Dame, where he worked with Dr. Kevin W. Bowyer at the Computer Vision Research Lab (CVRL). His research includes responsible AI, computer vision, machine learning, and biometrics. His PhD research focused on face recognition, with the main goal of understanding and improving accuracy across different demographic groups in deep learning methods. His previous research at the IMAGO lab at UFPR focused on detecting facial expressions using Action Units under varying head poses.
Paderborn University, Germany
Norwegian University of Science and Technology, Norway
University of Applied Sciences Darmstadt, Germany
BITS Pilani Hyderabad, India
INESC TEC, Portugal
INRIA, France
National Institute of Technology Rourkela, India
Norwegian University of Science and Technology, Norway
Fraunhofer IGD, Germany
Technical program committee
Abu Sufian (CNR-ISASI, Italy)
Ana F. Sequeira (INESC, Portugal)
André Dörsch (Hochschule Darmstadt, Germany)
Anubhooti Jain (IIT Jodhpur, India)
Anudeep Vurity (George Mason University, USA)
Christian Rathgeb (Hochschule Darmstadt, Germany)
Colton R Crum (University of Notre Dame, USA)
Eli J Laird (Southern Methodist University, USA)
Ivan DeAndres-Tame (Universidad Autónoma de Madrid, Spain)
Marco Leo (National Research Council of Italy)
Rishabh Ranjan (IIT Jodhpur, India)
Ruben Vera-Rodriguez (Universidad Autónoma de Madrid, Spain)
Subhankar Ghosh (University of Technology Sydney, Australia)
Zitong Yu (Great Bay University, China)