Many publicly available face analytics datasets have been responsible for great progress in face recognition. These datasets serve both as sources of large amounts of training data and as benchmarks for assessing the performance of competing state-of-the-art algorithms. Performance saturation on such datasets has led the community to believe that face recognition and attribute estimation are close to solved, with various commercial offerings stemming from models trained on such data.
However, such datasets present significant biases in terms of both subjects and image quality, creating a significant gap between their distribution and real-world data. For example, many publicly available datasets underrepresent certain ethnic communities and overrepresent others, and most are heavily skewed in age distribution. Many variations have been observed to impact face recognition, including pose, low resolution, occlusion, age, expression, decorations, and disguise. Systems trained on a skewed dataset are bound to produce skewed results. This mismatch is evidenced by the significant drop in performance of state-of-the-art models trained on those datasets when applied to images with lower resolution or poor illumination, or to particular gender and/or ethnicity groups. It has been shown that such biases may have serious impacts in challenging situations where the outcome is critical either for the subject or for a community. Research evaluations are often unaware of these issues, focusing instead on saturating performance on skewed datasets.
To progress toward fair face recognition and attribute estimation truly in the wild, we encourage the development of systems that provide uniform performance across multiple factors: age, gender, ethnicity, pose, resolution, etc. We also encourage the submission of extended abstracts with interesting evaluations, provocative viewpoints, and novel ideas for tackling the bias problem in face analytics.
The goal of this workshop is to:
· Assess the current state of face analytics (face recognition, attribute estimation) algorithms to pinpoint their inherent biases
· Facilitate the creation of bias-aware and (as much as possible) bias-free models that can achieve and maintain high performance invariably across multiple groups
· Facilitate a cross-disciplinary discussion around bias in face analytics, involving technical viewpoints not only from the computer vision community, but also from other disciplines
An honest, scientific look at the state of the art and a discussion of diversity, fairness, and bias in face analytics are expected to shed light on the problem and suggest directions for improving fair face recognition and attribute prediction, including training-dataset collection techniques as well as model architectures, algorithms, and evaluation protocols.
Topics of interest include (but are not limited to):
- new algorithms and architectures explicitly designed to reduce bias in face analytics
- new techniques to balance/manipulate data to reduce bias in face analytics
- new datasets to improve and measure bias/diversity in face analytics
- new evaluation protocols to assess and measure bias/diversity in face analytics
- generative methods to reduce bias in face analytics
- evaluations of bias/diversity of state of the art techniques in face analytics
- transfer learning/domain adaptation techniques for fairer face analytics
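As one illustration of the kind of evaluation protocol solicited above, bias can be quantified by comparing per-group accuracy and reporting the gap between the best- and worst-served groups. The sketch below is a minimal, hypothetical example of such a metric; the function name, interface, and toy data are our own assumptions, not part of any submission requirement.

```python
import numpy as np

def accuracy_gap(y_true, y_pred, groups):
    """Compute per-group accuracy and the max-min accuracy gap.

    A large gap suggests the model serves some demographic groups
    worse than others. (Illustrative metric; interface is assumed.)
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    # accuracy computed separately within each group label
    accs = {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
            for g in np.unique(groups)}
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

# toy example: two hypothetical demographic groups "A" and "B"
accs, gap = accuracy_gap(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 1, 0, 0, 0],
    groups=["A", "A", "A", "B", "B", "B"],
)
```

Here group "A" is classified perfectly while "B" is not, so the gap is large; a bias-aware evaluation would report such per-group breakdowns alongside overall accuracy.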
Technical Papers and Extended Abstracts Submission:
March 25, 2019 (extended from March 20, 2019)
Paper Decision to Authors: April 9, 2019
Camera Ready Papers Due: April 19, 2019