Princeton University
Strategies for mitigating social bias in visual recognition
NIST
Demographic effects in leading commercial face recognition algorithms
The talk will present extensive empirical results on the demographic dependence of leading commercial face recognition algorithms performing one-to-one verification and one-to-many identification. This will be prefaced by discussions of: the points at which demographic effects can occur; metrics; impacts and application dependence; and appropriate datasets. The talk will conclude by discussing mitigation and outlining a research agenda in support of it.
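As a rough illustration of the kind of demographically disaggregated verification metrics referred to above (not the speaker's code, and with all names, thresholds, and data invented for the example), the sketch below computes false match rate (FMR) and false non-match rate (FNMR) separately for each demographic group at a fixed decision threshold.

```python
import numpy as np

def verification_error_rates(scores, same_person, threshold):
    """FMR and FNMR from similarity scores at a fixed threshold."""
    scores = np.asarray(scores)
    same_person = np.asarray(same_person, dtype=bool)
    accepted = scores >= threshold
    fmr = accepted[~same_person].mean()      # impostor pairs wrongly accepted
    fnmr = (~accepted[same_person]).mean()   # genuine pairs wrongly rejected
    return fmr, fnmr

def per_group_rates(scores, same_person, groups, threshold):
    """Disaggregate verification error rates by demographic group."""
    scores = np.asarray(scores)
    same_person = np.asarray(same_person, dtype=bool)
    groups = np.asarray(groups)
    rates = {}
    for g in sorted(set(groups)):
        mask = groups == g
        rates[g] = verification_error_rates(scores[mask], same_person[mask], threshold)
    return rates

# Hypothetical usage with synthetic scores, pair labels, and group tags.
rng = np.random.default_rng(0)
scores = rng.uniform(0, 1, size=1000)
same_person = rng.integers(0, 2, size=1000).astype(bool)
groups = rng.choice(["A", "B"], size=1000)
print(per_group_rates(scores, same_person, groups, threshold=0.5))
```

Comparing FMR and FNMR across groups at the same threshold is one simple way to expose the demographic dependence the talk examines.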
The University of Texas at Dallas
Other-race effects for face recognition algorithms: Strategies for measuring and minimizing bias
Under certain conditions, both humans and machines show an other-race effect for face recognition (FR). For humans, the other-race effect is the phenomenon that we recognize faces of our own race more accurately than faces of other races. For machines, it is the finding that accuracy is generally best for faces of the race that constitutes the majority of the training faces. For previous generations of FR algorithms, the easiest fix was to balance the training data so that all races were represented equally. In the era of convolutional neural networks and big data, this fix is not usually an option. In this talk, I will briefly outline what is known about race bias in algorithms, and will discuss the importance of accurately measuring bias in an FR algorithm. I will also discuss strategies for overcoming bias that do not rely on equating data volume across multiple races.
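As a minimal sketch of how such a bias measurement might look (an editorial illustration, not the speaker's method; the data and group labels are invented), one can compare a signal-detection statistic such as d' between demographic groups, where a nonzero gap indicates an other-race-style effect.

```python
import numpy as np

def d_prime(genuine_scores, impostor_scores):
    """d': separation between genuine (same-identity) and impostor
    (different-identity) similarity score distributions."""
    g = np.asarray(genuine_scores, dtype=float)
    i = np.asarray(impostor_scores, dtype=float)
    pooled_sd = np.sqrt(0.5 * (g.var(ddof=1) + i.var(ddof=1)))
    return (g.mean() - i.mean()) / pooled_sd

def race_bias_gap(scores, same_person, race, group_a, group_b):
    """Difference in d' between two demographic groups."""
    scores = np.asarray(scores)
    same_person = np.asarray(same_person, dtype=bool)
    race = np.asarray(race)
    def group_dprime(g):
        m = race == g
        return d_prime(scores[m & same_person], scores[m & ~same_person])
    return group_dprime(group_a) - group_dprime(group_b)

# Illustrative synthetic data: group "B" gets noisier scores, producing a gap.
rng = np.random.default_rng(1)
n = 2000
same = rng.integers(0, 2, n).astype(bool)
race = rng.choice(["A", "B"], n)
noise = np.where(race == "A", 0.15, 0.30)
scores = np.where(same, 0.7, 0.3) + rng.normal(0, noise)
print(race_bias_gap(scores, same, race, "A", "B"))
```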