Katina Michael
Katina Michael BIT, MTransCrimPrev, PhD (Senior Member IEEE, ACM SIGCAS), is a Professor at Arizona State University and a Senior Global Futures Scientist with the Global Futures Laboratory. At ASU, she holds a joint appointment with the School for the Future of Innovation in Society and the School of Computing and Augmented Intelligence. Katina’s research focuses on the social implications of emerging technologies. She was responsible for establishing the Human Factors Series in the Research Network for a Secure Australia (RNSA, 2005-2009), was an external member of the Centre of Excellence in Policing and Security (CEPS, 2009-2013), and ran the Social Implications of National Security (SINS) workshops from 2006 to 2022. Since 2021, Katina has advised DARPA on matters pertaining to the ethics, law, and societal implications (ELSI) of complex socio-technical systems. She has been funded by the National Science Foundation, the Canadian Social Sciences and Humanities Research Council, and the Australian Research Council. She is the Director of the Society Policy Engineering Collective, the Founding Editor-in-Chief of IEEE Transactions on Technology and Society, a former Editor-in-Chief of IEEE Technology and Society Magazine, and a former Editor at Computers & Security. She is the Founding Chair of the ASU Master of Science in Public Interest Technology and Technical Committee Co-Chair of Socio-Technical Systems at the IEEE Society on the Social Implications of Technology (IEEE SSIT). Prior to academia, Katina was employed by Nortel Networks, Andersen Consulting, and Otis Elevator Company.
Keynote Title: Racial and Genetic Discrimination in Automated Face Analysis
Abstract: Biometric recognition systems, especially facial recognition systems (FRS), have proliferated across a range of application areas and are widely accepted across sectors. FRS are used for the identification and verification of individuals in policing and border security contexts, and increasingly in industry, such as in the supply chain as a loss prevention mechanism or in retail for electronic payments. One of the many benefits of facial recognition is the contactless nature of the technique, even though its accuracy is generally lower than that of iris or fingerprint recognition. But something makes the face that much more appealing as a biometric marker, especially because the face can act as a proxy for one’s DNA (e.g., Face2Gene). The face contains unique facial features (phenotypic traits) that can be linked to particular genetic conditions (genotypes), as many of the characteristics of particular syndromes have already been discovered through pattern recognition applied to patient photographs. The face is also expressive, and emotion detection systems can denote how one is feeling, one’s general disposition, and one’s state of wellbeing (i.e., mental health). The main purpose of automated face analysis is to gather actionable insights from facial expressions and facial feature positioning. The proliferation of tens of billions of facial images on the Internet (many of them volunteered by citizens), together with video surveillance mechanisms whose firmware can capture and process facial images in near real-time, has paved the way for AI approaches that can now compute a diagnosis merely from frontal photographs. In this presentation, I consider the ethical, legal and social implications of automated face analysis techniques being embedded in computer vision systems without the consent of the citizen or worker, and what this might mean in the context of racial and genetic discrimination.