Explainable AI: Beware of inmates running the asylum or: How I learnt to stop worrying and love human-centred approaches
Abstract: In his seminal book “The Inmates are Running the Asylum”, Alan Cooper argues that a major reason why software is often poorly designed (from a user perspective) is that programmers are in charge. As a result, programmers design software that works for themselves, rather than for their target audience, a phenomenon he refers to as the ‘inmates running the asylum’. In this talk, I argue that explainability in AI and robotics risks a similar fate if researchers and practitioners do not embrace a human-centred approach. I further assert that to do this, we must understand, adopt, implement, and improve models from the vast and valuable bodies of research in philosophy, psychology, and cognitive science, and focus evaluation on people instead of just technology. I will discuss some key theories of explanation from the social sciences, and present examples of how we have applied these in our research on sequential decision making.
Bio: Tim is a professor in the School of Electrical Engineering and Computer Science at The University of Queensland, Brisbane, Australia. His primary area of expertise is artificial intelligence, with particular emphasis on human-AI interaction, AI-assisted decision support, and Explainable AI (XAI). His work sits at the intersection of artificial intelligence, interaction design, and cognitive science/psychology. Prior to his appointment at The University of Queensland, Tim was a professor of computer science in the School of Computing and Information Systems at The University of Melbourne, where he was founding co-director of The Centre for AI and Digital Ethics.