Invited Speakers and Panelists

Invited Speakers

Zachary Chase Lipton (Carnegie Mellon University)

Assistant Professor of Machine Learning and Operations Research at Carnegie Mellon University (CMU).

Title: Responsible AI's Causal Turn

Abstract: Amid widespread excitement about the capabilities of machine learning systems, this technology has been deployed to influence an ever-greater sphere of societal systems, often in contexts where what is expected of the systems goes far beyond the narrow tasks on which their performance was certified. Areas where our requirements of systems exceed their capabilities include (i) robustness and adaptivity to changes in the environment, (ii) compliance with notions of justice and non-discrimination, and (iii) providing actionable insights to decision-makers and decision subjects. In all cases, research has been stymied by confusion over how to conceptualize the critical problems in technical terms. And in each area, causality has emerged as a language for expressing our concerns, offering a philosophically coherent formulation of our problems but exposing new obstacles, such as an increasing reliance on stylized models and a sensitivity to assumptions that are unverifiable and (likely) unmet. This talk will introduce a few recent works, providing vignettes of reliable ML's causal turn in the areas of distribution shift, fairness, and explainability/transparency research.

Q. Vera Liao (Microsoft Research)

Principal Researcher in the FATE (Fairness, Accountability, Transparency, and Ethics of AI) group at Microsoft Research Montréal.

Title: Towards Human-Compatible Explainable AI

Abstract: While a vast collection of explainable AI (XAI) techniques has been developed in recent years, human-computer interaction (HCI) studies have found mixed results regarding their effectiveness, and even pitfalls, in helping people work with AI systems. There is often a lack of compatibility with, and more fundamentally a lack of understanding of, how people process and make use of explanations. In this talk, I will first draw on our own work and the broader HCI research to provide a more principled understanding of how people use AI explanations, focusing on the context of AI-assisted decision support. I will then suggest a path to more human-compatible XAI by drawing inspiration from human explanation behaviors, and encourage the community to pay more attention to the communication of explanations.

Tim Miller (University of Queensland)

Professor of Artificial Intelligence in the School of Electrical Engineering and Computer Science at The University of Queensland, Meanjin/Brisbane, Australia.

Title: Explainable AI is dead! Long live Explainable AI!

Abstract: In this talk, I argue that we need to re-frame how we conceptualise explainable decision support. Recent research shows that the current paradigm of giving a recommendation/prediction and explaining it does not really improve human decision making (with some exceptions). I explain why I believe this is the case and propose a machine-in-the-loop conceptualisation of decision support, which I call Evaluative AI. The focus of evaluative AI is to put the human decision maker in control of the decision loop, and to have decision aids provide evidence that helps decision makers confirm or refute their hypotheses, as opposed to providing a decision and an explanation. This approach of 'decision support as supporting the decision-making process' differs from 'decision support as proposing decisions', but the technical tools required for it look a lot like current XAI tools.

Panelists

Kristijonas Čyras (Ericsson AI Research)

AI researcher in Machine Reasoning, Argumentation, and Explainable and Trustworthy AI at Ericsson Telecommunications Inc.

Natraj Raman (J.P. Morgan AI Research)

AI Research Director at J.P. Morgan AI Research.

Moderated by:

Francesca Toni (Imperial College London)

Professor in Computational Logic in the Department of Computing, Imperial College London, and Royal Academy of Engineering / J.P. Morgan Research Chair in Argumentation-based Interactive Explainable AI.