New date for the workshop: 7th of September 2020 - Online
Abstract: In his seminal book The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy And How To Restore The Sanity, Alan Cooper argues that a major reason why software is often poorly designed (from a user perspective) is that programmers are in charge. As a result, programmers design software that works for themselves, rather than for their target audience, a phenomenon he refers to as the ‘inmates running the asylum’. In this talk, I argue that explainable AI risks a similar fate if AI researchers and practitioners do not take a multi-disciplinary approach to explainable AI. I further assert that to do this, we must understand, adopt, implement, and improve models from the vast and valuable bodies of research in philosophy, psychology, and cognitive science; and focus evaluation on people instead of just technology. I paint a picture of what I think the future of explainable AI will look like if we go down this path, and give some concrete examples from our recent research in explainable reinforcement learning.
Bio: Tim is an associate professor of computer science in the School of Computing and Information Systems at The University of Melbourne, and Co-Director of the Centre for AI and Digital Ethics. His primary area of expertise is artificial intelligence, with particular emphasis on human-AI interaction and collaboration, and on Explainable Artificial Intelligence (XAI). His work is at the intersection of artificial intelligence, interaction design, and cognitive science/psychology.
Abstract: Providing high-quality explanations for AI predictions is a challenging task. It requires, among other elements, selecting a proper level of generality/specificity for the explanation, referring to the specific elements that contributed to the decision, and providing evidence supporting negative hypotheses. In this talk, I will present some results achieved in the areas of Argument Mining and Argument Generation, and show how these results can be exploited to generate high-quality explanatory dialogues crucially based on argumentation mechanisms.
Abstract: Recommendation systems are everywhere in our daily life, simplifying human decision making by providing suggestions as to where to go and what to buy, read, or watch. Whilst these systems can be helpful in discovering items relevant to our interests, they tend to lack explainability, making it difficult for users to understand the rationale behind recommendations and thus correct them when necessary.
In this talk, I will present how argumentation frameworks can be extracted from various types of data obtained from recommendation engines. I will then show how these frameworks can be used for making accurate recommendations in the context of movie recommender systems. Finally, I will discuss how argumentative explanations can be extracted from such frameworks to reason about recommendations.
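As a rough, hypothetical illustration of this idea (not the system presented in the talk), the sketch below treats candidate movie recommendations and one mined objection as arguments in a small abstract argumentation framework, and uses the grounded extension to decide which recommendations can be defended. The movie names, the objection, and the attack relation are all invented.

    # Minimal sketch (illustrative only): candidate recommendations and objections
    # form an abstract argumentation framework; the grounded extension picks out
    # the recommendations that can be defended.

    def grounded_extension(arguments, attacks):
        """Iteratively accept every argument whose attackers are all counter-attacked
        by already-accepted arguments (the standard grounded-semantics fixpoint)."""
        accepted = set()
        changed = True
        while changed:
            changed = False
            for a in arguments - accepted:
                attackers = {x for (x, y) in attacks if y == a}
                if all(any((d, x) in attacks for d in accepted) for x in attackers):
                    accepted.add(a)
                    changed = True
        return accepted

    # Illustrative framework: two recommendations and one objection derived from
    # hypothetical user feedback.
    arguments = {"recommend_Inception", "recommend_Titanic", "user_dislikes_romance"}
    attacks = {("user_dislikes_romance", "recommend_Titanic")}

    print(sorted(grounded_extension(arguments, attacks)))
    # ['recommend_Inception', 'user_dislikes_romance'] -- Titanic cannot be defended here.

The accepted set then doubles as explanatory material: the objection that kept an item out of the extension is exactly the reason one would show the user.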
In the EU-funded MuMMER project, we have developed a social robot designed to interact naturally and flexibly with users in a public space. The robot system encompasses state-of-the-art components for audio-visual sensing, social signal processing, conversational interaction, perspective taking, geometric reasoning, and motion planning. The final MuMMER robot system was deployed in a shopping mall in Finland for 14 weeks, where it interacted with a wide range of customers. In this talk, I will describe the components of the MuMMER system and the supported robot behaviours and scenarios. I will also present the details, results, and lessons learned from the final long-term robot deployment.
Abstract: In the last few years, we have been working on the development of an argumentation-based framework on top of an agent-oriented programming language. Our framework allows us to implement multi-agent applications powered by argumentation-based reasoning and dialogues. In this talk, we present an overview of what we have done, pointing out the next steps towards dialogues, explanation, and argumentation in human-agent interaction.
In argumentation theory, argument schemes are constructs that generalise common patterns of reasoning, whereas critical questions (CQs) capture the reasons why argument schemes might not generate arguments. Argument schemes together with CQs are widely used to instantiate arguments; however, when it comes to making decisions, much less attention has been paid to the attacks among arguments. This paper provides a high-level description of the key elements necessary for the formalisation of argumentation frameworks, such as argument schemes and CQs. Attack schemes are then introduced to represent attacks among arguments, enabling the definition of domain-specific attacks. One algorithm is articulated to operationalise the use of schemes to generate an argumentation framework, and another to support decision making by generating domain-specific explanations. These algorithms can then be used by agents to make recommendations and to provide explanations for humans. The applicability of this approach is demonstrated within the context of a medical case study.
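As a rough, hypothetical illustration of how such schemes could be operationalised (this is a minimal sketch, not the paper's algorithms), the fragment below instantiates a "treatment promotes goal" argument scheme and a contraindication-style attack scheme over invented patient facts, then prints a recommendation for the unattacked option and an explanation for the rejected one. All facts, scheme names, and treatments are assumptions made for illustration.

    # Minimal sketch (not the paper's algorithms): instantiate an argument scheme
    # and a domain-specific attack scheme over invented patient facts, then derive
    # a recommendation and an explanation for the rejected option.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Argument:
        scheme: str       # the argument scheme that produced this argument
        conclusion: str   # the action or claim it supports
        premises: tuple   # the facts used to instantiate the scheme

    # Hypothetical patient facts.
    facts = {"has_hypertension": True, "allergic_to_ACE_inhibitors": True}

    # Argument scheme "treatment promotes goal": one argument per candidate treatment.
    arguments = []
    if facts["has_hypertension"]:
        arguments.append(Argument("promotes_goal", "give_ACE_inhibitor", ("has_hypertension",)))
        arguments.append(Argument("promotes_goal", "give_diuretic", ("has_hypertension",)))

    # Attack scheme: an argument for a treatment is attacked when the patient is
    # allergic to it (a critical-question-style, domain-specific attack).
    attacks = []
    if facts["allergic_to_ACE_inhibitors"]:
        objection = Argument("contraindication", "avoid_ACE_inhibitor", ("allergic_to_ACE_inhibitors",))
        arguments.append(objection)
        attacks += [(objection, a) for a in arguments if a.conclusion == "give_ACE_inhibitor"]

    # Recommend the unattacked treatment arguments and explain the attacked one.
    attacked = {target for (_, target) in attacks}
    for a in arguments:
        if a.scheme == "promotes_goal" and a not in attacked:
            print(f"Recommend {a.conclusion}: it addresses {a.premises[0]} and no attack applies.")
    for attacker, target in attacks:
        print(f"{target.conclusion} is rejected because {attacker.premises[0]}.")

Run as-is, this prints a recommendation for the diuretic and a one-line explanation of why the ACE inhibitor is rejected, mirroring the paper's idea that attack schemes are the source of domain-specific explanations.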