IJCAI-PRICAI 2020 Workshop on Explainable Artificial Intelligence (XAI)

Date: 8 January 2021 (Japan Standard Time)

*Important: see the agenda with time zones below

https://www.ijcai20.org/

Submission deadline: 15 June, 2020

Important Dates

Paper submission: 15 June, 2020

Notification: 1 August, 2020

Camera-ready submission: 1 September, 2020

Workshop date: 8 January, 2021

Schedule

Join the workshop's slack channel:

https://join.slack.com/t/xai-ijcai2020/shared_invite/zt-ks16yash-aAbQ6Gb0LVOO~ARUkbu8jw

Americas Agenda (in EST timezone)

7:00pm 7 January - 12:05am 8 January, Americas agenda in EST

7:00-7:05pm: Welcome: David W. Aha (NRL, USA) and Rosina Weber (Drexel University, USA)

Session 1: Human-Centered XAI
7:05-7:45pm: Invited Talk: Identifying Explainable AI approaches by studying Physician Explanation Strategies in Re-diagnosis Scenarios, Shane Mueller (Michigan Technological U., USA)
7:50-8:02pm: Human-Centered Explanation for Goal Recognition
Abeer Alshehri, Tim Miller, Mor Vered, and Hajar Alamri
(University of Melbourne, Australia; Monash U., Australia; & King Khalid U., Saudi Arabia)

8:05-8:15pm: Break (10min)

Session 2: User Interaction and Agent Design
8:15-8:27pm: Impact of Explanations for AI-driven hints in an Intelligent Tutoring System
Cristina Conati, Oswald Barral, Vanessa Putnam, and Lea Rieger
(UBC, Canada & Augsburg U., Germany)
8:30-8:42pm: Teaching Humans with Justifications of Monte Carlo Tree Search Decisions
Cleyton R. Silva, Levi H. S. Lelis, and Michael Bowling
(U. Federal de Viçosa, Brazil; & Alberta Machine Intelligence Institute, Canada)
8:45-8:57pm: Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks
Xiaofeng Gao, Ran Gong, Yizhou Zhao, Shu Wang, Tianmin Shu, and Song-Chun Zhu
(UCLA, USA & MIT, USA)
9:00-9:12pm: Design for Explicability
Anagha Kulkarni, Sarath Sreedharan, Sarah Keren, Tathagata Chakraborti, David E. Smith, and Subbarao Kambhampati
(Arizona State U., USA; Harvard U., USA & IBM Research AI, USA)

9:15-9:45pm: Break (30 min)

Session 3: Reinforcement Learning
9:45-10:25pm: Invited Talk: Don't Get Fooled by Explanations, Alan Fern (Oregon State U., USA)
10:30-10:42pm: Identifying Reasoning Flaws in Planning-Based RL Using Tree Explanations
Kin-Ho Lam, Zhengxian Lin, Jed Irvine, Jonathan Dodge, Zeyad T Shureih, Roli Khanna, Minsuk Kahng, and Alan Fern
(Oregon State U., USA)

10:45-11:00pm: Break (15 min)

Session 4: Machine Learning
11:00-11:40pm: Invited Talk: Explainable, Interpretable Machine Learning using Cutset Networks, Vibhav Gogate (U. Texas @ Dallas, USA)
11:45-11:57pm: Fanoos: Multi-Resolution, Multi-Strength, Interactive Explanations for Learned Systems
David Bayani and Stefan Mitsch
(CMU, USA)

12:00-12:05am: Wrap-up: David W. Aha (NRL, USA) and Rosina Weber (Drexel University, USA)

European agenda (in UTC timezone)

8:00am-1:00pm 8 January, European agenda in UTC

8:00-8:05am: Welcome: Ofra Amir (Technion)

Session 1: Machine learning
8:05-8:20am: Explainable Feature Ensembles through Homogeneous and Heterogeneous Intersections
Avi Rosenfeld and Matanya Freiman
8:20-8:35am: A Performance-Explainability Framework to Benchmark Machine Learning Methods: Application to Multivariate Time Series Classifiers
Kevin Fauvel, Veronique Masson and Elisa Fromont
8:35-8:50am: Explaining Automated Data Cleaning with CLeanEX
Laure Berti-Equille and Ugo Comignani
8:50-9:05am: Machine Learning Explainability for External Stakeholders
Umang Bhatt, McKane Andrus, Adrian Weller, and Alice Xiang

9:05-9:20am: Break (15 mins)

Session 2: Explainable plans, policies and search

9:20-9:35am: Explainable Search
Hendrik Baier and Michael Kaisers

9:35-9:50am: Combining Local Saliency Maps and Global Strategy Summaries for Reinforcement Learning Agents
Tobias Huber, Katharina Weitz, Elisabeth Andre, and Ofra Amir

9:50-10:05am: Teaching Explainable Strategies in Cooperative Settings
Uzi Friedman, Kobi Gal, Levi Lelis, and Jonathan Martinez

10:05-10:20am: Explaining plans at scale: scalable path planning explanations in navigation meshes using inverse optimization
Martim Brandao and Daniele Magazzeni

10:20-10:50am: Break (30 mins)

Session 3: Cognitive perspectives, decision-theory
10:50-11:05am: Play MNIST For Me! User Studies on the Effects of Post-Hoc, Example-Based Explanations & Error Rates on Debugging a Deep Learning, Black-Box Classifier
Courtney Ford, Eoin M. Kenny and Mark T. Keane
11:05-11:20am: Cognitive Perspectives on Context-based Decisions and Explanations
Marcus Westberg and Kary Framling
11:20-11:35am: Py-CIU: A Python Library for Explaining Machine Learning Predictions Using Contextual Importance and Utility
Sule Anjomshoae, Timotheus Kampik, and Kary Framling

11:35-11:50am: Break (15 mins)

Session 4: User-centered explanations
11:50am-12:50pm: Invited talk: Simone Stumpf (City University London, UK), "What a great team! How AI and HCI can work together to build explanations"


12:50-1:00pm: Wrap-up: Ofra Amir (Technion)


Proceedings

Impact of Explanations for AI-driven hints in an Intelligent Tutoring System
Cristina Conati, Oswald Barral, Vanessa Putnam, and Lea Rieger

Explainable Feature Ensembles through Homogeneous and Heterogeneous Intersections
Avi Rosenfeld and Matanya Freiman

A Performance-Explainability Framework to Benchmark Machine Learning Methods: Application to Multivariate Time Series Classifiers
Kevin Fauvel, Veronique Masson and Elisa Fromont

Explainable Search
Hendrik Baier and Michael Kaisers

Play MNIST For Me! User Studies on the Effects of Post-Hoc, Example-Based Explanations & Error Rates on Debugging a Deep Learning, Black-Box Classifier
Courtney Ford, Eoin M. Kenny and Mark T. Keane

Combining Local Saliency Maps and Global Strategy Summaries for Reinforcement Learning Agents
Tobias Huber, Katharina Weitz, Elisabeth Andre, and Ofra Amir

Explaining Automated Data Cleaning with CLeanEX
Laure Berti-Equille and Ugo Comignani

Identifying Reasoning Flaws in Planning-Based RL Using Tree Explanations
Kin-Ho Lam, Zhengxian Lin, Jed Irvine, Jonathan Dodge, Zeyad T Shureih, Roli Khanna, Minsuk Kahng, and Alan Fern

Teaching Humans with Justifications of Monte Carlo Tree Search Decisions
Cleyton R. Silva, Levi H. S. Lelis, and Michael Bowling

Design for Explicability
Anagha Kulkarni, Sarath Sreedharan, Sarah Keren, Tathagata Chakraborti, David E. Smith, and Subbarao Kambhampati

Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks
Xiaofeng Gao, Ran Gong, Yizhou Zhao, Shu Wang, Tianmin Shu, and Song-Chun Zhu

Py-CIU: A Python Library for Explaining Machine Learning Predictions Using Contextual Importance and Utility
Sule Anjomshoae, Timotheus Kampik, and Kary Framling

Teaching Explainable Strategies in Cooperative Settings
Uzi Friedman, Kobi Gal, Levi Lelis, and Jonathan Martinez

Human-Centered Explanation for Goal Recognition
Abeer Alshehri, Tim Miller, Mor Vered, and Hajar Alamri

Explaining plans at scale: scalable path planning explanations in navigation meshes using inverse optimization
Martim Brandao and Daniele Magazzeni

Cognitive Perspectives on Context-based Decisions and Explanations
Marcus Westberg and Kary Framling

Machine Learning Explainability for External Stakeholders
Umang Bhatt, McKane Andrus, Adrian Weller, and Alice Xiang

Fanoos: Multi-Resolution, Multi-Strength, Interactive Explanations for Learned Systems (supplementary material)
David Bayani and Stefan Mitsch

Submission Details

Authors may submit *long papers* (6 pages plus unlimited pages of references) or *short papers* (4 pages plus unlimited pages of references).

All papers should be typeset in the IJCAI style (https://www.ijcai.org/authors_kit). Accepted papers will be made available on the workshop website. Accepted papers will not be published in archival proceedings.

Reviews are double-blind, so papers should contain no identifying information.

Submission link: https://openreview.net/group?id=ijcai.org/IJCAI-PRICAI/2020/Workshop/XAI

News!

3 September: The proceedings for the workshop are available now

19 August: Camera-ready versions of accepted papers are due on 1 September

27 May: Due to the recently recognised clash with CSCW and NeurIPS, we're extending the deadline until 15 June.

13 May: The XAI workshop will still be going ahead, with submission and review continuing on the current schedule to allow authors to submit and get feedback. Accepted papers will be uploaded to the website in July, and the workshop will be held with IJCAI in January.

22 April: We have extended the deadline by 4 weeks. New date: 29 May, 2020

11 March: Great news! The XAI workshop has been accepted at IJCAI for another year.

Program Committee

Alan Fern, Oregon State University

Alun Preece, Cardiff University

Christine T. Wolf, IBM Research, Almaden

Cristina Conati, The University of British Columbia

Daniel Le Métayer, INRIA

David Aha, Naval Research Laboratory, USA

David Leake, Indiana University Bloomington

Denise Agosto, Drexel University

Emma Baillie, The University of Melbourne

Fabio Mercorio, University of Milano Bicocca

Freddy Lecue, Accenture Labs

Ian Watson, University of Auckland, New Zealand

Isaac Lage, Harvard University

Jiahao Chen, J.P. Morgan AI Research

Jianlong Zhou, University of Technology, Sydney

Jörg Cassens, University of Hildesheim

Juan Recio-Garcia, Universidad Complutense de Madrid

Kacper Sokol, University of Bristol

Krysia Broda, Imperial College

Liz Sonenberg, University of Melbourne

Loizos Michael, Open University of Cyprus

Mark Roberts, Naval Research Laboratory

Mark Keane, UCD Dublin

Mark Hall, Airbus

Martin Oxenham, Defence Science and Technology Organisation

Michael Floyd, Knexus Research

Michael Winikoff, University of Otago

Ninghao Liu, Texas A&M University

Patrick Shafto, Rutgers University

Peter Flach, University of Bristol

Peter Vamplew, Federation University

Ramya Srinivasan, Fujitsu Laboratories of America

Rebekah Wegener, Salzburg University

Riccardo Guidotti, University of Pisa

Richard Dazeley, Deakin University

Ronal Singh, The University of Melbourne

Rosina Weber, Drexel University

Ruihan Zhang, The University of Melbourne

Sarath Sreedharan, Arizona State University

Shane Mueller, Michigan Technological University

Simon Parsons, King's College London

Yezhou Yang, Arizona State University


Workshop organisers

Tim Miller (University of Melbourne, Australia): Primary contact: tmiller@unimelb.edu.au

Rosina Weber (Drexel University)

David Aha (NRL, USA)

Daniele Magazzeni (King’s College London and J.P. Morgan)

Ofra Amir (Technion)

Call for papers

As AI becomes more ubiquitous, complex, and consequential, the need for people to understand how its decisions are made, and to judge their correctness, becomes increasingly crucial for reasons of ethics and trust. The field of Explainable AI (XAI) aims to address this problem by designing AI whose decisions can be understood by humans.

This workshop brings together researchers working in explainable AI to share and learn about recent research, with the hope of fostering meaningful connections between researchers from diverse backgrounds, including but not limited to artificial intelligence, human-computer interaction, human factors, philosophy, and cognitive & social psychology.

This meeting will provide attendees with an opportunity to learn about progress on XAI, to share their own perspectives, and to learn about potential approaches for solving key XAI research challenges. This should result in effective cross-fertilization among research on ML, AI more generally, intelligent user interaction (interfaces, dialogue), and cognitive modeling.

Topics

Topics of interest include but are not limited to:

Technologies and Theories

· Explainable Machine Learning (e.g., deep, reinforcement, statistical, relational, transfer, case-based)

· Explainable Planning

· Human-agent explanation

· Human-behavioural evaluation for XAI

· Psychological and philosophical foundations of explanation

· Interaction design and XAI

· Historical perspectives of XAI

· Cognitive architectures

· Commonsense reasoning

· Decision making

· Episodic reasoning

· Intelligent agents (e.g., planning and acting, goal reasoning, multiagent architectures)

· Knowledge acquisition

· Narrative intelligence

· Temporal reasoning

Applications/Tasks

· After action reporting

· Ambient intelligence

· Autonomous control

· Caption generation

· Computer games

· Explanatory dialog design and management

· Image processing (e.g., security/surveillance tasks)

· Information retrieval and reuse

· Intelligent decision aids

· Intelligent tutoring

· Legal reasoning

· Recommender systems

· Robotics

· User modeling

· Visual question-answering (VQA)