IJCAI 2019 Workshop on Explainable Artificial Intelligence (XAI)

11 August, 2019. Macau, China


Workshop Proceedings

View/download Proceedings

Workshop Program

final program.xlsx

Invited Keynote Speaker Ruth Byrne

Talk: “Constraints on Counterfactuals”

Ruth Byrne is the Professor of Cognitive Science at Trinity College Dublin, University of Dublin, in the School of Psychology and the Institute of Neuroscience. Her research expertise is in the cognitive science of human thinking, including experimental and computational investigations of reasoning and imaginative thought. She has published about 150 articles on the cognitive science of thinking, and her books include 'The Rational Imagination: How People Create Alternatives to Reality', published in 2005 by MIT Press.

Check out Professor Byrne's pages:

Home page: https://psychology.tcd.ie/people/rmbyrne/

Lab page: https://reasoningandimagination.com/

Important Dates

Paper submission: 24 May, 2019 (extended from 19 May, 2019)

Notification: 19 June, 2019

Camera-ready submission: 17 July, 2019

Submission Details

Authors may submit *long papers* (6 pages plus up to one page of references) or *short papers* (4 pages plus up to one page of references).

All papers should be typeset in the IJCAI style (https://www.ijcai.org/authors_kit). Accepted papers will be published on the workshop website.

Papers must be submitted in PDF format via the EasyChair system (https://easychair.org/conferences/?conf=xai19).

News

30 May: We have had an incredible 53 submissions to the workshop! We have recruited additional program committee members to help with the reviews.

17 May: The submission deadline has been extended to 24 May.

26 April: Our sibling workshop on explainable AI planning (XAIP) is currently conducting reviews on the Open Review platform. Feel free to view the papers and provide feedback: https://openreview.net/group?id=icaps-conference.org/ICAPS/2019/Workshop/XAIP

Program Committee

Ajay Chander, Fujitsu Labs of America

Jialin Wu, The University of Texas at Austin

Amro Najjar, EMSE

Christine T. Wolf, IBM Research, Almaden

Ronal Singh, The University of Melbourne

Mark Roberts, Naval Research Laboratory

Michael Floyd, Knexus Research

Dimitrios Letsios, Imperial College London

Or Biran, Columbia University

Rebekah Wegener, Salzburg University

Jörg Cassens, University of Hildesheim

Max Yiren, Arizona State University

Rosina Weber, Drexel University

Freddy Lecue, Accenture Labs

Ben Wright, New Mexico State University

Isaac Lage, Harvard University

Kacper Sokol, University of Bristol

Daniel Le Métayer, INRIA

Ninghao Liu, Texas A&M University

Ian Watson, University of Auckland, New Zealand

Ramya Srinivasan, Fujitsu Laboratories of America

David Leake, Indiana University Bloomington

Krysia Broda, Imperial College

Denise Agosto, Drexel University

Shane Mueller, Michigan Technological University

Dustin Dannenhauer, Navatek LLC

Liz Sonenberg, The University of Melbourne

Brian Lim, Fraunhofer Center for Sustainable Energy Systems

Mark Keane, UCD Dublin

Fabio Mercorio, University of Milano Bicocca

Michael Cashmore, King's College London

Nirmalie Wiratunga, The Robert Gordon University

Belen Diaz-Agudo, Universidad Complutense de Madrid

Peter Flach, University of Bristol

Prashan Mathugama Babun Apuhamilage, The University of Melbourne

Riccardo Guidotti, University of Pisa

Martin Oxenham, Defence Science and Technology Organisation

Ofra Amir, Harvard University

Maarten de Rijke, University of Amsterdam

Alun Preece, Cardiff University

Mor Vered, Bar Ilan University

Steven Wark, DST Group

Christin Seifert, University of Twente

Sambit Bhattacharya, Fayetteville State University

Jianlong Zhou, University of Technology Sydney

Tathagata Chakraborti, IBM Research AI

Yezhou Yang, Arizona State University

Emma Baillie, The University of Melbourne

Cristina Conati, The University of British Columbia

Daniele Magazzeni, King's College London

Simon Parsons, King's College London

Juan Recio-Garcia, Universidad Complutense de Madrid

Ariel Rosenfeld, Bar-Ilan University

Michael Winikoff, University of Otago

Workshop organisers

Tim Miller (University of Melbourne, Australia): Primary contact: tmiller@unimelb.edu.au

Rosina Weber (Drexel University)

David Aha (NRL, USA)

Daniele Magazzeni (King’s College London)

Call for papers

Paper submission deadline extended to 24 May, 2019!

As AI becomes more ubiquitous, complex, and consequential, the need for people to understand how decisions are made and to judge their correctness becomes increasingly crucial, driven by concerns of ethics and trust. The field of Explainable AI (XAI) aims to address this problem by designing AI whose decisions can be understood by humans.

This workshop brings together researchers working in explainable AI to share and learn about recent research, with the hope of fostering meaningful connections between researchers from diverse backgrounds, including but not limited to artificial intelligence, human-computer interaction, human factors, philosophy, and cognitive and social psychology.

This meeting will provide attendees with an opportunity to learn about progress on XAI, to share their own perspectives, and to learn about potential approaches for solving key XAI research challenges. This should result in effective cross-fertilization among research on ML, AI more generally, intelligent user interaction (interfaces, dialogue), and cognitive modeling.


Topics of interest include but are not limited to:

Technologies and Theories

· Explainable Machine Learning (e.g., deep, reinforcement, statistical, relational, transfer, case-based)

· Explainable Planning

· Human-agent explanation

· Human-behavioural evaluation for XAI

· Psychological and philosophical foundations of explanation

· Interaction design and XAI

· Historical perspectives of XAI

· Cognitive architectures

· Commonsense reasoning

· Decision making

· Episodic reasoning

· Intelligent agents (e.g., planning and acting, goal reasoning, multiagent architectures)

· Knowledge acquisition

· Narrative intelligence

· Temporal reasoning


Applications

· After action reporting

· Ambient intelligence

· Autonomous control

· Caption generation

· Computer games

· Explanatory dialog design and management

· Image processing (e.g., security/surveillance tasks)

· Information retrieval and reuse

· Intelligent decision aids

· Intelligent tutoring

· Legal reasoning

· Recommender systems

· Robotics

· User modeling

· Visual question-answering (VQA)