IJCAI 2022
Workshop on Explainable Artificial Intelligence (XAI)
Date: Saturday, 23 July, 2022
Submission deadline: 13 May 2022, 11:59pm anywhere on Earth
Important Dates
Paper submission: 13 May 2022 (extended from 8 May), 11:59pm anywhere on Earth
Notification: 3 June, 2022
Camera-ready submission: 23 June, 2022
Workshop date: Saturday, 23 July, 2022
Submission Details
Authors may submit *long papers* (6 pages plus unlimited pages of references) or *short papers* (4 pages plus unlimited pages of references).
All papers should be typeset in the IJCAI style (https://www.ijcai.org/authors_kit). Accepted papers will be made available on the workshop website. Accepted papers will not be published in archival proceedings. This means that you can submit your paper to another venue after the workshop.
The policy of the IJCAI conference is that the conference is an in-person event (https://ijcai-22.org/register/). This policy extends to workshops and all other events. As such, at least one author of each paper is required to attend the workshop in person.
Reviewing is double-blind, so submissions must not contain any identifying information.
Authors can submit papers at the XAI2022 Easychair site: https://easychair.org/conferences/?conf=xaiijcai22
News!
17 March: Great news! The XAI workshop has been accepted at IJCAI 2022!
29 April: More great news! Dr Q. Vera Liao, Principal Researcher at Microsoft Research Montréal, will give an invited talk at the XAI workshop.
2 May: By popular demand, we have extended the submission deadline to 13 May.
4 June: We have accepted 26 papers for the workshop. Congratulations to all those who were successful; commiserations to those who weren't so lucky this time.
6 July: A draft schedule for the workshop has been released. See below.
21 July: The schedule for the workshop has been finalised. See below.
Schedule
Invited speaker
Speaker: Dr Q. Vera Liao, Principal Researcher at Microsoft Research Montréal.
Title: Human-Centered Explainable AI: From Algorithms to User Experiences
Abstract: Artificial Intelligence technologies are increasingly used to make decisions and perform autonomous tasks in critical domains. The need to understand AI in order to improve, contest, develop appropriate trust in, and better interact with AI systems has spurred great academic and public interest in Explainable AI (XAI). The technical field of XAI has produced a vast collection of algorithms in recent years. However, explainability is an inherently human-centric property, and the field is starting to embrace human-centered approaches. Human-computer interaction (HCI) research and user experience (UX) design in this area are increasingly important, especially as practitioners begin to leverage XAI algorithms to build XAI applications. In this talk, I will draw on my own research and broad HCI works to highlight the central role that human-centered approaches should play in shaping XAI technologies, including driving technical choices by understanding users’ explainability needs, uncovering pitfalls of existing XAI methods, and providing conceptual frameworks for human-compatible XAI.
Bio: Q. Vera Liao is a Principal Researcher at Microsoft Research Montréal, where she is part of the FATE (Fairness, Accountability, Transparency, and Ethics of AI) group. Her current research interests are in human-AI interaction, explainable AI, and responsible AI. Prior to joining MSR, she worked at the IBM T.J. Watson Research Center, and studied at the University of Illinois at Urbana-Champaign and Tsinghua University. Her research has received multiple paper awards at ACM CHI and IUI. She currently serves as Co-Editor-in-Chief of the Springer HCI Book Series, as an editor for the ACM CSCW conferences, and on the Editorial Board of ACM Transactions on Interactive Intelligent Systems (TiiS).
Workshop organisers
Rosina Weber (Drexel University)
Ofra Amir (Technion)
Tim Miller (University of Melbourne, Australia) tmiller@unimelb.edu.au
Call for papers
The Explainable AI (XAI) workshop provides a forum for discussing recent research on XAI methods, highlighting and documenting promising approaches, and encouraging further work, thereby fostering connections among researchers interested in AI, human-computer interaction, and cognitive theories of explanation and transparency. The topic is of particular importance to, but not limited to, machine learning, AI planning, and knowledge representation & reasoning.
Explainable Artificial Intelligence (XAI) addresses the challenge of how to communicate (to humans) the models used in AI systems and their specific decisions. While this is of interest to AI researchers studying a broad set of topics (e.g., agents, knowledge representation and reasoning, planning, recommender systems, reasoning with uncertainty, robotics), it is of particular concern to machine learning (ML) researchers because, in many situations, practitioners want to understand the AI system’s decision making prior to trusting its use in critical applications (e.g., automated driving, command and control, finance, medicine, healthcare).
While AI researchers have experienced many recent successes, most studies have focused on measures of prediction performance (e.g., using AUC, F-scores, mAP, or accuracy measures) rather than on explanations (and justifications) of these predictions (e.g., involving user satisfaction, mental model alignment, or human-system task performance). This is problematic for applications in which users seek to understand before committing to decisions with inherent risk. For example, a delivery drone should explain (to its remote operator) why it is operating normally or why it suspends its behavior (e.g., to avoid placing its fragile package in an unsafe location), and an intelligent decision aid should explain its recommendation of an aggressive medical intervention (e.g., in reaction to a patient’s recent health patterns). The need for explainable models increases as AI systems are deployed in critical applications.
The need for interpretable models exists independently of how the models were acquired (i.e., perhaps they were hand-crafted, or interactively elicited without using ML techniques). This raises several questions, such as: how should explainable models be designed? What queries should AI systems be able to answer about their models and decisions? How should user interfaces communicate decision making? What types of user interactions should be supported? And how should explanation quality be assessed?
In addition to encouraging descriptions of original or recent contributions to XAI (i.e., theory, simulation studies, subject studies, demonstrations, applications), we will welcome contributions that: survey related work; describe key issues that require further research; or highlight relevant challenges of interest to the AI community and plans for addressing them.
Topics
We are particularly interested in papers that draw out cross-disciplinary problems and solutions to explainability.
Topics of interest include, but are not limited to, the following:
Explainable and interpretable machine learning
Explainable planning
Explainable agency (e.g., planning and acting, goal reasoning, multi-agent systems)
Human-agent explanation
Approaches for evaluating XAI methods
Applied case studies on XAI
Psychological and philosophical foundations of explanation
Social aspects of XAI
XAI and socially, behaviourally, and psychologically oriented disciplines
Historical perspectives of XAI and surveys
Commonsense reasoning
Decision making and sensemaking
Actionable recourse
Contestability of (semi-)automated decisions
This meeting will provide attendees with an opportunity to learn about progress on XAI, to share their own perspectives, and to learn about potential approaches for solving key XAI research challenges. This should result in effective cross-fertilization among different disciplines that are shaping the XAI research area.