IJCAI 2023
Workshop on Explainable Artificial Intelligence (XAI)

Macao, S.A.R.: 21 August, 2023

Online: 31 August, 2023 (GMT) [in two parts]

https://ijcai-23.org/ 

Submission deadline: extended to 13 May 2023, 11:59pm Anywhere on Earth

Important dates 

Paper submission: extended to 13 May 2023, 11:59pm, timezone: Anywhere on Earth

Notification: 31 May, 2023

Camera-ready submission: extended to 16 July 2023, 11:59pm, timezone: Anywhere on Earth

In-person workshop: 21 August, 2023

Virtual event: 31 August, 2023 [in two parts]

Workshop overview

The XAI workshop this year will run in two modes: an in-person event at IJCAI and a virtual event.

IJCAI requires workshop attendees to attend in person, but we have received approval to run a virtual session so that people who cannot attend the in-person event can still present, contribute ideas, and join the discussion. This session will be open to all workshop attendees, and we encourage those attending IJCAI in person to also attend the virtual event. The virtual event will be held about a week after IJCAI finishes, to give in-person attendees time to return home.

In-person workshop schedule (21 August, 2023)

Virtual workshop schedule (31 August, 2023)

We are excited to have Ming Yin as our keynote speaker!


Title: Human-Centered Evaluation of Explanations in AI-Assisted Decision-Making


Abstract: Explainable AI (XAI) methods have been increasingly used in AI-assisted decision-making scenarios to help human decision-makers make sense of the recommendations made by AI models and better utilize them. However, do existing XAI methods serve their intended purposes, resulting in higher-quality human-AI interaction and better performance in human-AI joint decision making? Answering this question requires us to adopt human-centered perspectives and approaches to systematically evaluate these XAI methods. In this talk, I'll discuss a few human-subject studies that my group has carried out, aiming to understand how the presence of AI explanations in AI-assisted decision-making impacts decision-makers' understanding of and calibrated trust in the AI model, influences decision-makers' decision fairness, and how changes in AI explanations due to model updates affect decision-makers' trust in and satisfaction with the AI model. Our results indicate that the effects of XAI methods can differ greatly across decision-making contexts in which people have varying levels of domain expertise, and that the use of XAI methods may sometimes even lead to unintended negative consequences.


Bio: Ming Yin is an Assistant Professor in the Department of Computer Science at Purdue University. Her current research interests include human-AI interaction, crowdsourcing and human computation, and computational social sciences. She completed her Ph.D. in Computer Science at Harvard University and received her bachelor's degree from Tsinghua University. Ming was the Conference Co-Chair of AAAI HCOMP 2022. Her work has been recognized with multiple best paper awards (CHI 2022, CSCW 2022, HCOMP 2020) and best paper honorable mention awards (CHI 2019, CHI 2016).




Virtual event details

The virtual part of the workshop will be held on 31 August, 2023 (GMT).

To accommodate various timezones, the workshop will be split into two sessions with a 4-hour gap in between. Authors will be able to present in whichever session best suits their timezone, and attendees may attend one or both parts of the workshop.

Part 1 will run approximately 00:00-04:00 (GMT), 31 August.

Part 2 will run approximately 08:00-12:00 (GMT), 31 August.

Proceedings

The proceedings of the 2023 edition of the IJCAI workshop on Explainable Artificial Intelligence (XAI) are organized in two sections. The first section covers the general track, which focuses on explainability in areas such as supervised and unsupervised machine learning, knowledge representation, and the social and philosophical aspects of explainability. The second section corresponds to the special track on explainable autonomous agents and focuses on systems that operate in the context of an environment, typically through a goal-driven sequence of decisions.

General track

"Explain it in the Same Way!" -- Model-Agnostic Group Fairness of Counterfactual Explanation.

André Artelt and Barbara Hammer 


Automatic Concept Embedding Model (ACEM): No train-time concepts, No issue!

Rishabh Jain


Make Predictions Predictable: Fast Concept-based Counterfactual Explanations for Images.

Ruihan Zhang, Tim Miller, Krista Ehinger and Benjamin Rubinstein


Prompt-Based Editing for Controllable Text Style Transfer.

Guoqing Luo, Yu Tong Han, Lili Mou and Mauajama Firdaus


Receptive Field Reducer for Explaining Graph Neural Networks.

Anna Himmelhuber, Mitchell Joblin, Martin Ringsquandl and Thomas A. Runkler


Adversarial Attacks and Defenses in Explainable Artificial Intelligence: A Survey.

Hubert Baniecki and Przemyslaw Biecek


Is Last Layer Re-Training Truly Sufficient for Robustness to Spurious Correlations?

Phuong Quynh Le, Jörg Schlötterer and Christin Seifert


Comparison of Supervised and Unsupervised Concepts in Concept-based Interpretable Models.

Ruihan Zhang, Tim Miller, Kris Ehinger and Benjamin Rubinstein


Multi-Objective Decision-Making: Understanding the Users’ Explainability Needs.

Zuzanna Osika, Jazmin Zatarain-Salazar and Pradeep K. Murukannaiah


If Only...If Only...If Only...We Could Explain Everything: A User Study of Group-Counterfactuals to Explain Multiple Instances. 

Greta Warren, Mark Keane, Christophe Guéret and Eoin Delaney


Measuring Perceived Trust in XAI-Assisted Decision-Making by Eliciting a Mental Model.

Mohsen Abbaspour Onari, Isel Grau, Marco S. Nobile and Yingqian Zhang


Seeking Interpretability and Explicability in Binary Activated Neural Networks.

Benjamin Leblanc and Pascal Germain


What's meant by explainable model: A Scoping Review.

Mallika Mainali and Rosina Weber


Selecting Feature Changes for Counterfactual Explanation: A Class-to-Class Approach.

Xiaomeng Ye, David Leake, Yu Wang, Ziwei Zhao and David Crandall


Evaluating the overall sensitivity of saliency-based explanation methods.

Harshinee Sriram and Cristina Conati 


Science Communications for Explainable Artificial Intelligence.

Simon Hudson and Matija Franklin


Counterfactual Explanation via Search in Gaussian Mixture Distributed Latent Space.

Xuan Zhao, Klaus Broelemann and Gjergji Kasneci



Special track: Explainable autonomous agents

Experiential Explanations for Reinforcement Learning.

Amal Alabdulkarim, Gennie Mansi, Kaely Hall and Mark Riedl 


A Mental Model Based Theory of Trust.

Zahra Zahedi, Sarath Sreedharan and Subbarao Kambhampati


eXplainable AI (XAI): It's a Conversation Not an Ultimatum!

Mark Keane


Causal Explanations for Sequential Decision-Making in Multi-Agent Systems.

Balint Gyevnar, Cheng Wang, Christopher G. Lucas, Shay B. Cohen and Stefano V. Albrecht


Chess and Explainable AI.

Yngvi Björnsson


DR-HAI: Argumentation-based Dialectical Reconciliation in Human-AI Interactions.

Stylianos Loukas Vasileiou, Ashwin Kumar, William Yeoh, Tran Cao Son and Francesca Toni


Finding Uncommon Ground: A Human-Centered Model for Extrospective Explanations.

Laura Spillner, Nima Zargham, Mihai Pomarlan, Robert Porzel and Rainer Malaka


CODEX: A Cluster-Based Method for Explainable Reinforcement Learning.

Timothy Mathes, Jessica Inman, Andrés Colón and Simon Khan


Submission Details

Authors may submit long papers (7 pages plus unlimited pages of references) or short papers (4 pages plus unlimited pages of references).

All papers should be typeset in the IJCAI style (https://www.ijcai.org/authors_kit). Accepted papers will be made available on the workshop website. 

Supplementary material can be added as an appendix at the end of the main PDF file; that is, submit a single PDF containing the main body of the paper followed by an appendix, which is not included in the page count. Reviewers will not be required to read the supplementary material, so ensure the body of the paper is self-contained.

Accepted papers will not be published in archival proceedings. This means that you can submit your paper to another venue after the workshop.

Reviewing is double-blind, so papers should not contain any identifying information.

Authors can submit papers at the XAI2023 Easychair site: https://easychair.org/conferences/?conf=xaiijcai23

News!

17 March: Great news! The XAI workshop has been accepted at IJCAI for 2023!

12 May: We received 31 submissions to the workshop! Thanks to all the authors who submitted. Now, on to the reviews.

Call for papers

The Explainable AI (XAI) workshop aims to provide a forum for discussing recent research on XAI methods, highlighting and documenting promising approaches, and encouraging further work, thereby fostering connections among researchers interested in AI, human-computer interaction, and cognitive theories of explainability and transparency. This topic is of particular importance to, but not limited to, machine learning, AI planning, and knowledge representation & reasoning.

Explainable Artificial Intelligence (XAI) addresses the challenge of how to interact with people to help them understand models used in AI systems and their specific decisions. The need for explainable models increases as AI systems are deployed in critical applications.

The need for interpretable models exists independently of how the models were acquired (e.g., they may have been hand-crafted or interactively elicited without using ML techniques). This raises several questions, such as: How should explainable models be designed? What queries should AI systems be able to answer about their models and decisions? How should user interfaces communicate decision making and support understanding? What types of user interactions should be supported? And how should explainability be evaluated?

In addition to encouraging descriptions of original or recent contributions to XAI (i.e., theory, simulation studies, subject studies, demonstrations, applications), we will welcome contributions that: survey related work; perform large-scale empirical studies; describe key issues that require further research; or highlight relevant challenges of interest to the AI community and plans for addressing them.

The call for papers is divided into a general track and one special track: explainable autonomous agents.


Targeted Participants and Topic Areas

XAI may interest researchers studying the topics listed below (among others). We are particularly interested in papers that draw out cross-disciplinary problems and solutions in explainability.

Special track: Explainable autonomous agents

The intended focus of this track is on explainable autonomous agents: systems that operate in the context of an environment, typically through a goal-driven sequence of decisions. This stands in contrast to the substantial existing work on interpretable machine learning, which generally focuses on the single input-output mappings of "black box" models such as neural networks. While such ML models are an important tool, intelligent behavior extends over time and needs to be explained and understood as such. This track welcomes work on explainable agents across the full breadth of related topics.

General track

The general track will focus on research that addresses problems of explainability in areas such as supervised and unsupervised machine learning, knowledge representation, and the social and philosophical aspects of explainability. As AI models are increasingly deployed in real-world settings involving high-stakes decisions, the need to understand their decision-making grows, with motivations ranging from enhancing trust in human-AI collaboration to legal accountability. While a variety of methods for explaining AI models have been introduced, there are still many gaps in our ability to provide explanations that support users; this track welcomes work that addresses these gaps.

Overall, we expect that this meeting will provide attendees with an opportunity to learn about progress on XAI, to share their own perspectives, and to learn about potential approaches for solving key XAI research challenges. This should result in effective cross-fertilization among different disciplines that are shaping the XAI research area.


Workshop organisers

Workshop chairs: Ofra Amir, Tim Miller, Hendrik Baier

Roundtable chair: Rosina O. Weber

Industry and applications chair: Daniele Magazzeni

Explainable autonomous agents track chairs: Hendrik Baier, Sarath Sreedharan, Silvia Tulli, Abhinav Verma 

General track chairs: Tobias Huber, Tim Miller, Ofra Amir

Contact: Tim Miller (The University of Melbourne, Australia) tmiller@unimelb.edu.au 

Program Committee

Thanks to our program committee, who make it possible for this workshop to happen!

Mark Hall, Airbus

Rebecca Eifler, Saarland University

Silvan Mertes, Augsburg University

Greta Warren, University College Dublin

Vera Liao, Microsoft Research

Ashwin Kumar, Washington University in St Louis

Yotam Amitai, Technion

Abhishek Dubey, Vanderbilt University

Songtuan Lin, The Australian National University

Benjamin Krarup, King's College London

David Leake, Indiana University

Barry O'Sullivan, University College Cork, Ireland

Belen Diaz-Agudo, Universidad Complutense de Madrid

Zahra Zahedi, Arizona State University

Krysia Broda, Imperial College

Rebekah Wegener, Salzburg University

Sanjay Kariyappa 

Mohan Sridharan, University of Birmingham

Joerg Hoffmann, Saarland University

David Martens, University of Antwerp

Eoin Delaney, University College Dublin

Stylianos Loukas Vasileiou, Washington University in St Louis

Fabio Mercorio, University of Milano Bicocca

Sriram Gopalakrishnan, JP Morgan Chase

Kacper Sokol, RMIT University

Saumitra Mishra, J P Morgan

Michael Floyd, Knexus Research

Zana Bucinca, Harvard University

Abeer Alshehri, University of Melbourne

Kary Främling, Umeå University

Isaac Lage, Harvard University

Denise Agosto, Drexel University

Jörg Cassens, University of Hildesheim

Sachin Grover, PARC (a Xerox company)

Serena Booth, Massachusetts Institute of Technology

Ramon Fraga Pereira, University of Manchester

Cristina Conati, The University of British Columbia

Maarten de Rijke, University of Amsterdam

Emma Baillie, The University of Melbourne

Mark Keane, University College Dublin

Senka Krivic, University of Sarajevo

Christin Seifert, University of Marburg

Liz Sonenberg, University of Melbourne

Eoin Kenny, MIT

Sanghamitra Dutta, University of Maryland College Park

Giovanni Ciatto, University of Bologna

Ruihan Zhang, University of Melbourne

Meiyi Ma, Vanderbilt University

Aaquib Tabrez, University of Colorado Boulder

Freddy Lecue, CortAIx Thales

Ian Watson, University of Auckland

Gerard Canal, King's College London

Mudit Verma, Arizona State University