Call for papers
As AI becomes more ubiquitous, complex, and consequential, the need for people to understand how its decisions are made, and to judge whether those decisions are correct, becomes increasingly crucial for reasons of ethics and trust. The field of Explainable AI (XAI) aims to address this problem by designing AI whose decisions can be understood by humans.
This workshop brings together researchers working in explainable AI to share and learn about recent research, with the hope of fostering meaningful connections between researchers from diverse backgrounds, including but not limited to artificial intelligence, human-computer interaction, human factors, philosophy, and cognitive and social psychology.
The meeting will provide attendees with an opportunity to learn about progress on XAI, to share their own perspectives, and to explore potential approaches to key XAI research challenges. This should result in effective cross-fertilization among research on machine learning, AI more generally, intelligent user interaction (interfaces, dialogue), and cognitive modeling.
Topics of interest include but are not limited to:
Technologies and Theories
· Explainable machine learning (e.g., deep, reinforcement, statistical, relational, transfer, case-based)
· Explainable planning
· Human-agent explanation
· Human-behavioural evaluation for XAI
· Psychological and philosophical foundations of explanation
· Interaction design and XAI
· Historical perspectives of XAI
· Cognitive architectures
· Commonsense reasoning
· Decision making
· Episodic reasoning
· Intelligent agents (e.g., planning and acting, goal reasoning, multiagent architectures)
· Knowledge acquisition
· Narrative intelligence
· Temporal reasoning
Applications
· After-action reporting
· Ambient intelligence
· Autonomous control
· Caption generation
· Computer games
· Explanatory dialog design and management
· Image processing (e.g., security/surveillance tasks)
· Information retrieval and reuse
· Intelligent decision aids
· Intelligent tutoring
· Legal reasoning
· Recommender systems
· User modeling
· Visual question-answering (VQA)
Important dates
Paper submission: 19 May 2019
Notification: 15 June 2019
Camera-ready submission: 10 July 2019
Authors may submit *long papers* (6 pages plus up to one page of references) or *short papers* (4 pages plus up to one page of references).
All papers should be typeset in the IJCAI style (https://www.ijcai.org/authors_kit). Accepted papers will be published on the workshop website.
Papers must be submitted in PDF format via the EasyChair system (https://easychair.org/conferences/?conf=xai19).
Organizers
Tim Miller (University of Melbourne, Australia), primary contact: email@example.com
Rosina Weber (Drexel University, USA)
David Aha (NRL, USA)
Daniele Magazzeni (King’s College London, UK)