This workshop is held at FDG 2025, on April 15 in Vienna.
It focuses on advancing explainable AI (xAI) in the domain of computer game playing, a field where AI has achieved remarkable successes but often lacks transparency. Despite the impressive performance of AI agents in board and video games, general xAI techniques have only rarely been adopted in games. The workshop aims to bring together researchers to explore the use of xAI in games and to develop methods specifically designed for game playing. Topics of interest include, among others, the application of existing xAI methods, the development of new xAI models, and the extraction of interpretable game strategies. The workshop will serve as an open forum for short paper presentations, demonstrations, and panel discussions, fostering collaboration and innovation at the intersection of xAI and game research.
Historically, games have been a premier testbed for artificial intelligence (AI) techniques. Chess has even been called the Drosophila of AI, in the hope that the development of an intelligent chess agent would help us to penetrate to the core of human intelligence.
Recent years have produced game-playing agents of unprecedented strength, resulting in major breakthroughs and, in many cases, unexpected progress in computer game playing. While AI research previously focused on highly structured games such as Chess and Go, commercial video games such as Doom, StarCraft, and Dota have recently found increasing use as well.
Nevertheless, AI-based game-playing agents have provided only limited insights into their behaviour. Even though books have been written about this new generation of game-playing agents and their impact on human game play, insights into the agents' playing strategies are hard to obtain and are essentially based on human guesswork. As an example, chess endgame databases have been around for decades, providing perfect play in a limited set of chess positions, but laborious attempts by chess grandmaster John Nunn to understand these databases have yielded only a few general insights. For the most part, these databases proved useful for checking prior beliefs and produced some extraordinary example positions, but provided little overarching benefit.
In parallel, general techniques for explaining black-box AI models have been developed with success, in particular in image processing. Similarly, interest in directly learning interpretable models, e.g., in the form of decision rules, has recently been renewed. However, notwithstanding a few exceptions, such techniques have not yet seen widespread use in game-playing AI, despite their high potential utility. As a result, the extraction of interpretable playing strategies from AI agents and the construction of AI agents that can provide interpretable justifications for their decisions are still in their infancy.
The goal of this workshop is to bring together researchers who work on the development of explainable AI (xAI) models in various aspects of computer and board game research. The workshop is thus interested in, but not limited to, the following topics:
Applications and evaluations of known xAI techniques in games.
Development of novel xAI techniques specifically for games.
Machine learning of intrinsically interpretable models of game play.
The impact and results of applying xAI to the games themselves.
We intend the workshop to be an open forum for exchanging ideas. Thus, while we expect written short-paper submissions with accompanying oral presentations to be the primary format, we invite submissions of all forms. Some examples of submission types include:
Short papers, which we aim to include in the proceedings.
Oral, poster, and spotlight presentations with accompanying discussion.
Demonstrations of working xAI in game systems.
Panel discussions of crucial aspects of xAI in games.
Long-form talks on the intersection of xAI and games.
Presentations can be given in hybrid form; all other types of submissions require in-person attendance. The workshop is intended to last half a day, including breaks, but the exact duration depends on the number and types of submissions.
10:30 - 10:45: Introduction
10:45 - 11:30: Günter Wallner: "Explaining Play Behaviour through Visualisation" (invited talk)
11:30 - 12:00: Daniela De Angeli: "Creating an xAI Framework for Diverse Game Characters"
12:00 - 13:00: Lunch Break
13:00 - 14:00: Yngvi Björnsson: "Explainable AI in Chess" (invited talk, virtual)
14:00 - 14:30: Leon J. Tanis, Rafael Fernandes Cunha and Marco Zullich: "Bridging Faithfulness of explanations and Deep Reinforcement Learning: A Grad-CAM Analysis of Space Invaders"
The workshop on xAI in computer game playing is organised by researchers from the Institute for Application-Oriented Knowledge Processing, the Institute of Machine Learning, and the LIT AI Lab in Linz, Austria. The organisers combine expertise in learning interpretable models, explainable AI, and game AI, providing the necessary background for the workshop:
Johannes Fürnkranz
Timo Bertram
Florian Beck
Van Quoc Phuong Huynh
Short papers need to be formally submitted for peer review. Papers must adhere to the following requirements:
4-10 pages
ACM SIGCONF single-column format
Submitted through EasyChair (workshop on xAI in Game Playing)
For other forms of activity, we expect a 1-2 page description of the intent and plan.
Paper submission deadline: 21 February 2025 (extended from 7 February 2025)
Author notification: 10 March 2025
Camera-ready deadline: 24 March 2025
Workshop date: 15 April 2025