Important Dates
Paper submission: June 15, 2024 (Anywhere on Earth)
Review transfer option for papers rejected from the main conference: July 11
Notification: July 18
Camera-ready submission: August 8
Workshop: October 20, 2024
Call for Papers
AI decision-making methods are increasingly applied to real-world problems in diverse domains such as logistics, manufacturing, and resource allocation. Ensuring the trustworthiness of both sequential decision-making (SDM) methods (e.g., reinforcement learning and planning) and optimization approaches (e.g., mathematical programming and metaheuristics) is therefore critical.
A variety of research areas are concerned with aspects of AI trustworthiness. These include explainability and interpretability, which are key for establishing accountability, improving human-AI collaboration, and addressing ethical concerns around autonomous systems. They include robustness to distributional shift and non-stationarity, as real-world environments can evolve over time in unforeseen ways. Trustworthy systems should also be sustainable, for example by being transparent about the computational costs of their training and achieving their benefits with minimal energy consumption. Furthermore, fairness is critical in real-world decision-making, particularly when trading off between multiple criteria and multiple agents. These and other research directions aim at increasing trust in, and acceptance of, SDM and optimization systems by going beyond pure performance maximization.
The purpose of this workshop is to promote collaboration and cross-fertilization of ideas between researchers working in different areas of trustworthy sequential decision-making and optimization. The workshop aims to provide a forum for the dissemination of high-quality research on aspects of trustworthiness in SDM and optimization, facilitating the development of the respective research communities.
The following is a non-exhaustive list of topics that we would like to cover in the workshop:
Explainable reinforcement learning
Explainable planning, search or combinatorial optimization
Explainability in multi-agent systems
Visualizations or (LLM-assisted) dialogue systems for explainable decision-making
Robust or safe reinforcement learning
Robustness in stochastic SDM or optimization
Uncertainty quantification, out-of-distribution detection or handling concept drift in SDM or optimization
Fairness in reinforcement learning or multi-agent systems
Fairness in SDM or optimization
Privacy in SDM or optimization
Sustainable reinforcement learning
Sustainable, energy-aware or surrogate-based SDM or optimization
Sustainable hyper-parameter tuning or automated machine learning
Validating, measuring or evaluating trustworthiness for SDM or optimization
Multi-objective trustworthy reinforcement learning
Multi-objective trustworthy optimization
TSDO 2024 welcomes submissions of original work, including preliminary results and work-in-progress (e.g., theory, simulation studies, subject studies, demonstrations, applications), as well as contributions that survey related work, describe key issues that require further research, or highlight relevant challenges of interest to the AI community and plans for addressing them. The workshop also aims to be a platform for presenting abstracts of already published journal or conference papers.
Submission Details
Authors may submit long papers (up to 7 pages plus unlimited references) or extended abstracts (2 pages plus unlimited references). We will also consider relevant papers that were rejected from the main conference via the review transfer option; to use this option, authors are invited to submit a request after rejection. We will then make a decision based on the reviews written for the main conference, which will be made available to us in anonymized form. Please note the relevant deadlines at the top of this page.
All papers should be typeset in the ECAI style (ECAI LaTeX Template). Accepted papers will be made available on the workshop website.
Supplementary material can be added as an appendix at the end of the main PDF file; that is, submit a single PDF containing the main body of the paper followed by an appendix, which is not included in the page count. Reviewers will not be required to read the supplementary material, so make sure the body of the paper is self-contained.
Accepted papers will not be published in archival proceedings, so you can submit your paper to another venue after the workshop. However, we aim to edit a special issue on the topic of the workshop, giving selected papers an opportunity to be published in extended versions.
Reviewing is double-blind, so papers should not contain any identifying information. The reviewing criteria will be the soundness of the scientific approach, the novelty of the work, and its fit with the scope of the workshop; preliminary results and work-in-progress are explicitly welcome.
Submission link: https://cmt3.research.microsoft.com/TSDO2024
Workshop Schedule
Session 1 (9:00-10:30)
9:00-9:10 Welcome & introduction
9:10-9:45 Keynote talk: Axel Abels, Enhancing Collective Intelligence through Learned Aggregation
9:45-10:30 Contributed talks:
UDUC: An Uncertainty-driven Approach for Learning-based Robust Control
Shielded FOND: Planning With Safety Constraints in Pure-Past Linear Temporal Logic
Safety Verification of Tree-Ensemble Policies via Predicate Abstraction
Coffee break & networking (10:30-11:00)
Session 2 (11:00-12:40)
11:00-12:30 Contributed talks:
Enhancing Agent Interpretability with Time-Agnostic Clustering in Reinforcement Learning
Explaining an Agent’s Future Beliefs through Temporally Decomposing Future Reward Estimators
X-Vent: ICU Ventilation with Explainable Model-Based Reinforcement Learning
A Meta-Learning Approach for Multi-Objective Reinforcement Learning in Sustainable Home Energy Management
Enabling MCTS Explainability for Sequential Planning Through Computation Tree Logic
Structure and Reduction of MCTS for Explainable-AI
12:30-12:40 Closing remarks
Keynote Speaker: Axel Abels
Title: Enhancing Collective Intelligence through Learned Aggregation
Abstract:
Human decision-making is inherently flawed by individual and social biases that distort our perception of truth, often leading to sub-optimal group decisions. We propose an alternative to consensus decision-making: aggregating independent judgments to minimize the effects of biases such as groupthink and herding. The key hypothesis is that by carefully optimizing the collectivization of knowledge, it will be substantially harder for humans to impose their biases on the final decision. The core of our work therefore involves the development and analysis of algorithms designed to effectively aggregate diverse sources of expertise. We focus on transparent aggregation methods that use online machine/reinforcement learning to take into account the nuances of individual expertise and the impact of biases, aiming to filter out noise and enhance the reliability of collective decision-making under uncertainty. Our findings demonstrate a marked improvement in decision-making accuracy and a reduction in bias, underscoring the potential of technology-assisted methods in fostering more effective collective intelligence.
Bio:
Axel Abels is currently a doctoral researcher affiliated with the Vrije Universiteit Brussel and the Université Libre de Bruxelles. His early work extended the applicability of deep reinforcement learning to multi-objective problems. His current research focuses on collective decision-making, and more specifically on how a system of artificial agents can support groups of human decision-makers to promote the emergence of collective intelligence.
His research interests include reinforcement learning in general, with a focus on collective intelligence, group decision systems, multi-agent systems, and multi-objective reinforcement learning.
We are looking forward to this talk!
Workshop Organizers
Hendrik Baier (h.j.s.baier@tue.nl)
Laurens Bliek
Zaharah Bukhsh
Isel Grau
Yaoxin Wu
Yingqian Zhang