A CP/SAT 2025 workshop - Sunday 10/08/2025 - Glasgow, Scotland
The main goal of the ExCoS workshop is to provide a platform for discussing recent advances and exploring future research directions at the intersection of constraint solving and explainability.
This includes developing techniques for explaining the outcome of a solver (e.g. unsatisfiability, optimality), techniques for making the constraint solving process more explainable, as well as ways of using constraint solvers to compute explanations in various application settings, such as in machine learning and planning.
The workshop specifically targets both the SAT and CP communities and is organized as an in-person event on the first CP/SAT 2025 workshop day, August 10, 2025.
Submission deadline: June 09, 2025 Submissions closed
Notification: by June 20, 2025 See program below
Workshop date: August 10, 2025
Note: Each talk is allocated (up to) 18 minutes + (at least) 4 minutes for discussion.
10:50 -- 11:00 Welcome, the organizers
11:00 -- 11:22 Proving and Explaining the Equivalence of Constraint Formulations, Pablo Manrique, Stefan Szeider
11:22 -- 11:45 Explaining SAT Solving Using Causal Reasoning, Jiong Yang, Arijit Shaw, Teodora Baluta, Mate Soos, Kuldeep S. Meel
11:45 -- 12:07 Using Certifying Constraint Solvers for Generating Step-Wise Explanations, Ignace Bleukx, Maarten Flippo, Emir Demirovic, Bart Bogaerts, Tias Guns
12:07 -- 12:30 Counterfactual Explanations for Unsatisfiable Producer/Consumer Problems, Sharmi Dev Gupta, Helmut Simonis, Luis Quesada, Barry O'Sullivan
12:30 -- 13:30 Lunch break
13:30 -- 13:52 Creating Pen and Paper Puzzles with a Unique Solution, Christopher Jefferson
13:52 -- 14:15 Understanding How Players Solve Puzzles, Thomas Lofaro, Ruth Hoffmann, Miriam Sturdee, Christopher Jefferson, Alice M Lynch
14:15 -- 14:37 Preference Elicitation for Explanations in Logic Puzzles, Marco Foschini, Marianne Defresne, Emilio Gamba, Bart Bogaerts, Tias Guns
14:37 -- 15:00 Explanations for Planning via CSP: Application to a Concrete Use Case, Nika Beriachvili, Rebecca Eifler, Arthur Bit-Monnot
15:00 -- 15:30 Coffee break
15:30 -- 15:52 A Framework for Data-driven Explainability in Mathematical Optimization, Michael Hartisch
15:52 -- 16:15 Explanations for Hierarchical Neuro-Symbolic AI, Sushmita Paul, Jinqiang Yu, Jip J. Dekker, Maria Garcia de la Banda, Peter J. Stuckey, Alexey Ignatiev
16:15 -- 16:37 Explaining Mismatches in Expected versus Perceived Behavior for Interacting Digital Systems, Mohammad Alkhiyami, Gianluca Martino, Görschwin Fey
16:37 -- 17:00 (cancelled) Explainable Rational Synthesis in Multi-Agent Systems, Alireza Farhadi, Mohammad Izadi, Jafar Habibi, Tobias Meggendorfer
16:37 -- 17:00 Open Discussion session: observations, common challenges, new collaborations?
11:00 -- 12:30
Proving and Explaining the Equivalence of Constraint Formulations, Pablo Manrique, Stefan Szeider
In this work, we formalize the Constraint Satisfaction Problem in Lean 4, focusing on equivalence and equisatisfiability. Our definitions and results can be applied to prove properties of parametrized CSPs, such as the validity of symmetry-breaking constraints, thereby improving explainability through the readability of Lean proofs.
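The abstract leaves the formal definitions implicit; the following minimal Lean 4 sketch (our own illustration with assumed definitions, not the authors' formalization) shows what CSP equivalence and equisatisfiability might look like, together with the easy direction that equivalence implies equisatisfiability:

```lean
-- A minimal sketch, assuming a simple representation: a CSP over variables V
-- and domain D is a list of constraints, each a predicate on assignments.
structure CSP (V D : Type) where
  constraints : List ((V → D) → Prop)

def CSP.satisfies {V D : Type} (p : CSP V D) (a : V → D) : Prop :=
  ∀ c ∈ p.constraints, c a

def CSP.sat {V D : Type} (p : CSP V D) : Prop :=
  ∃ a, p.satisfies a

-- Equivalent formulations admit exactly the same solutions;
-- equisatisfiable ones merely agree on solvability.
def equivalent {V D : Type} (p q : CSP V D) : Prop :=
  ∀ a, p.satisfies a ↔ q.satisfies a

def equisatisfiable {V D : Type} (p q : CSP V D) : Prop :=
  p.sat ↔ q.sat

-- Equivalence is the stronger notion: it implies equisatisfiability.
theorem equiv_implies_equisat {V D : Type} {p q : CSP V D}
    (h : equivalent p q) : equisatisfiable p q :=
  ⟨fun ⟨a, ha⟩ => ⟨a, (h a).mp ha⟩, fun ⟨a, ha⟩ => ⟨a, (h a).mpr ha⟩⟩
```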
Explaining SAT Solving Using Causal Reasoning, Jiong Yang, Arijit Shaw, Teodora Baluta, Mate Soos, Kuldeep S. Meel
The past three decades have witnessed notable success in designing efficient SAT solvers, with modern solvers capable of solving industrial benchmarks containing millions of variables in just a few seconds. Much of this success owes to the widely used conflict-driven clause learning (CDCL) algorithm, which nevertheless lacks comprehensive theoretical investigation. Furthermore, it has been observed that CDCL solvers still struggle with specific classes of benchmarks comprising only hundreds of variables, which contrasts with their widespread use in real-world applications. Consequently, there is an urgent need to uncover the inner workings of these seemingly weak yet powerful black boxes.
In this work, we present a first step towards this goal by introducing an approach called CausalSAT, which employs causal reasoning to gain insights into the functioning of modern SAT solvers. CausalSAT initially generates observational data from the execution of SAT solvers and learns a structured graph representing the causal relationships between the components of a SAT solver. Subsequently, given a query such as whether a clause with a low literal block distance (LBD) has a higher clause utility, CausalSAT calculates the causal effect of LBD on clause utility and provides an answer to the question. We use CausalSAT to quantitatively verify hypotheses previously regarded as “rules of thumb” or empirical findings, such as the query above or the notion that clauses with high LBD experience a rapid drop in utility over time. Moreover, CausalSAT can address previously unexplored questions, like which branching heuristic leads to greater clause utility, in order to study the relationship between branching and clause management. Experimental evaluations using practical benchmarks demonstrate that CausalSAT effectively fits the data, verifies four “rules of thumb”, and provides answers to three questions closely related to implementing modern solvers.
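To make the adjustment step concrete, here is a hedged toy sketch (our own, with synthetic data and an assumed confounder; not the CausalSAT implementation) of estimating the causal effect of LBD on clause utility by regression adjustment:

```python
# A toy sketch, assuming clause size confounds both LBD and utility; the
# regression coefficient on LBD, controlling for size, estimates the causal
# effect under this (assumed) causal graph. Not the CausalSAT code.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
size = rng.integers(2, 30, n).astype(float)                # confounder
lbd = np.clip(0.5 * size + rng.normal(0, 1, n), 1, None)   # "treatment"
utility = 10.0 - 0.8 * lbd - 0.2 * size + rng.normal(0, 1, n)

# Backdoor adjustment via least squares over [1, lbd, size].
X = np.column_stack([np.ones(n), lbd, size])
beta, *_ = np.linalg.lstsq(X, utility, rcond=None)
print(f"estimated causal effect of LBD on utility: {beta[1]:.2f}")  # ~ -0.80
```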
Using Certifying Constraint Solvers for Generating Step-Wise Explanations, Ignace Bleukx, Maarten Flippo, Emir Demirovic, Bart Bogaerts, Tias Guns
The ultimate goal of declarative problem solving is that users should be able to specify a problem declaratively (meaning they only need to specify what their problem is, not how to solve it) and can then simply apply a generic tool (often called a solver) to find solutions to their problem. This idea, which is very prominent in Constraint Programming (CP), has also been called the holy grail of computer science. Over the last couple of decades, solvers in various domains, such as CP, Answer Set Programming, Mixed Integer Programming and Boolean Satisfiability (SAT), have become increasingly powerful, allowing them to solve complex, real-world problems often involving thousands of variables and constraints. However, due to the complexity of the problems solved by modern solvers, explaining a solution to a human remains a difficult challenge.
Counterfactual Explanations for Unsatisfiable Producer/Consumer Problems, Sharmi Dev Gupta, Helmut Simonis, Luis Quesada, Barry O'Sullivan
In this paper, we build on our earlier approach introduced in [3], where we identified maximal relaxations and counterfactual explanations for infeasible RCSP problems. We use that approach to find counterfactual explanations for infeasible producer/consumer problems. This application would provide an interactive, user-centric approach that first identifies the exact causes of infeasibility and then provides the user with a single feasible solution closest to their original constraints.
[3] Sharmi Dev Gupta, Barry O’Sullivan, and Luis Quesada. Counterfactual explanation through constraint relaxation. In 2024 IEEE 36th International Conference on Tools with Artificial Intelligence (ICTAI), pages 396–403, 2024. doi:10.1109/ICTAI62512.2024.00064.
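As a toy illustration of the relaxation idea (our own sketch with PySAT's RC2 MaxSAT solver; not the authors' implementation), one can model the user's wishes as soft clauses: the MaxSAT model is a closest feasible solution, and the falsified soft clauses form a minimal set of constraints to relax:

```python
# A hedged toy, not the paper's code: counterfactual relaxation via MaxSAT.
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

wishes = [[1], [2], [3]]          # user's (unit) soft constraints
wcnf = WCNF()
wcnf.append([-1, -2])             # hard: wishes 1 and 2 are incompatible
for w in wishes:
    wcnf.append(w, weight=1)      # soft: may be relaxed, at a cost

with RC2(wcnf) as rc2:
    model = rc2.compute()         # closest feasible solution, e.g. [1, -2, 3]
relaxed = [w for w in wishes if not set(w) & set(model)]
print("feasible solution:", model)
print("constraints to relax:", relaxed)   # minimal correction set, e.g. [[2]]
```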
13:30 -- 15:00
Creating Pen and Paper Puzzles with a Unique Solution, Christopher Jefferson
The generation of progressive pen and paper puzzles (PPPP), such as Sudoku or Star Battle, presents a significant challenge, particularly in ensuring that each PPPP has a unique solution and a desired level of difficulty. While ad-hoc generators exist for popular PPPPs, they are not generalisable to more complex structures. In this paper, we present a general, work-in-progress framework for creating PPPP instances that requires only a single high-level constraint specification of a PPPP. Our approach constructs a comprehensive SAT ‘uber-model’ that encodes the PPPP’s instance variables (the setup), user variables (the solution), and all associated constraints. This unified model allows us to use Large Neighbourhood Search (LNS) to explore the vast space of possible PPPP instances. The search is guided by a quality function that prioritises instances with a unique solution and a specific difficulty profile. We measure difficulty by analysing the Minimal Unsatisfiable Sets (MUSes) required for a step-by-step logical deduction of the solution. Our current system has been used to generate instances of a variety of different PPPPs from a logical description of the PPPP.
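As a rough sketch of the difficulty measure (our own toy using PySAT's MUSX; not the authors' system), one can certify a single deduction step by negating it and extracting a MUS, whose size serves as a difficulty proxy:

```python
# A hedged toy, not the authors' framework: score the deduction
# "variable 1 must be true" by the size of a MUS that justifies it.
from pysat.formula import WCNF
from pysat.examples.musx import MUSX

puzzle = [[1, 2], [1, -2], [3, -1]]   # toy puzzle rules as clauses

wcnf = WCNF()
wcnf.append([-1])                     # hard: negate the candidate deduction
for clause in puzzle:
    wcnf.append(clause, weight=1)     # soft: rules that may enter the MUS

mus = MUSX(wcnf, verbosity=0).compute()
# 'mus' lists the soft (puzzle) clauses in a minimal unsatisfiable subset
# together with the negated deduction; here two of the three rules suffice.
print("deduction justified by", len(mus), "puzzle clauses:", mus)
```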
Understanding How Players Solve Puzzles, Thomas Lofaro, Ruth Hoffmann, Miriam Sturdee, Christopher Jefferson, Alice M Lynch
Pen-and-paper puzzles offer a controlled environment to study problem-solving, player behaviour, and engagement. This abstract describes how players interact with unfamiliar puzzles, building upon existing research on puzzle solving, explanations, and hint systems. We investigated how hints that explain and/or indicate next steps help players solve a particular puzzle (Binairo). The analysis found that players using hints generally performed better, completed more puzzles, and showed signs of strategic thinking. This suggests that the availability of hints may have implications for playability, engagement and facilitating mental focus.
Preference Elicitation for Explanations in Logic Puzzles, Marco Foschini, Marianne Defresne, Emilio Gamba, Bart Bogaerts, Tias Guns
Step-wise explanation sequences are typically employed to explain how to derive variable assignments. However, multiple explanations may exist for the same assignment, and users often have preferences among them. Therefore, we propose capturing these preferences by extending the Constructive Preference Elicitation framework — an interactive approach designed for multi-objective problems — to the context of explanation generation.
Explanations for Planning via CSP: Application to a Concrete Use Case, Nika Beriachvili, Rebecca Eifler, Arthur Bit-Monnot
We showcase a real-world industrial logistics planning use case within Airbus, dealing with the storage and management of cargo transported by successive Beluga aircraft to and from an aircraft assembly site. The cargo consists of jigs – special devices that can hold aircraft parts – that must be unloaded/loaded from/to Beluga aircraft in a predefined order, as well as delivered to production lines in a predefined order. We address it by computing plans and providing decision support via conflict-based explanations (MUSes) over goals and preferences described by the human planners at Airbus. To compute these, we apply a MUS/MCS enumeration algorithm on top of the constraint encoding of planning problems used by the hybrid CP/SAT-based lifted plan-space planner Aries. We then suggest how these high-level explanations can be expanded into more detailed, multi-level explanations involving lower-level constraints of the problem.
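As a toy illustration of goal-level MUS/MCS reasoning (our own sketch with PySAT's LBX enumerator; not the Aries-based pipeline), plan constraints become hard clauses and goals/preferences soft ones, and each enumerated minimal correction set names a smallest set of goals to give up:

```python
# A hedged toy, not the Aries pipeline: enumerate minimal correction sets
# (MCSes) over soft goal clauses; MCSes are the duals of MUSes.
from pysat.formula import WCNF
from pysat.examples.lbx import LBX

wcnf = WCNF()
wcnf.append([-1, -2])                 # hard: goals 1 and 2 conflict
for goal in ([1], [2], [3]):          # soft: the planners' goals/preferences
    wcnf.append(goal, weight=1)

lbx = LBX(wcnf, use_cld=True, solver_name='g3')
for mcs in lbx.enumerate():           # soft-clause indices, e.g. [1] then [2]
    lbx.block(mcs)                    # block this MCS to find the next one
    print("give up goals:", mcs)
lbx.delete()
```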
15:30 -- 17:00
A Framework for Data-driven Explainability in Mathematical Optimization, Michael Hartisch
We propose a data-driven framework that explicitly incorporates explainability into mathematical optimization models. Rather than relying solely on provable optimality, our approach treats explainability, measured by similarity to historically accepted solutions, as a second, competing criterion, yielding a bicriteria optimization model. To demonstrate practical applicability, we focus on the shortest-path problem. Numerical experiments on both synthetic grids and a real-world road network show that only a small trade-off is required to significantly improve explainability, making our framework a viable option whenever users demand understandable, trustworthy solutions.
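As a rough sketch of the bicriteria model (our own toy with assumed data; not the paper's code), one can scalarize path cost against deviation from a historically accepted route and vary the trade-off weight:

```python
# A hedged toy, not the paper's implementation: blend edge cost with a
# penalty for leaving the historically accepted route, then vary alpha.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "d", 4), ("a", "c", 2),
                           ("c", "d", 2), ("b", "c", 1)])
historical = {("a", "b"), ("b", "d")}     # edges of a past accepted solution

def blended(alpha):
    # weight = alpha * cost + (1 - alpha) * deviation-from-history penalty
    def w(u, v, d):
        penalty = 0.0 if (u, v) in historical or (v, u) in historical else 1.0
        return alpha * d["weight"] + (1 - alpha) * penalty
    return w

for alpha in (1.0, 0.5):                  # pure optimality vs. blended
    print(alpha, nx.shortest_path(G, "a", "d", weight=blended(alpha)))
# alpha = 1.0 picks a cheapest path; alpha = 0.5 prefers the familiar a-b-d.
```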
Explanations for Hierarchical Neuro-Symbolic AI, Sushmita Paul, Jinqiang Yu, Jip J. Dekker, Maria Garcia de la Banda, Peter J. Stuckey, Alexey Ignatiev
The success of AI makes it critical to understand its behaviour, leading to the development of explainable AI (XAI). This imperative extends to Neuro-Symbolic (NeSy) systems, which merge neural perception with symbolic reasoning to reduce bias and enhance performance. Interestingly, explaining entire NeSy systems challenges both heuristic model-agnostic and formal model-based XAI methods. The former are known to suffer from explanation quality issues, while the latter lack scalability. We propose an efficient explanation strategy for hierarchical NeSy systems that can be used with any type of explainer method for the neural components. It first derives a succinct formal explanation for the symbolic component, which is then used to identify a minimal subset of neural information to be explained. This helps mitigate the quality issues of heuristic explainers by narrowing the focus on the neural parts. It also simplifies the computational effort of the formal explainers, thus increasing their scalability. As a result, our hierarchical strategy facilitates more succinct explanations and improves the performance of diverse XAI techniques. Experimental results on complex reasoning tasks show its ability to enhance the quality and efficiency of NeSy explanations.
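As a toy sketch of the first step (our own illustration with an assumed three-output symbolic rule; not the paper's system), a deletion-based minimal sufficient reason for the symbolic decision restricts which neural outputs need a heuristic explanation at all:

```python
# A hedged toy, not the paper's method: compute a subset-minimal set of
# neural outputs that suffices to fix the symbolic decision.
from itertools import product

def symbolic(z):                      # assumed symbolic component
    return z["a"] and (z["b"] or z["c"])

def sufficient(fixed, target):
    # 'fixed' suffices if every completion of the free outputs
    # yields the same symbolic decision.
    free = [k for k in ("a", "b", "c") if k not in fixed]
    return all(symbolic({**fixed, **dict(zip(free, bits))}) == target
               for bits in product([False, True], repeat=len(free)))

z = {"a": True, "b": True, "c": False}    # current neural predictions
target = symbolic(z)
reason = dict(z)
for k in list(reason):                    # deletion-based minimization
    trial = {kk: v for kk, v in reason.items() if kk != k}
    if sufficient(trial, target):
        reason = trial

print("explain only these neural outputs:", sorted(reason))  # ['a', 'b']
```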
Explaining Mismatches in Expected versus Perceived Behavior for Interacting Digital Systems, Mohammad Alkhiyami, Gianluca Martino, Görschwin Fey
Our work investigates how inconsistent behavior arises in interacting digital systems using a case study and proposes a methodology that enables these systems to explain inconsistencies when their perceived behavior deviates from the expected behavior. We introduce the conceptual framework through an example and suggest an algorithm that can be used to explain mismatches in interacting digital systems’ behavior. This work introduces a formal, model-based explanation mechanism for mismatches in digital systems. The algorithm produces concise, counterfactual, and causal explanations based on model-checking techniques.
(cancelled) Explainable Rational Synthesis in Multi-Agent Systems, Alireza Farhadi, Mohammad Izadi, Jafar Habibi, Tobias Meggendorfer
The synthesis of interaction protocols for multi-agent systems faces dual challenges: the double-exponential computational complexity of linear temporal logic rational synthesis, and the black-box nature of the resulting strategies, which hinders comprehension and trust. This paper introduces a novel, performant algorithm that addresses both issues. Our method improves performance by first solving a series of parity games via a suspect game construction to guarantee punishment for deviations, and then applying SAT-based bounded model checking to extract a minimal equilibrium path. Our implemented tool, CGES, demonstrates significant performance gains over state-of-the-art methods. Concurrently, to tackle the explainability challenge, we use a port automata-based connector framework for transparently modeling strategies and propose a novel algorithm for interactive, contrastive why-not questioning. Our work thus contributes a practical and understandable solution to rational synthesis.
Topics of interest include:
explainable constraint solving
minimal and optimal unsatisfiable subsets
step-wise explanations
explanations for optimality
explainable proofs
explanations with global constraints
formal explanations for machine learning
explainable planning
model reconciliation and feasibility restoration
interactive explanation systems
counterfactual reasoning in constraint solving
visual explanations
debugging and trace analysis with explanations
evaluation metrics for explainability
user-centered explanation interfaces
and other related topics.
The workshop will consist of a number of talks, including time for discussion to enable a true workshop-style atmosphere. A longer invited talk or panel may be organized as well.
Talks are solicited through an open call for papers and selected by the workshop organizers based on suitability to the workshop theme.
Talks will be selected based on extended abstracts of at most 2 pages (including references) in PDF, including a starting one-paragraph summary. For the extended abstract, authors may use, e.g., the CP or SAT submission templates. Presentations on already-published papers, works in progress, position papers and system descriptions/demos are all invited.
Authors can optionally add up to 10 pages of technical report as an appendix to the extended abstract. For already-published papers, a pointer to a published version of the work should be provided.
All accepted talks will have their title and one-paragraph summary published on this page. Authors will be able to choose whether they also want the extended abstract and the slides made available on this page. Talks are presented in person in Glasgow. All presenters and attendees are expected to register for the CP/SAT workshop day.
Submit your extended abstract via email to excos25@stes.fi with the subject title "ExCoS25 submission".
For any questions related to the workshop, please use excos25@stes.fi to reach the organizers.
Bart Bogaerts (KU Leuven, Belgium), https://bartbogaerts.eu/
Tias Guns (KU Leuven, Belgium), https://people.cs.kuleuven.be/~tias.guns/
Matti Järvisalo (University of Helsinki, Finland), https://www.cs.helsinki.fi/u/mjarvisa/
Jussi Rintanen (Aalto University, Finland), https://users.aalto.fi/~rintanj1/