Registration 8.00 a.m. – 9.15 a.m.
9:30 – 10:20
Synchronous Distributed Games
In synchronous distributed games, a team of players with partial information plays against an environment in synchronous rounds. The game arena is distributed over multiple locations: each location has a unique writer (either the environment or a player from the team) and may have multiple readers. A player can read only a subset of the locations, which determines their partial view of the game. The static distributed architecture specifies the writer and the readers of each location. The objective of the players is to keep the infinite sequence of nodes visited in the game arena within a specified regular set of winning sequences.
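To make the setting concrete, here is a minimal sketch (in Python, with hypothetical names not taken from the talk) of how a static distributed architecture and the induced partial views could be represented:

```python
from dataclasses import dataclass, field

ENV = "env"  # the environment is a distinguished writer

@dataclass
class Location:
    name: str
    writer: str                    # unique writer: ENV or a team player
    readers: set = field(default_factory=set)

@dataclass
class Architecture:
    locations: list

    def view(self, player):
        """The locations a player can read; this set determines the
        player's partial view of the play."""
        return {loc.name for loc in self.locations if player in loc.readers}

# Toy architecture: the environment writes x; players p1 and p2 write
# y and z; p1 reads x and y, p2 reads x and z (incomparable views).
arch = Architecture([
    Location("x", ENV, {"p1", "p2"}),
    Location("y", "p1", {"p1"}),
    Location("z", "p2", {"p2"}),
])
assert arch.view("p1") == {"x", "y"}
assert arch.view("p2") == {"x", "z"}
```

In this toy architecture the two players have incomparable views, which is exactly the kind of situation for which the decision problem below becomes undecidable.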
Deciding whether the team of players has a distributed winning strategy in this model is undecidable as soon as there are two players with incomparable partial information. It becomes decidable otherwise, and is then non-elementary complete. The talk will explain these concepts and some of these results.
The talk will be based on our chapter “Synchronous Distributed Games”, written with Paul Gastin and Nathalie Sznajder, to appear in the book “Games on Graphs” by Fijalkow et al.
Coffee Break ☕ 10:20-10:50
10:50 - 11:45
Short talks: Distributed Systems and Logic
10:50 - 11:03
Distributed Games for Two Processes
We investigate a new model of a distributed game played on a non-deterministic asynchronous transition system over two processes. The game is played between an environment and a distributed team of the two processes, where each process has only partial information about the ongoing play, namely complete information up to the last synchronization and only its own local evolution since that synchronization. The key algorithmic decision problem, for a given winning objective, is the existence of a distributed cooperative winning strategy for the team to meet that objective. We address this question for a variety of distributed winning objectives such as global safety, local reachability, global/simultaneous reachability, and local parity winning conditions.
We carry out a thorough analysis of these games and present natural fixed-point based algorithms for solving them, which not only provide near-optimal decision procedures but also construct optimal distributed winning strategies in terms of the required amount of distributed finite memory. Specifically, our analysis shows that the decision problems for global safety and local reachability objectives are NP-complete. We also establish that the decision problem for the global reachability objective is PSPACE-hard and provide an NEXPTIME algorithm for it. Finally, we analyze local parity objectives and provide a novel reduction to standard Streett games.
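As a point of reference for the flavour of such fixed-point algorithms, here is the textbook computation for plain full-information, turn-based safety games; it is not the distributed algorithm of the talk, and all names are mine:

```python
def safety_winning_region(vertices, edges, owner, safe):
    """Greatest fixed point W = safe ∩ CPre(W) of a turn-based safety game:
    owner[v] is "team" or "env", edges[v] lists the successors of v.
    Returns the vertices from which the team can keep the play in `safe`."""
    W = set(safe) & set(vertices)
    changed = True
    while changed:
        changed = False
        for v in list(W):
            if owner[v] == "team":
                ok = any(u in W for u in edges[v])   # team picks one good successor
            else:
                ok = all(u in W for u in edges[v])   # environment may pick any
            if not ok:
                W.discard(v)
                changed = True
    return W
```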
11:04 - 11:17
A Zielonka Theorem for Fair Languages
Asynchronous automata are a model of distributed finite state processes synchronising on shared actions. A celebrated result by Zielonka shows how a deterministic asynchronous automaton can be synthesised, starting from a DFA specification and a distribution of the alphabet into local alphabets for each process.
The translation is particularly complex and has been revisited several times.
In this work, we revisit this construction on a restricted class of “fair” specifications. A DFA represents a fair specification if, in every loop, all processes participate in at least one action — so, no process is “starved”. For fair specifications, we present a new construction to synthesize deterministic asynchronous automata. Our construction is conceptually simpler and results in an asynchronous automaton that is singly exponential in the number of states of the DFA and linear in the number of processes.
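The fairness condition has a direct algorithmic reading: a specification is fair iff, for every process p, the DFA restricted to letters in which p does not participate has no reachable cycle. A small sketch of this check, assuming a transition map delta and a map domain from letters to their participating processes (all identifiers are hypothetical, not from the work):

```python
def is_fair(delta, initial, domain, processes):
    """delta: dict (state, letter) -> state;  domain: letter -> set of
    processes taking part in that letter.  The specification is fair iff
    no reachable cycle starves some process."""
    # states reachable from the initial state
    reachable, stack = {initial}, [initial]
    while stack:
        q = stack.pop()
        for (p, _a), r in delta.items():
            if p == q and r not in reachable:
                reachable.add(r)
                stack.append(r)

    for proc in processes:
        # keep only transitions on letters that do not involve `proc`
        succ = {q: set() for q in reachable}
        for (q, a), r in delta.items():
            if q in reachable and proc not in domain[a]:
                succ[q].add(r)
        # cycle check: repeatedly delete vertices without outgoing edges;
        # anything left can reach a cycle that starves `proc`
        changed = True
        while changed:
            changed = False
            for q in [v for v, out in succ.items() if not out]:
                del succ[q]
                for out in succ.values():
                    out.discard(q)
                changed = True
        if succ:   # a loop in which `proc` never participates
            return False
    return True
```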
11:18 - 11:31
A Local-Time Semantics for Negotiations
Negotiations, introduced by Esparza et al., are a model for concurrent systems where computations involving a set of agents are described in terms of their interactions. In many situations, it is natural to impose timing constraints between interactions — for instance, to limit the time available to enter the PIN after inserting a card into an ATM. To model this, we introduce a real-time aspect to negotiations. In our model of local-timed negotiations, agents have local reference times that evolve independently. Inspired by the model of networks of timed automata, each agent is equipped with a set of local clocks. Similar to timed automata, the outcomes of a negotiation contain guards and resets over the local clocks. As a new feature, we allow some interactions to force the reference clocks of the participating agents to synchronize. This synchronization constraint allows us to model interesting scenarios. Surprisingly, it also gives the model unlimited computing power: the location reachability problem is undecidable for local-timed negotiations with a mixture of synchronized and unsynchronized interactions. Finally, I will discuss a few subclasses for which the location reachability problem becomes decidable.
This is joint work with Madhavan Mukund and B Srivathsan.
11:32 - 11:45
Realizability of Constraint LTL
Constraint linear-time temporal logic (CLTL) is an extension of LTL that is interpreted on sequences of valuations of variables over an infinite domain. The atomic formulas are interpreted as constraints on the valuations; they can constrain valuations over a range of positions along the sequence, with the range bounded by a parameter depending on the formula. The satisfiability and model checking problems for CLTL have been studied by Demri and D’Souza. We consider the realizability problem for CLTL. The set of variables is partitioned into two parts, each controlled by one player. Players take turns to choose valuations for their variables, generating a sequence of valuations. The winning condition is specified by a CLTL formula: the first player wins if the sequence of valuations satisfies the specified formula. We study the decidability of checking whether the first player has a winning strategy in the realizability game for a given CLTL formula. We prove that it is decidable when the domain satisfies the completion property, a property introduced by Balbiani and Condotta in the context of satisfiability. We prove that it is undecidable over $(\mathbb{Z},<,=)$, the domain of integers with order and equality. We also prove that over $(\mathbb{Z},<,=)$ and $(\mathbb{N},<,=)$, it is decidable if the atomic constraints in the formula can only constrain the current valuations of the variables belonging to the second player, with no such restriction on the variables belonging to the first player; we call such games single-sided.
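For a flavour of such specifications (the formula below is my own illustration, not taken from the talk): writing $\mathsf{X}$ for "next position", with $x$ owned by the first player and $y$ by the second, a single-sided winning condition over $(\mathbb{Z},<,=)$ could be

```latex
% atomic constraints mention y only at the current position,
% as the single-sided restriction requires
\varphi \;=\; \mathbf{G}\,\bigl(\, y < \mathsf{X}\,x \,\bigr)
```

which asks the first player to always place the next value of $x$ strictly above the current value of $y$.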
11:55 – 12:45
Invariants for One-Counter Automata with Disequality Tests
We study the reachability problem for one-counter automata in which transitions can carry disequality tests. A disequality test is a guard that prohibits a specified counter value. This reachability problem is known to be NP-hard and in PSPACE, and characterising its exact computational complexity was left as a challenging open question by Almagor, Cohen, Pérez, Shirmohammadi, and Worrell (2020). We reduce the complexity gap, placing the problem into the second level of the polynomial hierarchy, namely into the class coNP^NP. In the presence of both equality and disequality tests, our upper bound is at the third level, P^NP^NP.
To prove this result, we show that non-reachability can be witnessed by a pair of invariants (forward and backward). These invariants are almost inductive, and they aim to over-approximate only a "core" of the reachability set rather than the entire set. They are also leaky: it is possible to escape the set. Since the leaks can only occur in a controlled way, we complement the invariants with separate checks.
Joint work with Jérôme Leroux, Henry Sinclair-Banks, and Nicolas Waldburger.
Lunch 🍽️ 12:45 - 14:00
14:00–14:50
Robust Timed Synthesis
Solving games played on timed automata is a well-known problem and has led to tools and industrial case studies. In these games, the first player (Controller) chooses delays and actions, and the second player (Perturbator) resolves the non-determinism of actions. However, the model of timed automata relies on mathematical idealizations such as infinitely precise clocks and instantaneous synchronization of actions. To address this issue, we extend the theory of timed games in two directions. First, we study the synthesis of robust strategies for Controller, which should be tolerant to adversarially chosen clock imprecisions. Second, we address the case of a stochastic perturbation model where both the clock imprecisions and the non-determinism are resolved randomly. Finally, we present a notion of "repair", explaining how to recover robustness in non-robust systems.
14:55 - 15:36
Short talks: Timed Systems and Robustness
14:55 - 15:08
Model-checking Real-time Systems: Revisiting the Alternating Automaton Route
Alternating timed automata (ATA) are an extension of timed automata that is closed under complementation and hence amenable to logic-to-automata translations. Several timed logics, including Metric Temporal Logic (MTL), can be converted to equivalent 1-clock ATAs (1-ATAs). Satisfiability of an MTL formula therefore reduces to checking emptiness of a 1-ATA. Furthermore, algorithms for 1-ATA emptiness can be adapted to model-check timed automaton models against 1-ATA specifications. However, existing emptiness algorithms for 1-ATAs proceed by an extended region construction and are not suitable for implementation.
In this work, we improve the existing MTL-to-1-ATA construction and develop a zone-based emptiness algorithm for 1-ATAs.
15:09 - 15:22
Greybox Learning of Languages Recognizable by Event-Recording Automata
In this work, we revisit active learning of timed languages recognizable by event-recording automata (ERA). Our framework employs a method known as grey-box learning, which enables learning an ERA with the minimum number of states. This approach avoids learning the region automaton associated with the language, in contrast with existing methods. However, learning a minimum-state automaton requires solving an NP-hard optimization problem. To circumvent this, we loosen the minimality requirement and apply a heuristic that computes a candidate automaton in polynomial time using a greedy algorithm. In our experiments, this algorithm tends to keep the automaton small (often of minimum size). This is joint work with Sayan Mukherjee and Jean-François Raskin, published at ATVA 2024.
15:23 - 15:36
Partial Observation: Recent Computational Results
Partial observation is present in many applications. At the same time, the classic models of automata, Markov decision processes, and games with partial observation suffer from fundamental undecidability results. I will present subclasses of models that recover decidability of approximation while the exact problem remains undecidable, i.e., they lie at the boundary of decidability. I will also present results on the robustness of partial-observation models, giving insight into when these models should (not) be used in applications.
Coffee Break ☕ 15:36 - 16:06
16:06 - 16:56
Stochastic Window Mean-Payoff Games
Stochastic two-player games model systems with an environment that is both adversarial and stochastic. The adversarial part of the environment is modelled by a player (Player 2) who tries to prevent the system (Player 1) from achieving its objective. We consider finitary versions of the traditional mean-payoff objective, replacing the long-run average of the payoffs by a payoff average computed over a finite sliding window. Two variants have been considered: in one, the maximum window length is fixed and given, while in the other, it is not fixed but is required to be bounded. For both variants, we present complexity bounds and algorithmic solutions for computing strategies for Player 1 that ensure the objective is satisfied with positive probability, with probability 1, or with probability at least p, regardless of the strategy of Player 2. The solution crucially relies on a reduction to the special case of non-stochastic two-player games. We give a general characterisation of prefix-independent objectives for which this reduction holds. By this reduction, the memory requirements for both players in stochastic games are the same as in non-stochastic games.
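One common way to formalise the fixed-window variant (notation here is mine; the talk may differ in details): for a window length $\lambda \geq 1$ and edge payoffs $w$, a position $i$ of a play $v_0 v_1 v_2 \cdots$ is good if some window of length at most $\lambda$ starting at $i$ has non-negative average payoff, and the objective requires all positions from some point on to be good:

```latex
\mathrm{GoodWin}(i,\lambda) \;\equiv\;
  \exists\, l \in \{1,\dots,\lambda\} :\;
  \tfrac{1}{l}\sum_{k=i}^{i+l-1} w(v_k, v_{k+1}) \;\geq\; 0
\qquad
\mathrm{FWMP}(\lambda) \;=\;
  \bigl\{\, v_0 v_1 v_2 \cdots \;\bigm|\; \exists\, N\ \forall\, i \geq N :\ \mathrm{GoodWin}(i,\lambda) \,\bigr\}
```

The bounded variant then asks that such a $\lambda$ exists, rather than being fixed in advance.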
Joint work with Laurent Doyen and Pranshu Gaba.
17:00 - 17:40
Short talks: Games
17:00 - 17:13
Expectation in Stochastic Games with Prefix-independent Objectives
Stochastic two-player games model systems with an environment that is both adversarial and stochastic. In this talk, we look at the expected value of quantitative prefix-independent objectives in stochastic games. We show a generic reduction from the expectation problem to linearly many instances of almost-sure satisfaction of threshold Boolean objectives.
We demonstrate the applicability of the framework by computing the expected window mean-payoff measure in stochastic games. The window mean-payoff measure strengthens the classical mean-payoff measure by computing the mean-payoff over a window of bounded length that slides along an infinite path. We show that the decision problem of checking whether the expected value is at least a given threshold is in UP ∩ coUP. The result follows from guessing the expected values of the vertices, partitioning them into value classes, where each class consists of vertices with the same value, and proving that a unique short certificate for the expected values exists.
17:14 - 17:27
Ideas for Solving Games Faster
Games on graphs model decision making in the presence of an adversary. Although solving these games typically lies in NP and coNP, many instances remain hard to solve. I will present two ideas for solving these games faster.
17:28 - 17:41
Bi-Objective Lexicographic Optimization in Markov Decision Processes with Related Objectives
We consider lexicographic bi-objective problems on Markov Decision Processes (MDPs), where we optimize one objective while guaranteeing optimality of another. We propose a two-stage technique for solving such problems when the objectives are related (in a way that we formalize). We instantiate our technique for two natural pairs of objectives: minimizing the (conditional) expected number of steps to a target while guaranteeing the optimal probability of reaching it; and maximizing the (conditional) expected average reward while guaranteeing an optimal probability of staying safe (w.r.t. some safe set of states).
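As a reference point for the first of the two stages only (here, the optimal probability of reaching the target), the following is a standard value-iteration sketch over a finite MDP; it is my own simplification with hypothetical names, and it does not capture the second stage, where the (conditional) expected number of steps is optimised subject to that probability being preserved:

```python
def max_reach_prob(states, actions, T, target, iters=1000):
    """Value iteration for the optimal reachability probability.
    T[s][a] maps a successor state to its probability;
    actions[s] lists the actions available in s.  (Hypothetical interface.)"""
    p = {s: (1.0 if s in target else 0.0) for s in states}
    for _ in range(iters):
        for s in states:
            if s in target:
                continue
            p[s] = max(
                sum(prob * p[t] for t, prob in T[s][a].items())
                for a in actions[s]
            )
    return p
```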
For the first combination of objectives, which covers the classical frozen lake environment from reinforcement learning, we also report on experiments performed using a prototype implementation of our algorithm and compare it with what can be obtained from state-of-the-art probabilistic model checkers solving optimal reachability.