The workshop features a series of invited talks exploring the interplay between control theory, machine learning, and optimization. The schedule below outlines the program of the day, including technical presentations and the concluding round-table discussion.
9:15 - 9:30
Achieving transparency through control theory
9:30 - 10:00
System Identification and eXplainable AI: Old Foundations for New Frontiers
Donatello Materassi
As Artificial Intelligence increasingly influences decision-making not only in science and engineering but also at the societal level, affecting policies, healthcare, and economic systems, the need for interpretability and trust has become paramount. This demand has fueled the rapid rise of eXplainable AI (XAI), an umbrella term that encompasses a broad range of techniques designed to make the behavior and reasoning of complex machine learning models more understandable to humans. Many of the core ideas behind XAI, however, are not entirely new. Indeed, we can often find their roots in the modeling and identification traditions of control theory. In this talk, I will argue that XAI can be viewed through the lens of system identification, where the objective is to construct an interpretable model of a complex system, in this case a learning algorithm, based on observed input–output behavior. I will show how popular XAI methods such as LIME and SHAP can be reinterpreted as identification problems or as the extraction of sensitivity metrics that follow a modeling procedure. Furthermore, I will discuss how inverse optimal control provides a rigorous framework for interpreting learned decision policies and already provides a theoretical background to support XAI objectives. These connections suggest that control theory, and in particular system identification, provides a principled foundation for the next generation of explainable and trustworthy AI systems that will shape both technology and society.
10:00 - 10:30
Stephen Tu
Machine learning (ML) and reinforcement learning (RL) approaches to feedback policy design are often viewed as lying at the opposite end of the spectrum from classical control-theoretic methods. On the one hand, ML/RL offers remarkable flexibility and generality, but typically lacks rigorous guarantees. In contrast, classical control design provides strong assurances of stability, safety, and robustness, but is usually restricted to relatively simple, low-dimensional systems.
In this talk, we argue that latent representation learning provides a natural point for these approaches to meet in the middle. Specifically, we initiate a formal study of the use of low-dimensional latent representations of dynamical systems for verifiable control synthesis. We first provide dynamics-aware conjugacy conditions which formalize the notion of reconstruction error necessary for systems analysis. We then utilize our conjugacy conditions to transfer the stability and invariance guarantees of a latent certificate function (e.g., a Lyapunov or barrier function) for a latent space controller back to the original system. Our analysis reveals several important implications for learning latent spaces and dynamics, by highlighting the necessary geometric properties which need to be preserved by the latent space, in addition to providing concrete loss functions for dynamics reconstruction that are directly related to control design.
Coffee Break 10:30 - 11:00
Closing the loop: from theory to practice
11:00 - 11:30
Michael Muehlebach
Control theory has historically evolved hand in hand with technology: the Nyquist criterion emerged from the challenges of long-distance telecommunication; the state-space paradigm and Kalman filtering were born from the needs of aerospace and navigation. Yet, in recent years, a gap has opened between control theory and technological practice. The key enablers of today’s innovation – ubiquitous sensing, cheap actuation, the Internet, and GPU computing – are being harnessed primarily by neighboring disciplines such as machine learning and robotics, while control theory often remains at the periphery. I will argue that this disconnect is neither inevitable nor permanent. The conceptual tools of control – stability, feedback, robustness, and adaptation – remain central to understanding and shaping complex learning and decision-making systems. Emerging areas such as the pretraining dynamics of large language models, diffusion-based sampling, decision-dependent optimization, and momentum-driven optimization all offer fertile ground for control-theoretic thinking. To realize this potential, however, our community must reengage deeply with technology, embracing data, computation, and experimentation as integral parts of the control enterprise.
11:30 - 12:00
Sebastian Trimpe
Model-based reinforcement learning (MBRL) has recently gained renewed attention as a framework for data-efficient and safe learning in dynamical systems. This talk presents recent advances in dyna-style MBRL, where a dynamics model is learned alongside the policy. When equipped with reliable uncertainty quantification, the learned model enables (i) data-efficient training, (ii) safe exploration, and (iii) learning directly on physical hardware—all of which address major challenges for RL. Using MBRL as a representative example, I conclude by discussing broader perspectives on the role of control theory in modern AI.
Lunch Break 12:00 - 13:30
Control design meets deep learning: decisions with guarantees
13:30 - 14:00
Keeping learning under control: Youla’s legacy for stabilizing neural policies and convergent learned algorithms
Luca Furieri
It is often argued that model-based control methods pose structural constraints that limit the flexibility of learning-based policy design. This talk argues the opposite: classical architectures such as Youla-type parametrizations enable learning with stability and universal-approximation guarantees. I showcase this idea through a unified perspective spanning nonlinear control and optimisation.
For control, we characterise all and only the stabilising policies as a baseline controller plus learnable residual dynamics that preserve a range of stability properties by design – from global to local exponential stability in discrete- and continuous-time settings – further enabling GNN-based architectures that scale to large and unseen network topologies with network-level closed-loop guarantees. For optimisation, viewing update rules as feedback policies and classical solvers as baseline controllers, we show that all and only the linearly convergent algorithms admit a decomposition into a baseline solver plus an exponentially decaying neural correction. In both cases, rather than verifying guarantees a posteriori through case-by-case analysis or computationally expensive tests, we can learn exclusively within the space of policies that provably work.
14:00 - 14:30
Ali Mesbah
Making optimal decisions under uncertainty is a shared problem among distinct fields. While optimal control is commonly studied in the framework of dynamic programming, it is approached with differing perspectives on the Bellman optimality condition. In one perspective, the Bellman equation is used to derive a global optimality condition useful for iterative learning of control policies through interactions with an environment. Alternatively, the Bellman equation is also widely adopted to derive tractable optimization-based control policies that satisfy a local notion of optimality. By leveraging ideas from the two perspectives, we present a local-global paradigm for optimal control suited for learning interpretable local decision makers that approximately satisfy the global Bellman equation. The benefits and practical complications of local-global learning are discussed. These aspects are exemplified through case studies that give an overview of two distinct strategies for unifying reinforcement learning and model predictive control. We discuss the challenges and trade-offs in these local-global strategies, highlighting future research opportunities for safe and optimal decision-making under uncertainty.
14:30 - 15:00
Lorenzo Zino
Social systems are characterized by a complex range of phenomena, spanning from the emergence of cooperation and opinion formation to epidemic spreading and collective behavior. In this talk, I will present a journey through the study of such phenomena from a control-theoretic perspective. In particular, I will discuss the importance of deriving experimentally validated mathematical models, to be enriched with data-driven approaches, illustrating how experiments, modeling, and data can jointly help elucidate the mechanisms that govern social dynamics, inspiring the design of effective interventions for real-world applications.
Coffee Break 15:00 - 15:20
15:20 - 15:50
Giulia Giordano
Control theory, supported by mathematical models that are rigorously grounded in the first principles of physical and mechanistic laws, provides a unifying framework to describe, analyse, optimise and design complex dynamical systems, ensuring stability, robustness and performance with provable guarantees. In the life sciences, where systems are inherently nonlinear, uncertain and only partially observed, and where interventions have profound safety and ethical implications, the conceptual and mathematical foundations of control theory remain essential for modelling, elucidating mechanisms and designing effective interventions. They are indispensable for transforming observations into insight and insight into action. While data-driven and learning-based approaches can successfully complement model-based ones, genuine understanding and trustworthy decision-making still require the transparent structure, interpretability, explainability and guarantees that control theory provides. Through examples from biology, ecology, epidemiology and medicine, I will illustrate how first-principle-based modelling and control yield both theoretical insight and tangible impact.
15:50 - 16:20
Chung-Han Hsieh
Finance is often viewed as a domain in which classical control-theoretic assumptions — accurate models, stationary dynamics, and well-defined disturbances — are routinely violated. Consequently, modern practice increasingly relies on data-driven and learning-based decision-making pipelines. In this talk, we revisit financial systems through a control-theoretic lens and argue that these very violations make control theory more, not less, relevant. Drawing on problems such as online learning for adaptive window selection, drawdown control, frequency-dependent portfolio selection, and distributionally robust optimization, we illustrate how classical control concepts — feedback, robustness, and performance bounds — extend naturally to settings with limited model knowledge and adversarial uncertainty. We demonstrate that closed-loop guarantees provide the necessary “safety constraints” that bridge modern learning components and risk-sensitive financial decision-making processes. The key message is that control theory can discipline learning by governing how learning is embedded in closed-loop decision systems, through structure, feedback, and certifiable guarantees. This perspective suggests a future role for control theory as a foundational framework for shaping how learning-based systems reason about risk, uncertainty, and long-term performance.
Round-table discussion
16:20 - 16:50
Panelists: Ali Jadbabaie, Lucia Pallottino, Roland Tóth
Moderator: Florian Dörfler
16:50 - 17:00