Control and Optimization in the Probability Space

About the Workshop

Driven by recent advances and a surge of interest in computational optimal transport and distributionally robust optimization within the machine learning and operations research communities, this workshop brings together leading experts whose work lies at the crossroads of control theory and these emerging disciplines. This intersection holds the promise of transforming the way we design high-performance control systems capable of handling uncertainty in nonlinear, non-stationary, and stochastic environments, and it offers fertile ground for exciting new theoretical challenges and modern real-world applications.

The workshop is highly interactive: our aim is to create a vibrant, interdisciplinary space that appeals to both academic scholars and industry professionals, fostering a rich exchange of theoretical insights and practical applications.


🎓 Spotlight Presentation Opportunity Alert! 🌟
Showcase your latest research at our workshop by giving a spotlight presentation (3-5 minutes). Sign up by contacting the organizers!



Invited Speakers

Johan Karlsson (KTH, Royal Institute of Technology)

Mario di Bernardo (University of Naples Federico II)

Maxim Raginsky (University of Illinois at Urbana-Champaign)

Bartolomeo Stellato (Princeton University)

Bart Van Parys (CWI, the Netherlands)

Workshop Schedule

The schedule for this full-day event is organized around four interlinked tracks. Each talk is 25 minutes long, followed by 5 minutes of Q&A. In addition, breakout sessions are built into the schedule to promote interaction. These breakout sessions provide a space for students to give spotlight presentations (3-5 minutes) showcasing their late-breaking results (if you are a student interested in giving a spotlight presentation, please contact one of the organizers).

Track 1 - Distributionally Robust Control


09.00 - 09.10: Opening Remarks

09.10 - 09.40: Capture, Propagate, and Control Distributional Uncertainty (L. Aolaritei)

In this talk I will challenge the standard uncertainty models, i.e., robust (norm-bounded) and stochastic (one fixed distribution, e.g., Gaussian), and propose to model uncertainty in dynamical systems via Optimal Transport (OT) ambiguity sets. These constitute a very rich uncertainty model, which enjoys many desirable geometrical, statistical, and computational properties, and which: (1) naturally generalizes both robust and stochastic models, and (2) captures many additional real-world uncertainty phenomena (e.g., black swan events). I will then show that OT ambiguity sets are analytically tractable: they propagate easily and intuitively through linear and nonlinear (possibly noise-corrupted) transformations, and the result of the propagation is again an OT ambiguity set or can be tightly upper bounded by one. In the context of dynamical systems, this makes it possible to consider multiple sources of uncertainty (e.g., initial condition, additive noise, multiplicative noise) and to capture in closed form, via an OT ambiguity set, the resulting uncertainty in the state at any future time. The resulting OT ambiguity sets are also computationally tractable, and can be directly employed in various distributionally robust control formulations that optimally trade off safety and performance.
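
As a rough numerical illustration of the propagation idea (a sketch of ours, not material from the talk): a linear map x -> a*x rescales Wasserstein distances by |a|, so a Wasserstein ambiguity ball around a nominal distribution is mapped into a ball of radius scaled by |a| around the pushforward of its center. The 1-D check below uses SciPy's Wasserstein-1 distance; the map and sample sizes are arbitrary choices.

    # Minimal sketch (ours, not from the talk): in 1-D, a linear map x -> a*x
    # rescales the Wasserstein-1 distance by |a|, so an OT (Wasserstein) ambiguity
    # ball of radius eps is pushed into one of radius |a|*eps around the mapped center.
    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(0)
    a = 2.5                                   # arbitrary linear "dynamics"
    x = rng.normal(0.0, 1.0, size=5000)       # samples from a nominal distribution
    y = x + rng.normal(0.0, 0.1, size=5000)   # samples from a nearby distribution

    w_before = wasserstein_distance(x, y)         # W1 distance before the map
    w_after = wasserstein_distance(a * x, a * y)  # W1 distance after the map
    print(f"W1 before the map: {w_before:.4f}")
    print(f"W1 after the map : {w_after:.4f} (|a| * W1 before = {abs(a) * w_before:.4f})")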

09.45 - 10.15: Closed-loop guarantees for distributionally robust model predictive control (R. McAllister)

Advances in optimal transport and distributionally robust optimization (DRO) have inspired a range of distributionally robust model predictive control (DRMPC) formulations. These DRMPC formulations consider the worst-case probability distribution for the disturbance within some ambiguity set and implement the solution via the standard rolling-horizon framework of MPC, thereby generalizing stochastic MPC (SMPC) and robust MPC (RMPC) formulations. As with nominal, robust, and stochastic MPC formulations, the performance of a DRMPC formulation is ultimately defined by the dynamics of the resulting closed-loop system. In this talk, we introduce a framework to analyze these closed-loop systems through the lens of distributional uncertainty. This framework is based on straightforward extensions of definitions for long-term performance and input-to-state stability to the distributionally robust setting. We then establish sufficient conditions on the DRMPC formulation, in particular the terminal cost and constraint, such that the resulting closed-loop system satisfies these definitions of distributionally robust performance and stability. We also discuss the possible benefits of DRMPC and note conditions under which DRMPC does *not* provide a long-term performance benefit relative to SMPC (for linear systems with quadratic stage costs). We conclude with a few remarks on the computational demands of DRMPC problems and briefly introduce a tailored optimization algorithm to solve them.
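
Schematically, and in our own notation rather than the speaker's, the optimization problem solved at each time step of such a DRMPC scheme is a min-max over a finite-horizon input sequence (or policy) and an ambiguity set of disturbance distributions:

    \min_{u_0,\dots,u_{N-1}} \; \sup_{P \in \mathcal{P}} \;
      \mathbb{E}_{w_{0:N-1} \sim P}\Big[ \sum_{k=0}^{N-1} \ell(x_k, u_k) + V_f(x_N) \Big],
    \qquad x_{k+1} = f(x_k, u_k, w_k),

with only the first input applied before the problem is re-solved at the next measured state (rolling horizon); choosing the ambiguity set as a single distribution recovers SMPC, while choosing it as all distributions on a bounded support recovers RMPC.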

10.15 - 10.30: Breakout Session 1

10.30 - 11.00: COFFEE BREAK

Track 2 - Distributionally Robust Optimization


11.00 - 11.30: Disciplined Decisions in the Face of Uncertainty and Data (B. Van Parys)

Problem uncertainty typically limits how well decisions can be tailored to the problem at hand, yet it often cannot be avoided. The availability of large quantities of data in modern applications, however, presents an exciting opportunity to make better-informed decisions. Capitalizing on this opportunity requires developing novel tools at the intersection of operations research, stochastics, and data science. In a modern setting, the primitive describing uncertainty is often messy data rather than classical distributions. Simply quantifying the probability of an undesirable outcome becomes a challenging uncertainty quantification problem, which I approach with a distributional optimization lens. Distributionally robust optimization (DRO) has recently gained prominence as a paradigm for making data-driven decisions that are protected against adverse overfitting effects. We justify the effectiveness of this paradigm by pointing out that certain DRO formulations indeed enjoy optimal statistical properties. Furthermore, DRO formulations can also be tailored to efficiently protect decisions against overfitting even when working with messy, corrupted data. Finally, as such formulations are often computationally tractable, they provide a practical road to the development of tomorrow's trustworthy decision systems.
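
For readers new to the paradigm, a generic data-driven DRO problem (schematic notation of ours) hedges against all distributions close to the empirical one:

    \min_{x \in \mathcal{X}} \; \sup_{Q \,:\, d(Q, \widehat{P}_N) \le \varepsilon} \;
      \mathbb{E}_{\xi \sim Q}\big[\ell(x, \xi)\big],

where \widehat{P}_N is the empirical distribution of the N data points, d is a statistical discrepancy (for instance an optimal transport distance), and the radius \varepsilon tunes how much protection against overfitting is sought.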

11.35 - 12.05: Learning Decision-Focused Uncertainty Sets for Robust Optimization (B. Stellato)

We propose a data-driven technique to automatically learn the uncertainty sets in robust optimization based on the performance and constraint satisfaction guarantees of the optimal solutions. Our method reshapes the uncertainty sets by minimizing the expected performance across a family of problems while guaranteeing constraint satisfaction. We learn the uncertainty sets using a stochastic augmented Lagrangian method that relies on differentiating the solutions of the robust optimization problems with respect to the parameters of the uncertainty set. We show finite-sample probabilistic guarantees of constraint satisfaction using empirical process theory. Our approach is very flexible and can learn a wide variety of uncertainty sets while preserving tractability. Numerical experiments show that our method outperforms traditional approaches in robust and distributionally robust optimization in terms of out-of-sample performance and constraint satisfaction guarantees. 
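
Schematically, and only as our reading of the abstract, the learning task can be viewed as a bilevel problem in the parameters \theta of the uncertainty set \mathcal{U}(\theta), with the inner robust solutions differentiated with respect to \theta:

    \min_{\theta} \; \mathbb{E}_{p}\big[ f\big(x^\star(\theta, p)\big) \big]
    \quad \text{s.t.} \quad
    \mathbb{P}\big[ g\big(x^\star(\theta, p), \xi\big) \le 0 \big] \ge 1 - \eta,
    \qquad
    x^\star(\theta, p) \in \arg\min_{x} \, \max_{u \in \mathcal{U}(\theta)} h(x, u, p),

where p indexes the family of problem instances and \eta is the tolerated constraint-violation probability; the exact formulation used in the talk may differ.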

12.05 - 12.20: Breakout Session 2

12.20 - 13.50: LUNCH BREAK

Track 3 - Control in the Space of Densities


13.50 - 14.20: Control and Estimation of multi-agent systems via unbalanced multi-marginal optimal transport (J. Karlsson)

Optimal transport is a versatile framework that can be adapted to solve a variety of estimation and control problems for multi-agent or ensemble systems. Here we will illustrate how unbalanced and dynamic formulations can be used to address problems in traffic control, computational finance, and gene-regulatory systems. These applications give rise to problems that can be formulated as structured multi-marginal optimal transport problems, and we will show how these problems can be solved efficiently by exploiting their structure together with concepts from convex optimization inspired by the Sinkhorn iterations.
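
For reference, the code below is a minimal sketch of the standard two-marginal Sinkhorn iterations for entropic optimal transport; the talk concerns structured multi-marginal and unbalanced variants, which this baseline does not capture.

    # Minimal sketch: Sinkhorn iterations for entropic optimal transport between
    # two discrete marginals mu and nu with cost matrix C (two-marginal baseline only).
    import numpy as np

    def sinkhorn(mu, nu, C, reg=0.05, n_iter=500):
        """Return an approximate entropic-OT coupling between mu and nu."""
        K = np.exp(-C / reg)            # Gibbs kernel
        u = np.ones_like(mu)
        v = np.ones_like(nu)
        for _ in range(n_iter):         # alternating diagonal scalings
            u = mu / (K @ v)
            v = nu / (K.T @ u)
        return u[:, None] * K * v[None, :]

    # Example: two Gaussian-like histograms on a 1-D grid.
    x = np.linspace(0.0, 1.0, 50)
    mu = np.exp(-((x - 0.3) ** 2) / 0.01); mu /= mu.sum()
    nu = np.exp(-((x - 0.7) ** 2) / 0.02); nu /= nu.sum()
    C = (x[:, None] - x[None, :]) ** 2  # squared-distance cost
    P = sinkhorn(mu, nu, C)
    print("transport cost:", float((P * C).sum()))
    print("marginals recovered:", np.allclose(P.sum(axis=0), nu))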

14.25 - 14.55: Optimal control of the Liouville equation (M. Raginsky)

In this talk, I will revisit the work of Roger Brockett on optimal control of the Liouville equation for the probability density of the state of a smooth controlled dynamical system starting from a random initial state. This formulation of the problem makes contact with the theory of optimal transportation and with nonlinear controllability. I will discuss the issues of controllability, optimal control, and the relative capabilities of open-loop and closed-loop controls in the Wasserstein space of probability densities.
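
For reference, the Liouville (continuity) equation governing the probability density \rho(x,t) of the state of a controlled system \dot{x} = f(x,u), started from a random initial condition with density \rho_0, reads in its standard form:

    \frac{\partial \rho}{\partial t}(x,t) + \nabla \cdot \big( \rho(x,t)\, f(x, u(x,t)) \big) = 0,
    \qquad \rho(\cdot, 0) = \rho_0,

so that choosing the control u amounts to steering a density rather than a single trajectory, which is where the connection to optimal transportation arises.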

15.00 - 15.30: Forward and Inverse Problems with Entropy Regularization (G. Russo)

This talk focuses on certain optimal control problems with entropy regularization that are relevant to the design of autonomous agents directly from data and involve optimizing over probability density functions. The problems formalize the design of control policies guaranteeing tracking of a desired behavior and the simultaneous minimization of a task-specific cost. After setting up the control problem, we present results enabling the synthesis of policies from noisy data for (possibly nonlinear) systems. We show that the problem is convex in the space of densities and give an explicit expression for the optimal policy. We then leverage the structure of the policies to tackle the inverse problem and show that it is possible to effectively learn the control cost by observing actions sampled from the policy. The talk concludes with a result that, leveraging a suitable formulation of ambiguity sets, makes the policies distributionally robust. Results are illustrated via concrete examples.
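
To convey the flavor of the entropy regularization (in a schematic single-stage form of ours; the formulation in the talk may be richer), penalizing the deviation of the policy \pi from a reference behavior p by a Kullback-Leibler term yields an explicit Gibbs-type minimizer:

    \min_{\pi(\cdot \mid x)} \;
      \mathbb{E}_{u \sim \pi(\cdot \mid x)}\big[c(x,u)\big]
      + \mathrm{KL}\big(\pi(\cdot \mid x)\,\big\|\,p(\cdot \mid x)\big)
    \qquad \Longrightarrow \qquad
    \pi^\star(u \mid x) \;\propto\; p(u \mid x)\, e^{-c(x,u)},

which is convex in the density \pi(\cdot \mid x) and admits the closed-form solution shown on the right.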

15.30 - 16.00: COFFEE BREAK

Track 4 - Dynamics in the Space of Densities


16.00 - 16.30: Controlling large-scale multiagent systems: a continuification-based approach (M. di Bernardo)

In this talk, I will discuss a method to control large-scale multi-agent systems using a continuification-based approach that transforms the microscopic, agent-level description of the system dynamics into a macroscopic, continuum-level representation, which can be employed to synthesize a control action steering the agents toward a desired distribution. The continuum-level control action is then discretized at the agent level for practical implementation. To confirm the effectiveness of the proposed approach, I will complement theoretical derivations with numerical simulations and experiments, and discuss its performance, stability, and robustness in both simpler one-dimensional settings and higher-dimensional problems.
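
As a rough sketch of the general setup (an assumption of ours about the structure, not the speaker's exact model), continuification replaces N agent-level equations by a continuum density \rho(x,t) obeying a mass-conservation law with a macroscopic control field:

    \dot{x}_i = f(x_i) + u_i, \quad i = 1,\dots,N
    \qquad \longrightarrow \qquad
    \frac{\partial \rho}{\partial t} + \nabla \cdot \big( \rho\,(f + u) \big) = 0,

where the continuum-level input u is designed so that \rho converges to the desired agent distribution and is then discretized back into individual agent controls.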

16.35 - 17.05: Predicting Patient Treatment Outcomes using Diffusion Models and Optimal Transport (C. Bunne)

Cell populations are almost always heterogeneous in function and fate. To understand a patient’s responses to molecular drugs and design efficient treatments, it is vital to recover the underlying population dynamics and fate decisions of single cells upon perturbation. However, measuring features of single cells requires destroying them. As a result, a cell population can only be monitored through sequential snapshots, obtained by sampling a few particles that are sacrificed in exchange for measurements. To reconstruct individual cell-fate trajectories, as well as the overall dynamics, one needs to re-align these unpaired snapshots so as to infer, for each cell, what it might have become at the next step.

Optimal transport theory can provide such maps and reconstruct these incremental changes in cell states over time. This celebrated theory provides the mathematical link that unifies several contributions to modeling cellular dynamics and offers innovative connections to diffusion models, i.e., powerful architectures that have revolutionized the field of generative modeling.
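
Concretely, and only as a generic illustration rather than the specific models of the talk, consecutive unpaired snapshots can be represented by empirical measures \mu_t and \mu_{t+1}, and an optimal transport coupling supplies the re-alignment:

    \pi^\star \in \arg\min_{\pi \in \Pi(\mu_t,\, \mu_{t+1})} \int \|x - y\|^2 \, \mathrm{d}\pi(x,y),

where \Pi(\mu_t, \mu_{t+1}) is the set of couplings with the two snapshots as marginals, and the conditional \pi^\star(\cdot \mid x) can be read as a probabilistic guess of the state a cell at x evolves into.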

In this talk, I will present a series of machine learning models that robustly parameterize such dynamical systems, and show how to condition the learned diffusion models on external factors as well as align them with the nature of high-throughput biological data. Finally, featuring results we obtained when employing our models in an observational clinical cohort study, I will provide a perspective on how this enhances and shapes the future of treatment design and personalized therapies.

17.10 - 17.30: Breakout Session 3

Spotlight Presentations (presenter in bold)


Organizers

Liviu Aolaritei (UC Berkeley)

Giovanni Russo (University of Salerno)