Constrained optimization is fundamental to numerous safety-critical applications. While first-order iterative methods are commonly used, viewing these algorithms through their continuous-time limits—as differential equations—can yield valuable insights into stability and convergence. Among existing approaches, feedback linearization, a classical tool from nonlinear control, has recently shown promise for nonconvex constrained optimization, yet its theoretical foundations remain underexplored. In this talk, we develop a control-theoretic framework for constrained optimization based on feedback linearization. In the first-order setting, we establish global convergence rates to first-order KKT points and reveal a close connection to Sequential Quadratic Programming (SQP). Building on this connection, we extend the feedback linearization framework to the zeroth-order regime. Since zeroth-order methods rely on noisy, sample-based gradient estimates, ensuring constraint satisfaction is particularly challenging. We show that feedback linearization enables a family of zeroth-order algorithms that provably maintain feasibility despite noisy gradient information.
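As a minimal illustration of the idea (not the algorithms from the talk), the following sketch applies a feedback-linearization-style flow to a hypothetical toy instance, min ||x||^2 subject to x1 + x2 = 1. The correction term renders the constraint dynamics linear, d/dt h = -alpha*h, while the projected gradient drives descent along the constraint's tangent space; the objective, constraint, and gains are all invented for illustration.

```python
import numpy as np

# Toy feedback-linearization flow for  min f(x)  s.t.  h(x) = 0,
# with f(x) = ||x||^2 and h(x) = x1 + x2 - 1 (hypothetical instance).
f_grad = lambda x: 2 * x                      # gradient of f
h = lambda x: np.array([x[0] + x[1] - 1.0])   # equality constraint
J = lambda x: np.array([[1.0, 1.0]])          # constraint Jacobian

alpha, dt = 5.0, 1e-3
x = np.array([2.0, 0.5])                      # infeasible start, h(x) = 1.5
for _ in range(20000):
    Jx = J(x)
    Jpinv = Jx.T @ np.linalg.inv(Jx @ Jx.T)   # right pseudo-inverse of J
    P = np.eye(2) - Jpinv @ Jx                # projector onto the tangent space
    # feedback term linearizes the constraint dynamics: d/dt h = -alpha * h
    x = x + dt * (-P @ f_grad(x) - Jpinv @ (alpha * h(x)))
# x approaches the KKT point (0.5, 0.5) with h(x) -> 0
```

The split into a tangential descent direction and a normal stabilizing direction is also what makes the connection to SQP steps visible in this simple setting.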
My talk rethinks the foundations of machine learning by shifting from a largely static, statistical view of pattern recognition to a dynamical perspective: learning is an evolving process unfolding in time through continuous interaction with a surrounding environment. I will model core learning procedures, such as optimization, sampling, and robust training, as trajectories of dynamical systems, where notions such as reward, risk, resources, safety, and information gain can be treated as system-level design objectives and constraints. I will illustrate how this viewpoint yields both conceptual clarity and concrete algorithmic benefits. We will derive minimax-optimal rates for constrained optimization even when projections are replaced with surrogates that are computationally much cheaper to evaluate. The same dynamical template extends to modern generative modeling, yielding efficient sampling under nonconvex equality and inequality constraints via "landing"-type dynamics, and to robustness, where we frame adversarial training against label poisoning as a constrained optimization problem. My talk will further highlight how fundamental research can unlock new application domains, from flow-based optimization in transportation networks, to energy-efficient passive soaring flight, to dynamical electromagnetic manipulation for medical microrobotics.
Recent years have witnessed a renewed interest in viewing optimization algorithms as feedback systems. This talk revolves around the benefits of this system-theoretic viewpoint. Our goal is to survey the relevance of recently emerged state-of-the-art tools from robust control for the analysis of optimization algorithms and for their synthesis via convex semidefinite programs. The emphasis is on optimization algorithms for separable cost functions over networks, which allows us to capture, for example, proximal point algorithms in which gradient information becomes available only after some delay. Based on a novel robust version of the celebrated internal model principle, we establish structural properties that algorithms must satisfy for convergence, and we reveal how these insights open the way to synthesizing algorithms for separable optimization with guaranteed convergence rates.
The talk is based on joint work with Christian Ebenbauer, Tobias Holicki, Dennis Gramlich, Jared Miller, Fabian Jakob and Andrea Iannelli.
Despite their groundbreaking performance, autonomous agents can fail catastrophically when deployment conditions differ from training. Achieving robustness to such distribution shifts remains a fundamental challenge in control and learning. Motivated by this challenge, this talk presents a control-theoretic framework inspired by the free energy principle, a unifying account across machine learning, information theory, robotics, and neuroscience. I introduce the Distributionally Robust Free Energy principle (DR-FREE), which formulates a nonlinear distributionally robust optimal control problem in the space of densities. The framework minimizes the variational free energy across an ambiguity set centered around a nominal, trained model, providing robustness guarantees against training-environment mismatches. Remarkably, despite the apparently abstract setting of the control formulation, we prove it admits a closed-form solution and provide computational methods to provably obtain optimal policies. Throughout the talk, I will illustrate these ideas with concrete learning and control examples. Across all benchmarks, DR-FREE enables the agents to complete the task even when, in contrast, state-of-the-art approaches fail.
Feedback optimization refers to a class of methods that steer a control system to a steady state that solves an optimization problem. Despite tremendous progress on the topic, an important problem remains open: enforcing state constraints at all times. The difficulty in addressing it lies in mediating between safety enforcement and closed-loop stability, while ensuring the equivalence between closed-loop equilibria and the optimization problem's critical points. In this talk, we will present feedback optimization methods that enforce state constraints at all times by employing high-order control barrier functions. We provide several results on the proposed controller dynamics, including well-posedness, safety guarantees, equivalence between equilibria and critical points, and local and global asymptotic stability of optima. Numerical studies will illustrate our results.
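To make the barrier-function idea concrete, here is a hedged sketch on a hypothetical instance (not the talk's high-order construction): a gradient flow toward a goal outside the safe set {x : h(x) = 1 - ||x||^2 >= 0} is filtered through the closed-form solution of a one-constraint CBF quadratic program, which minimally modifies the nominal direction so that h stays nonnegative along the trajectory.

```python
import numpy as np

# Safe set: unit ball, h(x) = 1 - ||x||^2 >= 0 (hypothetical instance).
# Nominal flow: gradient descent on ||x - goal||^2 with goal outside the set.
# Closed-form CBF filter for one constraint:
#   u = u_nom + max(0, -(grad_h . u_nom + alpha*h)) * grad_h / ||grad_h||^2
goal = np.array([2.0, 0.0])
alpha, dt = 5.0, 1e-3
x = np.array([0.0, 0.5])
for _ in range(20000):
    u_nom = -2.0 * (x - goal)          # nominal descent direction
    h = 1.0 - x @ x                    # barrier value
    grad_h = -2.0 * x
    slack = grad_h @ u_nom + alpha * h # CBF condition residual
    denom = grad_h @ grad_h
    lam = max(0.0, -slack) / denom if denom > 1e-12 else 0.0
    x = x + dt * (u_nom + lam * grad_h)
# x settles at the constrained optimum (1, 0) on the boundary of the safe set
```

The equilibrium (1, 0) is exactly a critical point of the constrained problem, illustrating the equilibria-vs-critical-points equivalence the abstract refers to, here in the simplest first-order case.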
Optimization-based control laws — arising in model predictive control, feedback optimization, and control Lyapunov and barrier function frameworks as prime examples — give rise to closed-loop dynamics that are naturally modeled as interconnections between continuous-time physical systems and discrete-time algorithmic iterations. In practice, these interconnections are implemented via sampling and zero-order holds, and rely on iterative solvers that are often terminated after a finite number of steps due to computational constraints. This implementation reality departs fundamentally from the assumptions underlying most existing analyses of optimization-based controllers, which typically presume exact or infinitely fast optimization. Motivated by this gap, this talk studies the stability of interconnected continuous-time (CT) systems (modeling the physical plant) and discrete-time (DT) systems (modeling iterative optimization algorithms). We first introduce a continuous-time reduced model that captures the idealized closed-loop behavior induced by exact optimization. We then propose a two-time-scale CT–DT model that explicitly accounts for sampling and a finite number of algorithmic iterations. Under a contractivity assumption on the reduced model, our main result establishes the existence of a sampling-time threshold below which the resulting CT–DT interconnection is exponentially stable. This threshold is explicitly parameterized by the number of solver iterations, including the case of a single iteration. The analysis yields explicit stability bounds and enables a direct comparison with classical small-gain–type conditions. The theoretical results are illustrated through examples of single-iteration model predictive control.
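A stripped-down sketch of the CT–DT interconnection (all numbers hypothetical, and much simpler than the talk's setting): an unstable scalar plant is held under a zero-order hold while, at each sampling instant, the controller performs a single gradient iteration on a quadratic surrogate cost whose exact minimizer would be the stabilizing feedback u = -k x.

```python
import numpy as np

a, b = 1.0, 1.0        # unstable plant  x' = a*x + b*u
k, gamma = 3.0, 0.5    # ideal optimizer u* = -k*x ; solver step size
T = 0.01               # sampling period (chosen below the stability threshold)
x, u = 1.0, 0.0
for _ in range(5000):
    # one solver iteration on J(u; x) = 0.5*(u + k*x)^2 per sample
    u = u - gamma * (u + k * x)
    # hold u constant over one period; exact zero-order-hold discretization
    x = np.exp(a * T) * x + (np.exp(a * T) - 1.0) / a * b * u
# the CT-DT interconnection is exponentially stable: x, u -> 0
```

Increasing T or shrinking gamma eventually destabilizes this loop, which is the sampling-time-threshold phenomenon the result above parameterizes by the number of solver iterations.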
Solving optimization problems in dynamic environments is a pervasive objective in engineering applications, encompassing problems such as dynamic model training in machine learning, target tracking in image processing, system optimization in industrial control, and portfolio optimization in finance. The need for online solutions that can adapt to dynamic performance metrics and system constraints has sparked the development of the flourishing research area known as online optimization. This presentation introduces a novel approach to designing online optimization algorithms by leveraging concepts from the internal model principle of control theory. By reframing optimization design objectives as output regulation problems, we will establish a design methodology that provides a systematic framework for conceptualizing new optimization algorithms. This approach is poised to revolutionize traditional optimization-driven designs, offering a fresh perspective on tackling time-varying optimization challenges. The presentation will cover illustrative applications and aims to provide an overview of open questions and challenges in the area.
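As a toy illustration of the internal-model idea (my own construction, not the talk's design methodology): when tracking the moving minimizer of f(x, t) = 0.5*(x - r(t))^2 with a ramp reference r(t) = v*t, plain online gradient descent settles at a constant lag, whereas augmenting the update with an integrator, the internal model of a ramp, regulates the tracking error to zero. All gains and the problem instance are hypothetical.

```python
# Ramp-tracking comparison: plain online gradient descent vs. an
# internal-model-augmented update (integrator state z).
v, T, gamma, beta = 1.0, 0.1, 0.5, 0.1
x_gd, x_im, z = 0.0, 0.0, 0.0
for k in range(500):
    r = v * k * T                                # moving minimizer r(t) = v*t
    x_gd = x_gd - gamma * (x_gd - r)             # plain online gradient step
    # gradient step plus integrator: z accumulates the tracking error
    x_im, z = x_im - gamma * (x_im - r) + z, z - beta * (x_im - r)

err_gd = x_gd - v * 500 * T   # settles at the constant lag -v*T/gamma
err_im = x_im - v * 500 * T   # driven to ~0 by the internal model
```

This mirrors classical output regulation: the disturbance class (here, ramps) dictates the dynamics the algorithm must embed, which is exactly the reframing of optimization design as an output regulation problem.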
We consider broad classes of distributed optimization algorithms based on saddle point formulations. We show that, despite the nonlinearity and non-smoothness of the dynamics, their omega-limit set is comprised of trajectories that solve only linear ODEs, which allows us to derive necessary and sufficient conditions for convergence. We also derive optimal control interpretations of such algorithms, which facilitate analysis and design.
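For readers unfamiliar with the saddle-point template, here is a minimal sketch (hypothetical instance, forward-Euler discretization): primal-dual gradient dynamics on the Lagrangian of min 0.5*||x||^2 subject to a.x = b, descending in the primal variable and ascending in the multiplier.

```python
import numpy as np

# Saddle-point (primal-dual) dynamics for  min 0.5*||x||^2  s.t.  a.x = b:
#   x' = -grad_x L = -x - a*lam,   lam' = +grad_lam L = a.x - b
a, b = np.array([1.0, 1.0]), 1.0
x, lam = np.zeros(2), 0.0
dt = 0.01
for _ in range(20000):
    x, lam = x + dt * (-x - a * lam), lam + dt * (a @ x - b)
# converges to the KKT pair x* = (0.5, 0.5), lam* = -0.5
```

Near the equilibrium these dynamics are exactly linear, a small-scale glimpse of the omega-limit-set structure the result above describes for the general nonsmooth case.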
Identifying the worst-case behaviour of an optimization method is a question that can itself be cast as an optimization problem. This is the main idea behind performance estimation, initially proposed by Yoel Drori and Marc Teboulle in 2014. In this framework, one seeks to compute the exact convergence rate of a given black-box optimization algorithm over a given class of problem instances. In many cases, including most fixed-step first-order algorithms, this computation can be reformulated into a finite-dimensional, tractable semidefinite program using necessary and sufficient interpolation conditions to describe functions involved in the problem. Solving this program provides an exact, unimprovable convergence rate, a mathematical proof guaranteeing this rate and a problem instance where this worst-case behaviour is achieved. While this computer-assisted procedure is based on numerical computations, many of the obtained results can later be converted into standard mathematical statements with analytical rates and independently checkable proofs. The knowledge of the exact worst-case behaviour of algorithms is useful to compare efficiency across methods, tune algorithmic parameters (such as step-size) and, ultimately, design new methods with improved behaviour, either using insights gained from the performance estimation procedure or with some automated method design techniques. In this talk we will survey a few of the main achievements in the area of performance estimation, showcase some recent results, and discuss some open questions.
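The full performance estimation framework requires a semidefinite program over interpolation conditions; as a hedged toy analogue only, the snippet below computes the exact worst-case per-iteration contraction of gradient descent over the restricted class of mu-strongly-convex, L-smooth quadratics, where the worst case reduces to a one-dimensional maximization over Hessian eigenvalues, and then tunes the step size against it.

```python
import numpy as np

# For quadratics with Hessian spectrum in [mu, L], gradient descent with
# step t contracts ||x_k - x*|| by exactly max_{lam in [mu, L]} |1 - t*lam|.
mu, L = 1.0, 10.0
lams = np.linspace(mu, L, 10001)

def worst_rate(t):
    return np.max(np.abs(1.0 - t * lams))

# Tuning the step against the worst case recovers the classical optimum
# t* = 2/(mu + L), with rate (L - mu)/(L + mu).
ts = np.linspace(0.01, 0.19, 1801)
rates = [worst_rate(t) for t in ts]
t_best = ts[int(np.argmin(rates))]
rate_best = min(rates)
```

This captures the workflow in miniature: compute an exact worst case, then use it to compare methods and tune parameters; the actual performance estimation machinery extends this to general smooth (strongly) convex functions via SDP reformulations.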
We introduce a principled learning to optimize (L2O) framework for solving fixed-point problems involving general nonexpansive mappings. Our idea is to deliberately inject summable perturbations into a standard Krasnosel'skii–Mann iteration to improve its average-case performance over a specific distribution of problems while retaining its convergence guarantees. Under a metric sub-regularity assumption, we prove that the proposed parametrization includes only iterations that locally achieve linear convergence—up to a vanishing bias term—and that it encompasses all iterations that do so at a sufficiently fast rate. We then demonstrate how our framework can be used to augment several widely-used operator splitting methods to accelerate the solution of structured monotone inclusion problems, and validate our approach on a best approximation problem using an L2O-augmented Douglas–Rachford splitting algorithm.
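A minimal sketch of the underlying mechanism (the map, sets, and perturbation schedule are hypothetical, and the L2O machinery that learns the perturbations is omitted): a Krasnosel'skii–Mann iteration on a nonexpansive operator, here an alternating-projection composition, with injected summable perturbations, which preserve convergence to a fixed point.

```python
import numpy as np

# Nonexpansive map T: project onto {x : x[0] >= 0.2}, then onto the unit
# ball; its fixed points are the (nonempty) intersection of the two sets.
def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

def proj_halfspace(x):
    y = x.copy()
    y[0] = max(y[0], 0.2)
    return y

T = lambda x: proj_ball(proj_halfspace(x))

rng = np.random.default_rng(0)
x = np.array([-2.0, 2.0])
for k in range(2000):
    eps = rng.standard_normal(2) / (k + 1) ** 2   # summable perturbation
    x = 0.5 * x + 0.5 * T(x) + 0.001 * eps        # KM step, alpha = 1/2
# x approaches a fixed point of T, i.e. a point of the intersection
```

In the L2O framework the perturbations are not random but learned to improve average-case performance over a problem distribution; summability is what lets the classical convergence guarantee survive.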
Optimization problems are fundamental in science and engineering, with applications spanning control theory, signal processing, and game theory. An emerging approach for solving (possibly time-varying) optimization problems is to design dynamical systems that converge to equilibria that are also optimal solutions. This naturally raises systems-theoretic questions on stability, convergence rates, and robustness to noise. In specific and widely studied settings, contraction theory provides a unifying tool to address these questions. Contracting dynamics are indeed robustly stable, computationally friendly, and enjoy several additional desirable properties.
In this talk, we explore continuous-time contracting dynamical systems as a systematic tool for solving convex optimization problems with a unique minimizer in both static and time-varying settings. First, we translate canonical static optimization problems into continuous-time dynamical systems and derive strong and weak contractivity conditions for such systems. Then, we characterize the convergence behavior of the resulting dynamics. Specifically, we show that convergence is exponential for strongly convex objectives and linear-exponential for convex objectives. We demonstrate the practical effectiveness of our results via several applications.
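The simplest instance of the framework (a hedged sketch with a hypothetical quadratic objective, not the talk's general constructions): the gradient flow of a strongly convex function is contracting with rate lambda_min of the Hessian, so the distance between any two trajectories, and hence to the unique minimizer, shrinks exponentially.

```python
import numpy as np

# Gradient flow x' = -grad f(x) for f(x) = 0.5 * x.Q.x, Q > 0 (hypothetical).
# Contraction: ||xa(t) - xb(t)|| <= exp(-lambda_min(Q) * t) * ||xa(0) - xb(0)||.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
grad = lambda x: Q @ x
dt = 0.01
xa, xb = np.array([3.0, -1.0]), np.array([-2.0, 4.0])  # two initial conditions
d0 = np.linalg.norm(xa - xb)
for _ in range(1000):                                  # integrate to t = 10
    xa = xa - dt * grad(xa)
    xb = xb - dt * grad(xb)
d1 = np.linalg.norm(xa - xb)
```

The exponential regime here corresponds to the strongly convex case; for merely convex objectives the abstract's linear-exponential behavior takes over.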
Evolutionary dynamics, when coupled to a population game, is a classical area in which systems theory has shed light on how populations of agents reach Nash Equilibria (NE) through simple learning and imitation rules. We first recall classical results in this area dictating that, for monotone population games, most well-known evolutionary dynamics (e.g., replicator, best response, Smith, projection, etc.) are guaranteed to be asymptotically stable at the NE. We then cover recent advances and challenges pertaining to dynamic population games, that is, population games in which agents additionally have individual dynamics and optimize infinite-horizon rather than static payoffs. It is shown that monotonicity generically does not hold in these games, posing challenges in adopting classical techniques. We then demonstrate how higher-order evolutionary dynamics can effectively overcome some of these challenges.
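To ground the classical half of the story, here is a hedged sketch on a hypothetical two-route congestion game (a monotone population game with invented latencies c1(p) = 2p and c2(q) = 3q): replicator dynamics drive the population share to the equilibrium where the two latencies are equal.

```python
# Replicator dynamics in a two-route congestion game (hypothetical instance).
# p = share of the population on route 1; payoffs are negative latencies.
dt, p = 0.01, 0.1
for _ in range(5000):
    F1, F2 = -2.0 * p, -3.0 * (1.0 - p)         # strategy payoffs
    p = p + dt * p * (1.0 - p) * (F1 - F2)      # replicator update
# p -> 0.6, where 2p = 3(1 - p): latencies equalize at the equilibrium
```

For dynamic population games, where monotonicity generically fails, such first-order dynamics can stall or cycle, which is what motivates the higher-order evolutionary dynamics discussed in the talk.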