We will present a framework to analyze general first-order optimization algorithms for time-varying convex optimization using robust control tools. We assume that the temporal variability of the objective is caused by a vector of time-varying parameters, which can be measured at the time of decision but whose future values are unknown. We consider the case of strongly convex objective functions with Lipschitz continuous gradients and address the class of running algorithms, where only one iteration is performed per timestep. We model these algorithms as discrete-time linear parameter-varying (LPV) systems in feedback with a time-varying gradient. We leverage the approach of analyzing algorithms as uncertain control interconnections with integral quadratic constraints (IQCs) and generalize that framework to the time-varying case. We propose novel IQCs that capture the behavior of time-varying nonlinearities and leverage techniques from LPV control to establish upper bounds on the tracking error. These bounds can be computed offline by solving a semidefinite program and can be interpreted as input-to-state stability with respect to a disturbance signal related to the problem's temporal variability. An important benefit of our analysis framework is the ability to capture how the convergence rates and robustness of general first-order algorithms depend on specific properties of the problem and on the parameters' rate of variation, yielding, for example, new insights on the role of acceleration in sequential decision-making problems.
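As a minimal sketch of the running-algorithm setting (one gradient iteration per timestep on an objective whose minimizer drifts over time), the Python snippet below tracks a hypothetical time-varying quadratic; it illustrates the tracking error that the analysis bounds, not the IQC machinery itself, and all problem data and step sizes are assumptions chosen for illustration.

```python
import numpy as np

# Running gradient descent on a time-varying strongly convex quadratic
#   f_t(x) = 0.5 * (x - c_t)^T Q (x - c_t),  with m*I <= Q <= L*I,
# whose minimizer c_t drifts over time. Only one gradient step is taken per timestep,
# so the iterate tracks (rather than reaches) the moving optimum.
Q = np.diag([1.0, 10.0])                 # Hessian with m = 1, L = 10 (hypothetical)
alpha = 1.0 / 10.0                       # step size 1/L
T = 200
rng = np.random.default_rng(0)

x = np.zeros(2)
c = np.zeros(2)
errors = []
for t in range(T):
    c = c + 0.05 * rng.standard_normal(2)     # drifting minimizer (temporal variability)
    x = x - alpha * (Q @ (x - c))             # one gradient iteration per timestep
    errors.append(np.linalg.norm(x - c))      # tracking error after the update

print(f"mean tracking error: {np.mean(errors):.3f}")
```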
In this talk, I will show how the design and analysis of control, optimization, and learning algorithms for complex tasks can benefit from the application of system theory tools. Specifically, I will present a systematic methodological framework for designing and analyzing distributed algorithms for optimization and games over networks. The key idea of this approach is to combine optimization-oriented methods with consensus-oriented strategies through the lens of timescale separation. Additionally, I will address optimal control scenarios where certain elements (such as system dynamics or cost function parameters) are partially or entirely unknown. In this context, I will focus on on-policy closed-loop architectures that simultaneously optimize the control policy, learn the unknown components, and gather data from the actuated plant.
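As a loose illustration of combining optimization-oriented updates with consensus-oriented averaging (a hypothetical toy example, not the framework of the talk), the sketch below runs distributed gradient descent over a ring of agents, where each agent mixes its estimate with its neighbors' and takes a local gradient step.

```python
import numpy as np

# Distributed minimization of sum_i 0.5*(x - b_i)^2 over a ring of N agents:
# each agent averages with its neighbors (consensus) and takes a local gradient step.
N = 5
b = np.arange(N, dtype=float)            # local data; the optimum is mean(b)
x = np.zeros(N)                          # one scalar estimate per agent

# Doubly stochastic mixing matrix for a ring graph (self-loop plus two neighbors).
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

alpha = 0.1                              # (slow) gradient step vs. (fast) consensus mixing
for _ in range(300):
    x = W @ x - alpha * (x - b)          # consensus averaging followed by local gradient step

print("agent estimates:", x, " centralized optimum:", b.mean())
```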
Optimization algorithms underpin decision-making in engineering, machine learning, and control, where convergence guarantees are crucial for reliability and efficiency. Classical first-order methods provide worst-case guarantees, while Learning to Optimize (L2O) leverages neural networks to discover novel update rules. However, L2O methods often lack formal stability and convergence guarantees, limiting their applicability in high-stakes domains. In this talk, we present a unifying framework that characterizes all optimization algorithms ensuring both ℓ2 convergence for smooth, non-convex functions and linear convergence to global optima for convex functions satisfying the Polyak-Łojasiewicz (PL) inequality. Our approach employs nonlinear system theory to design update rules compatible with automatic differentiation, enabling learned optimizers that retain theoretical guarantees. By integrating a legacy linearly convergent algorithm with a stable learnable component, we enhance the average performance of classical methods (including Nesterov's accelerated method and the triple momentum method) across diverse problem landscapes while preserving worst-case convergence guarantees. These results lay the foundation for learned optimizers that are both empirically effective and theoretically sound, bridging the gap between data-driven adaptation and formal convergence analysis.
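The sketch below gives a schematic of the kind of hybrid update described, assuming a hypothetical quadratic test objective and using a small hand-coded correction term as a stand-in for the trained, stable learnable component; it is not the actual learned optimizer of the talk.

```python
import numpy as np

A = np.diag([1.0, 10.0])                 # ill-conditioned quadratic test objective (hypothetical)
L_smooth, mu = 10.0, 1.0                 # smoothness and strong convexity constants

def grad(x):
    # Gradient of f(x) = 0.5 * x^T A x.
    return A @ x

def nesterov_step(x, x_prev):
    # Legacy linearly convergent base algorithm (Nesterov's method for strongly convex f).
    beta = (np.sqrt(L_smooth) - np.sqrt(mu)) / (np.sqrt(L_smooth) + np.sqrt(mu))
    y = x + beta * (x - x_prev)
    return y - (1.0 / L_smooth) * grad(y)

def learned_correction(g):
    # Placeholder for the stable learnable component (a trained network in practice);
    # kept small and bounded so the base algorithm's behavior dominates.
    return -0.01 * np.tanh(g)

x_prev = x = np.array([5.0, 5.0])
for _ in range(100):
    x_new = nesterov_step(x, x_prev) + learned_correction(grad(x))
    x_prev, x = x, x_new

print("final objective value:", 0.5 * x @ A @ x)
```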
In this talk, we will investigate how optimization algorithms can be interpreted and analyzed as Lur’e systems, so that their convergence properties translate into absolute stability properties. As a case study, we will consider the Alternating Direction Method of Multipliers (ADMM) applied to constraint-coupled optimization problems. Adopting a system-theoretic perspective, we will employ tools from Lyapunov theory and passivity to demonstrate the global asymptotic stability of the origin for the closed-loop system. Furthermore, we will extend the presented approach to distributed systems, enabling the systematic design of a decentralized optimization algorithm for the same class of problems.
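A minimal ADMM sketch on a hypothetical two-block toy problem (not the constraint-coupled setting of the talk) shows the iteration map that, viewed as a discrete-time system, has the optimizer as its equilibrium; this is the object whose stability the Lur'e-system perspective analyzes.

```python
import numpy as np

# ADMM on the toy problem  min_{x,z} 0.5*||x - a||^2 + 0.5*||z - b||^2  s.t.  x = z.
# The iteration map below is a discrete-time system whose equilibrium is the optimizer.
a = np.array([1.0, 0.0])
b = np.array([0.0, 2.0])
rho = 1.0                                  # ADMM penalty parameter (hypothetical value)

x = z = u = np.zeros(2)                    # primal blocks and scaled dual variable
for k in range(100):
    x = (a + rho * (z - u)) / (1.0 + rho)  # x-minimization step (closed form here)
    z = (b + rho * (x + u)) / (1.0 + rho)  # z-minimization step
    u = u + x - z                          # scaled dual update

print("x* =", x, " expected optimizer (a + b)/2 =", (a + b) / 2)
```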
Model Predictive Control (MPC) is among the most successful advanced control methods. Core reasons are its applicability to nonlinear multi-input systems with constraints, as well as the variety of tailored numerical algorithms and powerful software tools enabling efficient real-time implementations. The application of MPC to cyber-physical systems (or to multi-agent systems) is of pivotal interest in many application domains such as energy systems, logistics and transport, and robotics.
A key breakthrough in the development of tailored numerical methods for centralized nonlinear MPC (NMPC) has been real-time iteration schemes, which can be understood as applying iterations of Newton's method to a dynamic system. In this talk we present recent results on collaborative distributed NMPC for cyber-physical systems. We discuss a family of algorithms based on the decomposition of primal-dual Newton steps arising from Sequential Quadratic Programming. We explore how the underlying partially separable problem structure translates into partially separable Newton steps, which can then be computed in a decentralized fashion, i.e., based only on neighbor-to-neighbor communication. Moreover, we show that this numerical framework for decentralized real-time iterations in distributed NMPC allows for closed-loop stability guarantees and for scalability. Our findings are illustrated with several examples, including multiple real-time implementations.
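As a scalar illustration of the real-time-iteration idea (not the distributed SQP scheme of the talk), the sketch below takes a single Newton step per sampling instant on a hypothetical parametric problem, warm-started from the previous iterate instead of solving to convergence.

```python
import numpy as np

# Real-time-iteration flavor: the parameter p_t (e.g., the current state) changes every
# sampling instant, and we take one Newton step on the parametric problem
#   min_u  0.5*(u - p_t)^2 + 0.1*u^4
# warm-started from the previous iterate, rather than solving it to convergence.
u = 0.0
for t in range(50):
    p = np.sin(0.1 * t)                    # slowly varying parameter (hypothetical)
    g = (u - p) + 0.4 * u**3               # gradient at the current iterate
    H = 1.0 + 1.2 * u**2                   # Hessian (positive for this problem)
    u = u - g / H                          # one Newton step per sampling instant
    if t % 10 == 0:
        print(f"t={t:2d}  p={p:+.3f}  u={u:+.3f}  stationarity={abs((u - p) + 0.4 * u**3):.2e}")
```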
Online feedback optimization (OFO) refers to the design of feedback controllers that guide a physical system toward the solution of an optimization problem while enforcing physical and operational constraints. The core idea of this control paradigm is to repurpose classic optimization algorithms as dynamic feedback controllers by integrating online measurements from the physical plant. Unlike traditional optimization-based controllers, these methods require minimal model knowledge, no external set-points, and low computational effort, while fully leveraging real-time data. Thanks to these advantages, OFO has gained significant traction among researchers and practitioners across diverse fields, including electrical power grids, transportation systems, process control, and robotics. Its effectiveness has been demonstrated through simulations, experimental validation, and even industrial deployment on real distribution grids.
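A minimal sketch of this idea, assuming a hypothetical linear steady-state plant map and a quadratic cost, is a projected-gradient controller driven by measured outputs; the disturbance acting on the plant is never used by the controller.

```python
import numpy as np

# Online feedback optimization sketch: steer the plant input u toward the minimizer of
#   Phi(u, y) = 0.5*||u||^2 + 0.5*||y - y_ref||^2
# using *measured* outputs y instead of a full plant model. Only the input-output
# sensitivity H is assumed known (all numerical values are hypothetical).
H = np.array([[1.0, 0.5], [0.0, 1.0]])     # steady-state sensitivity dy/du (assumed known)
d = np.array([2.0, -1.0])                  # unknown disturbance (unused by the controller)
y_ref = np.array([1.0, 1.0])
alpha = 0.2                                # controller gain (step size)

def plant(u):
    # Physical plant: the controller only ever sees its measured output.
    return H @ u + d

u = np.zeros(2)
for k in range(200):
    y = plant(u)                                        # online measurement
    grad = u + H.T @ (y - y_ref)                        # gradient of Phi along the plant map
    u = np.clip(u - alpha * grad, -5.0, 5.0)            # projected gradient step (input limits)

print("input u =", u, " measured output y =", plant(u))
```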
In this talk, we present a general framework for designing and analyzing OFO controllers, addressing key challenges such as closed-loop stability, robustness, distributed and decentralized designs, and practical implementation aspects. Various applications, including swarm robotics, transmission grids, and recommender systems, will be discussed to demonstrate the potential and generality of the proposed framework.
The Linear-Quadratic Regulator (LQR) has long been a cornerstone of optimal control, with recent studies revisiting it in the model-free setting where the system dynamics are unknown. The best known sample complexity results that do not rely heavily on stability assumptions use two-point gradient estimation methods, which require identical randomness across different policies, an impractical assumption in real-world settings. In this talk, I will present a policy gradient approach inspired by REINFORCE that bypasses this limitation while maintaining a sample complexity similar to that of the two-point method, substantially improving upon the existing literature outside the realm of two-point gradient estimates.
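A toy sketch of a REINFORCE-style estimator for a scalar LQR problem (all system and algorithm parameters below are hypothetical) illustrates how the gradient with respect to the feedback gain can be estimated from rollouts of a single stochastic policy, with no shared randomness across different policies.

```python
import numpy as np

# REINFORCE-flavored policy gradient for a scalar LQR problem: the gradient of the expected
# cost with respect to the gain K is estimated with the score-function trick from rollouts
# of one stochastic policy, avoiding the coupled randomness needed by two-point estimates.
a, b, q, r = 0.9, 1.0, 1.0, 0.1            # system and cost parameters (toy values)
sigma, horizon, rollouts, eta = 0.5, 30, 100, 1e-3
rng = np.random.default_rng(1)

K = 0.0                                     # initial gain (open loop is stable here)
for it in range(200):
    grad_est = 0.0
    for _ in range(rollouts):
        x, cost, score = 1.0, 0.0, 0.0
        for t in range(horizon):
            eps = rng.standard_normal()
            u = -K * x + sigma * eps        # Gaussian exploration around the linear policy
            score += (-eps * x) / sigma     # d/dK log pi(u | x) for the Gaussian policy
            cost += q * x**2 + r * u**2
            x = a * x + b * u
        grad_est += score * cost / rollouts  # score-function (REINFORCE) estimator
    K = K - eta * grad_est                   # gradient step to reduce the expected cost

print("learned gain K =", K)
```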
Oscillator Ising machines (OIMs) are networks of coupled oscillators that seek the minimum energy state of an Ising model. These systems may be realized as physical devices using many different technologies, including magnetic tunnel junctions, spin-torque arrays, optoelectronic oscillators, and CMOS circuits. Since many NP-hard problems are equivalent to the minimization of an Ising Hamiltonian, OIMs may be used as dedicated hardware solvers for complex optimization problems that are intractable on conventional digital computers. One challenge, however, is that their performance is sensitive to the choice of tunable parameters and that formal convergence guarantees are mostly lacking, necessitating the development of new systems-theoretic tools to design and control these devices.
In this talk, I'll demonstrate that the stability of OIMs can be analyzed using tools from the spectral theory of random graphs, and that the probability of an equilibrium being asymptotically stable depends inversely on the value of the Ising Hamiltonian, biasing the system toward low-energy states. Next, I'll show how introducing random heterogeneities in the regularization parameters improves convergence to global minimizers. Finally, I'll conclude by discussing applications and ongoing challenges in the development of this exciting technology.
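As a toy numerical illustration (a simplified phase model with hypothetical random couplings, not the specific devices or results of the talk), the sketch below integrates Kuramoto-type OIM dynamics with second-harmonic injection and reads out spins from the settled phases.

```python
import numpy as np

# Toy oscillator Ising machine: Kuramoto-type phases with second-harmonic injection (gain K_s)
# that pushes each phase toward 0 or pi, i.e., toward a binary spin s_i = sign(cos(phi_i)).
# The dynamics descend a potential whose binarized value is the Ising energy
#   H(s) = -sum_{i<j} J_ij * s_i * s_j.
rng = np.random.default_rng(2)
n = 8
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1) + np.triu(J, 1).T          # symmetric couplings with zero diagonal

def ising_energy(s):
    return -0.5 * s @ J @ s

phi = rng.uniform(0, 2 * np.pi, n)           # random initial phases
K_s, dt = 1.0, 0.05                           # injection strength and Euler step (hypothetical)
for _ in range(2000):
    dphi = (J * np.sin(phi[None, :] - phi[:, None])).sum(axis=1) - K_s * np.sin(2 * phi)
    phi += dt * dphi                          # forward-Euler integration of the phase dynamics

s = np.sign(np.cos(phi))                      # read out spins from the settled phases
print("spins:", s.astype(int), " Ising energy:", ising_energy(s))
```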