Schedule & Abstracts (Tentative)

The workshop will start with a tutorial aimed at the typical CDC audience. A second tutorial-style session will outline the applications of SLS to model predictive control. Subsequent sessions will explore advanced topics and current frontiers in SLS theory and applications; newcomers to SLS will be better equipped to understand the technical details after the earlier tutorial sessions. John Doyle will close the workshop with a talk that historically contextualizes SLS and explores the future directions enabled by SLS.

Unless otherwise stated, talks will be given in person.

09:00-10:30 Intro to SLS (Tutorial)

Speaker: Jing Shuang (Lisa) Li

Abstract: System Level Synthesis (SLS) is a new parametrization for optimal control that enables distributed, scalable, and localized control algorithms. The core idea of SLS is to pivot from designing over the space of controllers to designing over the space of closed-loop maps, which describe the behavior of the full closed-loop system; this has additional benefits for applications where behavioral guarantees are desirable. This tutorial session aims to convey the fundamental mathematical ideas of SLS, so that workshop participants may more readily follow along with later sessions (all of which employ the SLS parametrization as a foundation), and also more readily apply SLS to their own research to form scalable algorithms.
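As a concrete anchor for the tutorial (this is the standard state-feedback statement of the parametrization, not taken from the abstract): for a system x_{t+1} = A x_t + B u_t + w_t, SLS designs directly over the closed-loop maps Phi_x, Phi_u from disturbance to state and input.

```latex
% State-feedback SLS parametrization (standard form):
% x = \Phi_x w and u = \Phi_u w are the closed-loop maps.
\begin{aligned}
&\text{Achievable closed-loop maps form the affine subspace}\\
&\qquad \begin{bmatrix} zI - A & -B \end{bmatrix}
        \begin{bmatrix} \Phi_x \\ \Phi_u \end{bmatrix} = I,
\qquad \Phi_x,\ \Phi_u \in \tfrac{1}{z}\mathcal{RH}_\infty,\\
&\text{and any such pair is realized by the internally stabilizing controller}\\
&\qquad \mathbf{K} = \Phi_u \Phi_x^{-1}.
\end{aligned}
```

Because structural constraints (sparsity, locality, delay) on Phi_x and Phi_u are convex constraints on the decision variables, distributed and localized synthesis stays tractable.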

10:30-10:40 Break

10:40-12:00 Model Predictive Control via SLS

Speaker: Carmen Amo Alonso

Abstract: We present the Distributed and Localized Model Predictive Control (DLMPC) algorithm for large-scale linear systems. DLMPC is a distributed closed-loop model predictive control (MPC) scheme wherein only local state and model information needs to be exchanged between subsystems for the computation and implementation of control actions. We use the System Level Synthesis (SLS) framework to reformulate the centralized MPC problem, and show that this allows us to naturally impose localized communication constraints between sub-controllers. The structure of the resulting problem can be exploited to develop an Alternating Direction Method of Multipliers (ADMM) based algorithm that allows for distributed and localized computation of closed-loop control policies. We also show that this approach enjoys feasibility and asymptotic stability guarantees. We leverage the SLS framework to express the maximal robust positive invariant set for the closed-loop system and its corresponding Lyapunov function, both in terms of the closed-loop system responses. We use the invariant set and the Lyapunov function as the terminal set and terminal cost of the DLMPC problem, respectively, and show that this is enough to guarantee recursive feasibility and stability with minimal conservatism. We provide fully distributed and localized algorithms to compute the terminal set offline, and also provide the necessary additions to the online DLMPC algorithm to accommodate coupled terminal constraints and costs. In all algorithms, only local information exchanges are necessary, and the computational complexity of each local subproblem is independent of the global system size; we demonstrate this analytically and experimentally.
DLMPC is the first MPC algorithm that allows for the scalable computation and implementation of distributed closed-loop control policies and enjoys minimally conservative yet fully distributed guarantees for recursive feasibility and asymptotic stability, for both nominal and robust settings.
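The ADMM-based decomposition behind DLMPC can be illustrated with a minimal consensus-ADMM sketch. The scalar quadratic local costs below are hypothetical stand-ins for the actual DLMPC subproblems; the point is only the pattern of local solves plus limited information exchange.

```python
# Minimal consensus-ADMM sketch (illustrative only; the local costs
# f_i(x) = 0.5 * (x - a_i)^2 are hypothetical, not DLMPC subproblems).
# Each "subsystem" i solves a small local problem; only the consensus
# variable z is exchanged, mirroring the local-exchange idea in DLMPC.

a = [1.0, 4.0, 7.0]      # hypothetical local data for 3 subsystems
rho = 1.0                # ADMM penalty parameter
x = [0.0] * len(a)       # local primal variables
u = [0.0] * len(a)       # scaled dual variables
z = 0.0                  # consensus variable

for _ in range(200):
    # Local step: argmin_x 0.5*(x - a_i)^2 + (rho/2)*(x - z + u_i)^2
    x = [(ai + rho * (z - ui)) / (1.0 + rho) for ai, ui in zip(a, u)]
    # Consensus step (a global average here; localized in DLMPC)
    z = sum(xi + ui for xi, ui in zip(x, u)) / len(x)
    # Dual step
    u = [ui + xi - z for xi, ui in zip(x, u)]

print(z)  # converges to the minimizer of the summed costs: mean of a
```

For these quadratic costs the iterates converge to the average of the local data, the consensus minimizer; in DLMPC the same splitting structure is what lets each subproblem stay small and local.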

12:00-13:00 Lunch Break

13:00-13:30 Learning-Based Control via SLS

Speaker: Nikolai Matni

Abstract: Robust data-driven variants of SLS-based controller synthesis have been at the center of recent exciting developments at the intersection of robust learning and control. In this talk, I will provide a self-contained overview of results in this area, with a particular emphasis on the interplay between robust controller synthesis and end-to-end sample-complexity bounds for learning to control an unknown system. I will end with an overview of when you should (and should not!) use SLS for data-driven control, and highlight what I view as exciting directions for future work.

13:30-14:00 Distributed Learning and Control

Speaker: Jing Yu

Abstract: Data-driven methods have seen great success in controlling unknown single-agent dynamical systems. There is growing interest in extending the application of these techniques to distributed and safety-critical systems. In this talk, I describe the problem of stabilizing an unknown networked linear system under communication constraints and adversarial disturbances. We propose the first provably stabilizing algorithm for this problem. In particular, our approach avoids the need for system identification by leveraging a distributed version of nested convex body chasing. Our work extends System Level Synthesis to enable fully distributed learning and control under a broad class of communication delays.

14:00-14:10 Break

14:10-14:40 Infinite-Horizon SLS

Speaker: Olle Kjellqvist

Abstract: System level synthesis (SLS) is a promising approach that formulates structured optimal controller synthesis problems as convex problems. We will describe a method that solves a class of infinite-horizon SLS problems, without the finite-impulse response relaxation commonly used in previous work. This class of problems includes structured LQR design under localization and delay constraints. We will first provide solutions and controller realizations for the state-feedback problems. Then, we will use the state-feedback solutions to construct optimal distributed Kalman filters with limited information exchange. We combine the distributed Kalman filter with state-feedback control to perform LQG control with localization and delay constraints. We provide agent-level implementation details for the resulting output-feedback state-space controller.

14:40-15:10 SLS for Spatially-Invariant Systems

Speaker: Emily Jensen

Abstract: We consider an optimal controller design problem for large-scale systems with spatially-invariant dynamics and spatially-distributed controls and measurements. We argue that this spatially-invariant setting is especially useful in that it allows for the derivation of explicit solutions that provide computational and analytic insight which may not be clear from numerical results alone. The underlying dynamics of the spatially-invariant systems studied may be continuous or discrete, and the spatial domain may be finite, countably infinite, or uncountable, allowing for the study of, e.g., vehicular platoons, distributed consensus, flow control for drag reduction, distributed arrays of microelectromechanical systems, and systems described by PDEs. In these settings, centralized control is not tractable and communication between distributed subcontroller units is limited. Following the System Level Synthesis (SLS) framework and imposing a prescribed spatial spread on the closed-loop responses results in a convex design problem which guarantees a degree of controller interaction locality.

For spatially-invariant systems of (countably) infinite extent, we demonstrate that in addition to convexity, further interesting properties of the optimal controller design problem emerge. In this case, the H2 problem reduces to a standard model-matching problem with finitely many transfer function parameters. The number of transfer function parameters scales linearly with the amount of spatial spread permitted in the closed-loop mappings. For spatially-invariant systems of (uncountably) infinite extent, i.e. systems governed by PDEs, it is known that the unconstrained optimal controller inherits an exponential spatial decay rate from the plant and the spatially-invariant design objective. We illustrate that closed-loop design via SLS may provide a method for further restricting the locality of the controller. We illustrate our results by analyzing a vehicular platoon problem and a diffusion problem.

15:10-15:20 Break

15:20-15:50 Nonlinear SLS

Speaker: Dimitar Ho

Abstract: In this talk, we will show that there is a universal connection between closed-loop maps and the controllers that realize them in nonlinear discrete-time systems: given an achievable stable closed-loop map, we can follow a systematic procedure to construct an internally stable causal controller that realizes it. In the linear case, this relationship has been used as a key result in the recently developed System Level Synthesis (SLS) framework, and we demonstrate how it finds its analogue for general nonlinear discrete-time systems. We present necessary and sufficient conditions that characterize the entire space of closed-loop maps achievable by some causal controller for a given system. Furthermore, we will show that the same causal controller construction, applied to maps that are not achievable closed-loop maps, can still stabilize the nonlinear system if those maps approximate the feasibility conditions well enough. Finally, we will discuss how this method opens up new ways towards robust nonlinear controller synthesis, by exploring two direct applications of this approach: the design of trajectory-tracking controllers for nonlinear systems using linear SLS controllers, and a method to stably "blend" multiple linear SLS controllers into one nonlinear controller that improves closed-loop performance.

15:50-16:20 Localized Optimization for SLS

Speaker: James Anderson

Abstract: Under the system level synthesis (SLS) framework, localization is both a desirable property of the closed-loop map and a structural feature that permits a purely distributed synthesis algorithm. However, in many cases the resulting synthesis does not completely decouple, and a splitting algorithm such as ADMM is required in order to share information. For model predictive control scenarios, coupling in the objective and the constraints always leads to coupled optimization problems. To date, it has been implicitly assumed that the resulting optimization problem(s) can be solved in a timely manner. In this talk we will present our recent work on distributed optimization formulated under the federated learning framework. The goal is to develop communication-efficient and robust optimization algorithms for SLS-based distributed control. We present the FedADMM algorithm, which provides a more localized approach to distributed optimization. We show that it converges when i) clients/agents drop out (i.e., stop sharing information), ii) inexact solutions to local optimization problems are returned, and iii) the local compute resources are heterogeneous. Moreover, data is held locally and not transmitted, thus reducing the communication overhead.
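The dropout robustness in point i) can be illustrated with a toy partial-participation consensus-ADMM loop. This is not the FedADMM algorithm itself; the scalar quadratic client costs and the round-robin dropout pattern are hypothetical stand-ins chosen only to show that consensus can survive clients skipping rounds.

```python
# Toy partial-participation consensus ADMM (illustrative; not FedADMM).
# Each round, one client "drops out" (round-robin) and keeps its stale
# local state. Client costs f_i(x) = 0.5 * (x - a_i)^2 are hypothetical;
# the consensus minimizer is the average of a.

a = [2.0, 5.0, 11.0]     # hypothetical local data held by 3 clients
rho = 1.0                # ADMM penalty parameter
n = len(a)
x = [0.0] * n            # local primal variables (stay on clients)
u = [0.0] * n            # scaled dual variables (stay on clients)
z = 0.0                  # server-side consensus variable

for t in range(900):
    dropped = t % n      # this client skips the round entirely
    active = [i for i in range(n) if i != dropped]
    # Active clients solve their local proximal step against current z
    for i in active:
        x[i] = (a[i] + rho * (z - u[i])) / (1.0 + rho)
    # Server aggregates; the dropped client's stale state is reused
    z = sum(x[i] + u[i] for i in range(n)) / n
    # Active clients update their duals against the new z
    for i in active:
        u[i] = u[i] + x[i] - z

print(z)
```

Despite every client missing one round in three, the iterates still settle at the consensus minimizer (here, the average of the local data), which is the qualitative behavior the talk's convergence guarantees formalize.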

16:20-16:50 SLS: Big Picture and Future Directions

Speaker: John Doyle

Abstract: John Doyle's talk puts SLS in historical context. SLS theory takes what were among the most intractable problems and makes them among the easiest, but it does not solve Witsenhausen's infamous counterexample or many problems posed since. We'll discuss the essentially different assumptions SLS makes, and why they both lead to extremely scalable O(1) algorithms and are so appropriate for real applications in bio, neuro, and tech networks. We'll also discuss how SLS fits into a broader and universal theory of network architecture essential for the future of these networks.