Co-PI Seminars

Co-PI Seminars on optimization, control, and learning

This series of Control & Pizza (Co-PI) seminars focuses on recent advances in optimization, control, and learning. Invited speakers will present their current research. We will provide free pizza/food for attendees.

This is jointly organized by Prof. Sylvia Herbert (MAE), Prof. Jorge Poveda (ECE), Prof. Yuanyuan Shi (ECE), Prof. Behrouz Touri (ECE), and Prof. Yang Zheng (ECE). 

Upcoming Seminars

Past Seminars

Winter 2024

[2024.03.20]

Deceptive Nash Equilibrium Seeking in Noncooperative Games

Speaker: Michael Tang,  [Slides]

Abstract: This study introduces a new class of Nash-seeking algorithms that incorporate deception. Deceptive mechanisms are strategically employed to mislead competing entities in a noncooperative environment, leading to the emergence of optimal control policies that account for the inherent adversarial nature of the interactions. We investigate the theoretical foundations of game-theoretic deception in the context of Nash seeking, providing a comprehensive analysis of the impact of our proposed deception scheme on the convergence properties, stability, and performance of the closed-loop system. Simulation results and case studies show how the proposed approach can achieve improved objective values for the deceptive players. This work contributes to the broader field of autonomous systems and game theory, offering insights into the application of deception to enhance the robustness and efficiency of multi-agent adaptive systems.


Bio: Michael Tang is a first-year Ph.D. student in the ECE department. His research interests include nonlinear control, game theory, and adaptive systems. He received his B.S. degree in Electrical Engineering from UCSD.

[2024.03.06]

A Differentiable PDE Approach for Building Control

Speaker: Yuexin Bian,  [Slides]

Abstract: In this work, we present an innovative partial differential equation (PDE)-based learning and control framework for building HVAC control. The goal is to determine the optimal airflow supply rate and supply air temperature to minimize the energy consumption while maintaining a comfortable and healthy indoor environment. In the proposed framework, the dynamics of airflow, thermal dynamics, and air quality (measured by CO2 concentration) are modeled using PDEs. We formulate both the system learning and optimal HVAC control as PDE-constrained optimization, and we propose a gradient-descent approach based on the adjoint method to effectively learn the unknown PDE model parameters and optimize building control actions. We demonstrate that the proposed approach can accurately learn the building model on both synthetic and real-world datasets. Furthermore, the proposed approach can significantly reduce energy consumption while ensuring occupants’ comfort and safety constraints compared to traditional control methods such as maximum airflow policy, learning-based control with reinforcement learning, and optimization-based control with ODE models.
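As a rough illustration of the adjoint idea behind this framework (not the authors' implementation), the sketch below fits the diffusion coefficient of a semi-discretized 1D heat equation by gradient descent, where each gradient comes from a single backward adjoint sweep; all names and parameter values here are hypothetical.

```python
import numpy as np

def simulate(theta, x0, Lap, dt, K):
    """Explicit-Euler rollout of the semi-discretized heat equation x' = theta * Lap @ x."""
    A = np.eye(len(x0)) + dt * theta * Lap
    xs = [x0]
    for _ in range(K):
        xs.append(A @ xs[-1])
    return xs

def loss_and_grad(theta, x0, Lap, dt, K, x_obs):
    """J = ||x_K - x_obs||^2 and dJ/dtheta from one backward adjoint sweep."""
    xs = simulate(theta, x0, Lap, dt, K)
    A = np.eye(len(x0)) + dt * theta * Lap
    lam = 2.0 * (xs[-1] - x_obs)               # adjoint terminal condition
    grad = 0.0
    for k in range(K - 1, -1, -1):
        grad += lam @ (dt * (Lap @ xs[k]))     # local sensitivity w.r.t. theta
        lam = A.T @ lam                        # propagate adjoint backward
    return float(np.sum((xs[-1] - x_obs) ** 2)), grad

# 1D rod on [0, 1] with zero Dirichlet boundaries, 19 interior points.
n, dt, K = 19, 0.005, 50
dx = 1.0 / (n + 1)
Lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx ** 2
x0 = np.sin(np.pi * dx * np.arange(1, n + 1))
x_obs = simulate(0.1, x0, Lap, dt, K)[-1]      # synthetic data, true diffusivity 0.1

theta = 0.05                                    # initial guess
for _ in range(200):                            # plain gradient descent
    J, g = loss_and_grad(theta, x0, Lap, dt, K, x_obs)
    theta -= 0.002 * g
```

Each gradient costs one forward and one backward sweep regardless of the number of parameters, which is what makes adjoint-based gradient descent attractive for PDE-constrained problems.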

 

Bio: Yuexin Bian is a third-year Ph.D. student in the ECE department. Her research interests broadly lie in optimization and control, with applications in power systems. Before starting at UC San Diego, she completed her undergraduate degree in Electrical Engineering at Zhejiang University, China.

[2024.02.07]

Frameworks for High Dimensional Optimization

Speaker: Palma London,  [The slides will be updated after the seminar]

Abstract: I present frameworks for solving extremely large optimization problems. Today, practical applications require optimization solvers to work at extreme scales, but existing solvers often do not scale as desired. I present black-box acceleration algorithms for speeding up optimization solvers in both distributed and parallel settings. Given a huge problem, I develop dimension reduction techniques that allow the problem to be solved in a fraction of the original time and simultaneously make the computation amenable to distributed computation. Efficient, dependable, and secure distributed computing is increasingly fundamental to a wide range of core applications, including distributed data centers, decentralized power grids, coordination of autonomous devices, and scheduling and routing problems.


In particular, I consider two optimization settings of interest. First, I consider packing linear programs (LPs). LP solvers are fundamental to many problems in supply chain management, routing, learning, and inference. I present a framework that speeds up linear programming solvers such as CPLEX and Gurobi by an order of magnitude while maintaining provably near-optimal solutions. Second, I present a distributed algorithm that achieves an exponential reduction in message complexity compared to existing distributed methods. I present both empirical demonstrations and theoretical guarantees on the quality of the solution and the speedup provided by my methods.

 

Bio: Dr. Palma London received her Ph.D. and M.Sc. in Computer Science at Caltech. She received her B.S.E.E. in Electrical Engineering and B.S. in Mathematics at the University of Washington. She is currently a postdoctoral researcher at UCSD. Her research broadly spans convex optimization, machine learning and distributed algorithms.

[2024.01.24]

Spikes in the training loss of SGD, catapults and feature learning

Speaker: Libin Zhu,  [Slides]

Abstract: We first present an explanation for the common occurrence of spikes in the training loss when neural networks are trained with stochastic gradient descent (SGD). We provide evidence that these spikes are "catapults", an optimization phenomenon originally observed in GD with large learning rates in [Lewkowycz et al. 2020]. We empirically show that these catapults occur in a low-dimensional subspace spanned by the top eigenvectors of the tangent kernel, for both GD and SGD. Second, we posit an explanation for how catapults lead to better generalization by demonstrating that catapults promote feature learning by increasing alignment with the Average Gradient Outer Product (AGOP) of the true predictor. Furthermore, we demonstrate that a smaller batch size in SGD induces a larger number of catapults, thereby improving AGOP alignment and test performance.

 

Bio: Libin Zhu is a PhD candidate in computer science at UCSD, working with Misha Belkin. Prior to UCSD, he received his BS in mathematics from Zhejiang University. His research has been focused on the fundamental understanding of deep learning, e.g., the optimization and generalization of neural networks.

[2024.01.10]

Convex approximations of Data-enabled Predictive Control with Applications to Mixed Traffic

Speaker: Xu Shang,  [Slides]


Abstract: Willems' fundamental lemma, which characterizes linear time-invariant (LTI) systems using input and output trajectories, has found many successful applications. Combining it with receding horizon control leads to the popular Data-EnablEd Predictive Control (DeePC) scheme. DeePC was first established for LTI systems and has since been extended and applied to practical systems beyond the LTI setting. However, the relationship between different DeePC variants, involving regularization and dimension reduction, remains unclear. In this talk, we will first introduce a new bi-level optimization formulation for DeePC which combines a data pre-processing step as an inner problem (system identification) and predictive control as an outer problem (online control). We will next discuss a series of convex approximations by relaxing some hard constraints in the bi-level optimization as suitable regularization terms, accounting for an implicit identification. These include some existing DeePC variants as well as two new variants, for which we establish their equivalence under appropriate settings. In the last part of this talk, we will present some remarkable empirical performances of an adapted method, called DeeP-LCC, in controlling connected and automated vehicles (CAVs) in the mixed traffic system. This talk is based on our recent work: https://arxiv.org/abs/2312.15431, and https://arxiv.org/abs/2310.00509.
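To make the starting point concrete, here is a minimal numerical sketch of Willems' fundamental lemma (not of the DeePC variants discussed in the talk): for a small hypothetical LTI system, any fresh length-L trajectory lies in the column span of a stacked input/output Hankel matrix built from one persistently exciting experiment.

```python
import numpy as np

def hankel(w, L):
    """Depth-L block-Hankel matrix of a signal w with shape (T, m)."""
    T, m = w.shape
    return np.column_stack([w[i:i + L].reshape(-1) for i in range(T - L + 1)])

# Hypothetical 2-state SISO system used only for illustration.
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def rollout(u, x0=np.zeros(2)):
    """Output trajectory of the LTI system driven by input sequence u."""
    x, ys = x0, []
    for uk in u:
        ys.append(C @ x)
        x = A @ x + B @ uk
    return np.array(ys)

rng = np.random.default_rng(0)
T, L = 60, 8
u_d = rng.standard_normal((T, 1))               # persistently exciting data
y_d = rollout(u_d)
H = np.vstack([hankel(u_d, L), hankel(y_d, L)]) # stacked input/output Hankel

u_new = rng.standard_normal((L, 1))             # a fresh length-L experiment
w_new = np.concatenate([u_new.reshape(-1), rollout(u_new).reshape(-1)])
g, *_ = np.linalg.lstsq(H, w_new, rcond=None)   # find g with H @ g ~ w_new
residual = float(np.linalg.norm(H @ g - w_new)) # ~ 0: trajectory is in the span
```

DeePC replaces the parametric model with exactly this Hankel representation: past data pins down the initial condition implicitly, and the future portion of the trajectory is optimized through the coefficient vector g.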

 

Bio: Xu Shang is a first-year Ph.D. student, working with Prof. Yang Zheng in the SOC lab at UCSD. His research interests include learning, optimization, and control, with a particular focus on developing theoretical performance guarantees for utilizing data-driven control in nonlinear, stochastic systems and its integration with various learning algorithms. 


Xu received his B.S. in Mechanical Engineering from Shanghai Jiao Tong University and his M.S. from the University of Michigan. While at the University of Michigan, he worked on bipedal robots, which inspired him to develop control theorems and algorithms to address real-world challenges. Away from work, Xu enjoys hiking, reading, and playing soccer. 

 


Fall 2023

[2023.11.15]

Information as Control: The Role of Communication in Distributed Systems

Speaker: Bryce Ferguson


Abstract:  Distributed decision-making has become an increasingly popular method of reducing physical and computational difficulties in large-scale engineered systems. However, the emergent system behavior induced by these local decisions need not be optimal. As a method to elicit greater coordination, we can design not just how system components act but also how they communicate. I will discuss several ways in which information-communication channels can be exploited as a method to control overall system behavior. Particularly, I will present a game-theoretic model for distributed decision-making and demonstrate the possible benefits and costs of increasing communication in terms of gain/loss to equilibrium efficiency and solution complexity.


Bio: Bryce Ferguson is a PhD candidate in the Department of Electrical and Computer Engineering at the University of California, Santa Barbara. He works under the supervision of Jason R. Marden in the Center for Control, Dynamical-Systems, and Computation (CCDC). Bryce received his B.S. and M.S. degrees in Electrical Engineering from the University of California, Santa Barbara in 2018 and 2020, respectively, and his A.A. in Mathematics from Santa Rosa Junior College in 2016. In 2022, he was named a Rising Star in the NSF Co-sponsored Cyber-Physical Systems Rising Stars Workshop.

[2023.11.01]

Transition to linearity and an optimization theory for wide neural networks

Speaker: Chaoyue Liu


Abstract:  In this talk, I will discuss an interesting property of neural networks: transition to linearity. The property shows that neural networks simplify towards linear models, as network width increases to infinity. I will show how this mathematical property is deeply connected with the structures of neural networks. Moreover, I will show that this property regularizes the square loss function of neural networks and provides convergence guarantees for gradient descent and SGD.


Bio: Chaoyue Liu is currently a postdoc at the Halıcıoğlu Data Science Institute (HDSI), UC San Diego, working with Dr. Misha Belkin. He obtained his Ph.D. degree in Computer Science from The Ohio State University. After that, he spent one year at Meta Platforms Inc. as a research scientist. He also holds B.S. and M.S. degrees in physics from Tsinghua University.

[2023.10.18]

Structure-preserving Learning of Reduced-order Models for Large-scale Dynamical Systems

Speaker: Harsh Sharma


Abstract:  Data-driven reduced-order models of large-scale computational models play a key role in a variety of tasks ranging from control of soft robotics to the design of mechanical structures to climate modeling. However, a vast majority of data-driven reduced-order models are designed to minimize the overall error over the training data, which leads to models that violate the underlying physical principles and provide inaccurate predictions outside the training data regime. This talk will present a structure-preserving learning approach that embeds physics into the data-driven operator inference framework to ensure that the learned models preserve the underlying geometric structure. The first half of this talk will focus on the data-driven symplectic model reduction of Hamiltonian systems. In the second half, we will discuss structure-preserving model reduction of large-scale nonlinear mechanical models. We will also discuss an ML-enhanced operator inference framework for predictive modeling in applications where the underlying physics is not well understood. The advantages of structure preservation in data-driven reduced-order modeling will be illustrated by various examples, which include biomimetic control of a soft robot, PDE parameter estimation from noisy and sparse data, and predictive modeling of a jointed structure from experimental measurements.


Bio: Harsh Sharma is a Postdoctoral Scholar working with Boris Krämer in the Department of Mechanical and Aerospace Engineering at UC San Diego, where he taught an undergraduate-level course last year and is currently teaching a graduate-level course in the MAE department. Prior to this appointment, he received his PhD in Aerospace Engineering and MS in Mathematics from Virginia Tech. His research focuses on the intersection of structure preservation, reduced-order modeling, and deep learning, with a specific emphasis on nonintrusive model reduction of large-scale dynamical systems.

[2023.10.04]

Koopman-Hopf Reachability Analysis

Speaker: Will Sharpless


Abstract: The Hopf formula for Hamilton-Jacobi reachability (HJR) analysis has been proposed to solve high-dimensional differential games, producing the set of initial states and corresponding controller required to reach (or avoid) a target despite bounded disturbances. As a space-parallelizable optimization problem, the Hopf formula avoids the curse of dimensionality that afflicts standard dynamic-programming HJR, but is restricted to linear time-varying systems and convex games. 


To harness the Hopf formula and compute reachable sets for high-dimensional nonlinear systems, we pair the Hopf solution with Koopman theory for global linearization. By first lifting a nonlinear system to a linear space and then solving the Hopf formula, approximate reachable sets can be efficiently computed that are much more accurate than local linearizations. 


Furthermore, we construct a Koopman-Hopf disturbance-rejecting controller, and test its ability to drive a 10-dimensional nonlinear glycolysis model. We find that it significantly out-competes expectation-minimizing and game-theoretic model predictive controllers with the same Koopman linearization in the presence of bounded stochastic disturbance. Finally, we conclude with a discussion of future work towards error bounds and guarantees on the linear Hopf solution.

 

Bio: Will is a Ph.D. student at UCSD working in the SAS lab on applications of HJR algorithms in high dimensions. His interests revolve around control and optimization of nonlinear, stochastic systems for autonomous devices in robotics, medicine, and economics. He is particularly captivated by the graphs underlying differential systems and how their topology influences stability, and he sometimes enjoys employing learning methods in these spaces. As an undergraduate, Will studied applied math and biology at UC Berkeley, where he discovered a fascination with the theory of nonlinear systems and control arising in metabolic networks and cellular ecology. When away from work, Will is likely running, reading, or listening to music.

[2023.09.20]

Reset Control for Power Systems

Speaker: Vishal Shenoy,  [Slides]


Abstract: This work investigates achieving rapid frequency regulation in Virtual Power Plants (VPPs) using reset control, a type of hybrid control system introduced by J.C. Clegg in 1958. Reset controllers selectively reset certain elements of the control system, such as integrators, to dissipate energy and improve transient performance. Recent research has expanded on these techniques using passivity tools, and they have been studied in power systems to enhance microgrid performance. However, time-triggered resets in hybrid control require optimal tuning of the resetting frequency, which is challenging in systems with unknown and complex dynamics. To overcome this challenge, we explore state-triggered resets for VPPs based on system signals. Our findings are validated using the FlexPower model, which incorporates realistic and high-fidelity representations of wind turbines, photovoltaic generators, and batteries.

[2023.09.06]

Learning Koopman Eigenfunctions and Invariant Subspaces from Data

Speaker: Masih Haseli,  [Slides]


Abstract: Koopman operator theory provides an alternative description of dynamical phenomena by encoding the evolution of functions under dynamics within a vector space. The linearity of the Koopman operator, combined with its spectral properties, especially its eigenfunctions and eigenvalues, lead to a regular algebraic structure that can be leveraged for data-driven identification and prediction. Given that the underlying vector space is often infinite-dimensional, it becomes necessary to find finite-dimensional descriptions of the operator’s action. This talk will describe our recent efforts to establish objective measures for assessing the quality of finite-dimensional Koopman-based models. Additionally, we will discuss designing efficient algebraic algorithms to identify or approximate finite-dimensional models with convergence guarantees and tunable accuracy.
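A minimal sketch of one standard route to such finite-dimensional descriptions, extended dynamic mode decomposition (EDMD), on a toy map whose chosen dictionary spans an exactly invariant 3-dimensional subspace; the system and dictionary are hypothetical choices for illustration, not from the talk.

```python
import numpy as np

a, b, c = 0.9, 0.5, 1.0                    # hypothetical system parameters

def step(x):
    """Nonlinear map x1+ = a*x1, x2+ = b*x2 + c*x1^2."""
    return np.array([a * x[0], b * x[1] + c * x[0] ** 2])

def psi(x):
    """Dictionary {x1, x2, x1^2}: a Koopman-invariant subspace for this map."""
    return np.array([x[0], x[1], x[0] ** 2])

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))  # sampled states
PX = np.array([psi(x) for x in X]).T       # dictionary evaluated at x
PY = np.array([psi(step(x)) for x in X]).T # dictionary at the next state
K = PY @ np.linalg.pinv(PX)                # EDMD: least-squares Koopman matrix

# Because the subspace is invariant, linear predictions through K are exact:
x, z = np.array([0.7, -0.3]), psi(np.array([0.7, -0.3]))
for _ in range(5):
    x, z = step(x), K @ z
pred_err = float(np.linalg.norm(z[:2] - x))
```

Here the eigenvalues of K (a, b, and a squared) are exact Koopman eigenvalues. For general dictionaries the subspace is only approximately invariant, and quantifying how good such an approximation is corresponds to the quality measures discussed in the talk.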


This is a joint work with Prof. Jorge Cortes.

 

Bio: Masih Haseli received the B.Sc. and M.Sc. degrees in electrical engineering from the Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran, in 2013 and 2015, respectively. He also earned a Ph.D. degree in Engineering Sciences (Mechanical Engineering) from the University of California, San Diego, CA, USA, in 2022. He currently serves as a postdoctoral researcher in the Department of Mechanical and Aerospace Engineering at the University of California, San Diego, CA, USA. His research interests encompass system identification, nonlinear systems, network systems, and data-driven modeling and control. 


Dr. Haseli received the Bronze Medal at the 2014 Iran National Mathematics Competition and was awarded the Best Student Paper Award at the 2021 American Control Conference. 

Spring 2023

[2023.06.07]

Structured Neural-PI Control with End-to-End Stability and Steady-State Optimality Guarantees

Speaker: Wenqi Cui,  [Slides]


Abstract: This talk focuses on the control of networked systems (especially power systems) with the goal of optimizing both transient and steady-state performance. While neural network-based nonlinear controllers have shown superior performance in various applications, their lack of provable guarantees has restricted their adoption in high-stakes real-world applications. We will start with the power system frequency control problem to show the construction of nonlinear proportional-integral (PI) controllers that guarantee stability and steady-state economic dispatch. Through a modular abstraction using equilibrium-independent passivity, we further generalize the structured-PI control to a range of networked systems. The key structure is strict monotonicity of the proportional and integral terms, which is parameterized via gradients of strictly convex neural networks (SCNNs). In addition, the SCNNs serve as Lyapunov functions, giving us end-to-end performance guarantees. Experiments on power and traffic networks demonstrate the effectiveness of the proposed approach.


Bio: Wenqi Cui is a Ph.D. candidate in the Department of Electrical and Computer Engineering at the University of Washington, advised by Prof. Baosen Zhang. Previously, she received the B.Eng. degree and M.S. degree in electrical engineering from Southeast University and Zhejiang University in 2016 and 2019, respectively. Her research interests lie broadly in machine learning, control, and optimization for cyber-physical energy systems. She was selected as a Rising Star in EECS (2022) and a Rising Star in CPS (2023).

[2023.05.29]

Data-Driven Safety Quantification using Infinite-Dimensional Robust Convex Optimization

Speaker: Jared Miller,  [Slides]


Abstract: Safety quantification attaches interpretable numbers to safe trajectories of dynamical systems. Examples of such quantifications include finding the minimum distance of closest approach to an unsafe set, or finding the minimum control effort required to crash into the unsafe set. A safe trajectory with a large distance of closest approach may be acceptable, but an agent that is informed of a small distance of closest approach may want to perform actuation to increase this distance. This work represents the distance and crash-safety problems as infinite-dimensional linear programs (LPs) in smooth auxiliary functions, based on existing work in optimal control, peak estimation, and peak-minimizing control. These LPs can be extended towards modifications in dynamics, such as the analysis of systems with adverse uncertainty processes. The infinite-dimensional LPs are solved using the moment Sum-of-Squares (SOS) hierarchy, which finds a convergent sequence of outer approximations (under compactness and regularity assumptions) to the true safety quantification task.


One particular form of dynamics is highlighted: systems with disturbance-affine dynamics in which the uncertain set is semidefinite-representable (SDR). This setting occurs in data-driven systems analysis, in which the dynamics are described by a linear combination of basis functions consistent with SDR noise descriptions (e.g., L-infinity, L2, energy bounds). Crash-safety can be interpreted in the data-driven framework as finding the minimum noise corruption in the observed data required for the system to contact the unsafe set. The bottleneck in the SOS program is the Lie derivative nonnegativity constraint posed over the time-state-disturbance set. We utilize an infinite-dimensional analogue of robust counterparts (from robust optimization) to eliminate the disturbance variables, forming tractable convex optimization problems for safety quantification without introducing conservatism. This robust counterpart technique can be extended anywhere the disturbance-affine + SDR structure is found, such as barrier functions, reachable set estimation, and optimal control.


Joint work with: Mario Sznaier (Northeastern University)


Bio: Jared Miller is a postdoctoral researcher at the Robust Systems Lab at Northeastern University, advised by Mario Sznaier. He received his B.S. and M.S. degrees in Electrical Engineering from Northeastern University in 2018 and his Ph.D. in Electrical Engineering from Northeastern University in 2023. He is a recipient of the 2020 Chateaubriand Fellowship from the Office for Science Technology of the Embassy of France in the United States. He was given an Outstanding Student Paper award at the IEEE Conference on Decision and Control in 2021 and in 2022. His current research topics include safety verification and data-driven control. His interests include large-scale convex optimization, nonlinear systems, semi-algebraic geometry, and measure theory.

[2023.05.24]

Zeroth-Order Non-Convex Optimization for Cooperative Multi-Agent Systems with Diminishing Step Size and Smoothing Radius

Speaker: Xinran Zheng,  [Slides]


Abstract: In this work, we study a class of zeroth-order distributed optimization problems, where each agent controls a partial vector and observes a local cost that depends on the joint vector of all agents, and the agents communicate with each other subject to time delays. We propose a gradient descent-based algorithm using two-point gradient estimators with diminishing smoothing radius and diminishing step size, and we establish its convergence rate to a first-order stationary point for general nonconvex problems. A byproduct of the diminishing step size and smoothing radius, as opposed to a fixed-parameter scheme, is that our algorithm requires no information about the local cost functions. This makes the method appealing in practice, as it allows optimizing an unknown (black-box) global function while the performance adaptively matches the problem instance parameters.
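For intuition, here is a single-agent sketch of a two-point gradient estimator with diminishing step size and smoothing radius; the distributed, delayed-communication setting of the talk is not modeled, and all parameter choices are illustrative.

```python
import numpy as np

def two_point_grad(f, x, delta, rng):
    """Two-point zeroth-order estimator: probes f at x +/- delta*u only."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                      # random direction on the sphere
    return (len(x) / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

def zo_descent(f, x0, steps, rng, a0=0.05, d0=0.1):
    """Gradient descent with diminishing step size a_t and smoothing radius d_t."""
    x = x0.copy()
    for t in range(1, steps + 1):
        g = two_point_grad(f, x, d0 / t ** 0.25, rng)   # d_t = d0 / t^0.25
        x -= (a0 / t ** 0.75) * g                       # a_t = a0 / t^0.75
    return x

f = lambda x: float(np.sum((x - 1.0) ** 2))     # unknown "black-box" cost
x0 = np.zeros(5)
xT = zo_descent(f, x0, steps=5000, rng=np.random.default_rng(0))
```

Using only function evaluations, the iterate approaches the minimizer at x = 1: the shrinking smoothing radius removes the estimator's bias asymptotically, while the shrinking step size tames its variance.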


Bio: Xinran Zheng is a second-year PhD student in the ECE Department at UC San Diego working with Prof. Tara Javidi and Prof. Behrouz Touri. Her research interest lies in zeroth-order distributed optimization and its application in control, federated learning, etc. Prior to studying at UCSD, she received a BSc degree in Mathematics and Physics from Tsinghua University in 2020.

[2023.04.26]

A spectral bundle method for sparse semidefinite programming

Speaker: Hesam Mojtahedi,  [Slides]


Abstract: Semidefinite programs (SDPs) have found a wide range of applications in the field of control. When solving SDPs, it is important to take advantage of the inherent sparsity in order to improve scalability. We present a new spectral bundle algorithm that solves sparse SDPs without introducing additional variables. Using chordal decomposition, we replace a large positive semidefinite (PSD) constraint with a set of smaller coupled constraints. We then move the PSD constraints into the cost function using the exact penalty method, leading to an equivalent non-smooth penalized problem. In the resulting algorithm, subgradient information is incorporated to update a lower approximation function at each iteration. We further establish sublinear convergence rates in terms of objective value, primal feasibility, dual feasibility, and duality gap. In particular, under Slater's condition, the algorithm converges at a rate of O(1/ε^3), which improves to O(1/ε) when strict complementarity holds. The theoretical analysis is supported by our numerical experiments.

[2023.04.12]

Neural Operators for Provably Accelerated PDE Feedback Control

Speaker: Luke Bhan,  [Slides]


Abstract: In this presentation, we explore the first provable application of neural operators to feedback control. In particular, we focus on accelerating boundary control of PDEs via backstepping. The PDE backstepping design is a nonlinear infinite-dimensional mapping (operator) transforming the PDE with challenging system model functions (e.g., reaction or advection coefficients) into a known stable target system via the Volterra Operator. This mapping requires the solution of kernel gain functions in the form of a Goursat PDE which is numerically expensive to compute. To overcome this, we prove the existence of an arbitrarily accurate neural operator (DeepONet) approximator for the backstepping gain computation, which is trained offline, "once and for all," using a large enough sample set of PDE model functions. Then, we prove that controllers with the neural operator approximated gains are still stabilizing in a global exponential sense. We also extend this framework to approximate the full feedback law mapping, from plant parameter functions and state measurement functions to the control input, and achieve semiglobal practical stability. The elimination of real-time recomputation of gains is transformative for adaptive control of PDEs and gain scheduling control of nonlinear PDEs.


Bio: Luke is a Ph.D. student in the ECE department at UC San Diego working on the interaction of learning and control. His research interests lie in the areas of neural operators, learning-based control, and control of partial differential equations, all of which he is exploring under the guidance of his advisor, Prof. Yuanyuan Shi, with assistance from Prof. Miroslav Krstic. Prior to this, he completed an accelerated Bachelors/Masters program in Physics and Computer Science at Vanderbilt University.

Winter 2023

[2023.03.15]

Towards Rigorous Data-driven Control

Speaker: Ya-Chien Chang,  [Slides]


Abstract: The progress of artificial intelligence and reinforcement learning is leading to the development of novel control techniques. These technologies utilize data and simulation to solve complex problems that traditional methods cannot handle. However, applying learning-based control methods in real-world scenarios is challenging due to the lack of safety guarantees. Lyapunov functions are a commonly used tool for proving the stability of dynamical systems. Nonetheless, finding a Lyapunov function is not always possible, as there is no general form for it, and it requires satisfying Lyapunov conditions that depend on the known system dynamics.


During this presentation, I will discuss my research on the neural Lyapunov method. We introduce a general framework to construct Lyapunov functions and control policies simultaneously, simplifying the control Lyapunov function design process and achieving significantly larger attraction regions than current methods. Furthermore, I will elaborate on extending the neural Lyapunov method to enhance the stability of learned controllers for unknown systems. This method integrates neural Lyapunov techniques as an additional critic value into actor-critic reinforcement learning algorithms. Finally, I will demonstrate how this approach outperforms state-of-the-art policy optimization, highlighting its numerous advantages.


Bio: Ya-Chien is a fourth-year Ph.D. student in CSE at UC San Diego, advised by Prof. Sicun Gao. She received a master's degree in Applied Mathematics from National Tsing Hua University. Her research interests cover safe reinforcement learning, control theory, and optimization. Her current focus is on synthesizing stability certificates for data-driven control systems. Ya-Chien's work has been published in top conferences, and she has been honored with awards for excellence in research from the CSE department.

[2023.03.01]

Direct Policy Search for State-feedback Hinf Robust Control Synthesis

Speaker: Xingang Guo,  [Slides]


Abstract: Direct policy search has been widely applied in modern reinforcement learning and continuous control. However, the theoretical properties of direct policy search on nonsmooth robust control synthesis have not been fully understood. The optimal $\mathcal{H}_\infty$ control framework aims at designing a policy to minimize the closed-loop $\mathcal{H}_\infty$ norm, and is arguably the most fundamental robust control paradigm. The primary focus of this talk is on the convergence results for direct policy search methods on the state-feedback $\mathcal{H}_\infty$ design. We show that direct policy search is guaranteed to find the global solution of the robust $\mathcal{H}_\infty$ state-feedback control design problem, even though the resultant policy optimization problem is nonconvex and nonsmooth. In particular, we show that for this nonsmooth optimization problem, all Clarke stationary points are global minima. Next, we identify the coerciveness of the closed-loop $\mathcal{H}_\infty$ objective function, and prove that all the sublevel sets of the resultant policy search problem are compact. Based on these properties, we show that Goldstein's subgradient method and its implementable variants can be guaranteed to stay in the nonconvex feasible set and eventually find the global optimal solution of the $\mathcal{H}_\infty$ state-feedback synthesis problem. This work builds a new connection between nonconvex nonsmooth optimization theory and robust control, leading to an interesting global convergence result for direct policy search on optimal $\mathcal{H}_\infty$ synthesis.


Bio: Xingang Guo is a Ph.D. student in the Department of Electrical and Computer Engineering (ECE) and the Coordinated Science Laboratory (CSL) at the University of Illinois at Urbana-Champaign (UIUC), advised by Prof. Bin Hu. His research interests include control, optimization, machine learning, and their intersections. Previously, he obtained an M.S. degree in Electrical and Computer Engineering from King Abdullah University of Science and Technology (KAUST) in 2020, where his research focused on process control with applications in membrane-based water systems.

[2023.03.01]

Informativity in data-driven control

Speaker: Jaap Eising,  [Slides]


Abstract: For a myriad of reasons, data-driven analysis and control have recently attracted a lot of attention. Much of this attention has essentially been building on the well-developed field of system identification, taking the following blueprint to controller design: First, determine the only or "best" model within a given class that is compatible with the collected measurements. Then apply a known, model-based method to obtain the required control performance. Recently, a different approach was introduced, namely one of data informativity, that is, reasoning about the information contained in the measurements directly. A few questions central to this viewpoint are: What conditions on the data are necessary and/or sufficient to guarantee the existence of a stabilizing controller? Can we perform control tasks without measuring or reconstructing the state? What can be deduced from samples of continuous signals? In this talk, we will answer these questions using classical techniques and methods from fields such as system identification and robust control. 


Bio: Jaap Eising is a postdoctoral researcher in the Department of Mechanical and Aerospace Engineering at the University of California, San Diego. He received his Ph.D. from the University of Groningen in 2021, after obtaining his master's degree in Mathematics from the same university in 2017. His research interests include constrained linear systems, systems described by difference/differential inclusions, data-driven control, and geometric systems theory.