ORGANIZERS
Kanat Camlibel, Harry Trentelman, Henk van Waarde, and Jaap Eising (University of Groningen)
Correspondence: j.eising@rug.nl
SCOPE
A great deal of mainstream systems and control theory is based on the assumption that a mathematical model of the to-be-controlled system is known. An alternative to this model-based approach is to design feedback controllers using data collected from the to-be-controlled system. Data-driven approaches are gaining popularity as the growing complexity of engineering systems makes it difficult to obtain accurate mathematical models from, e.g., first principles.
This workshop aims to portray the state of the art in data-driven control. In order to provide significant coverage of the area, we organize a two-day workshop with 12 lectures that address diverse topics, from linear to nonlinear systems and from robust to predictive control. The diversity of the covered topics, speakers, and audience is expected to initiate cross-fertilization of ideas.
PROSPECTIVE AUDIENCE
The workshop targets a broad audience, ranging from graduate students and researchers looking for an introduction to a new and active area of research, to practitioners interested in data-driven design methods. The required background is basic familiarity with systems and control theory as well as optimization. As the lectures cover a variety of relevant and modern topics, the workshop provides an excellent overview of the state of the art in data-driven control.
PROGRAM 12 DECEMBER 2020
1:00PM - 1:05PM, UTC: Opening
Harry Trentelman (University of Groningen)
1:05PM - 1:40PM, UTC: The scenario approach as a tool for robust data-driven control
Marco Campi (University of Brescia)
Simone Garatti (Politecnico di Milano)
We consider a stochastic setup where partial knowledge of the dynamics of a control system is described by means of a probability distribution. During the operation of the system, new input-output measurements are acquired and the distribution is updated to take the new data into account. Along this process, one fundamental aspect is to capitalize on the new information while keeping the overall computational complexity of the procedure manageable. Separately, the `scenario approach' is a general methodology that has become popular in recent years for its ability to produce designs that are robust with respect to a whole probability distribution by using only a sample of draws from that distribution. In this talk, we aim to merge the above two worlds by adopting a particle description of the distribution used to describe the system dynamics. As time progresses, the particles return an updated sample of plant dynamics, which is used within the scenario technology to design and update a controller that converges towards optimal behavior as the distribution concentrates about a singleton, while preserving quantifiable levels of robustness under general operating conditions.
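As a loose illustration of the scenario idea (not the particle-based scheme of the talk, and with invented numbers): the sketch below tunes a scalar feedback gain to be robust against all sampled plant dynamics by solving one convex program over the scenarios; scenario theory then attaches a probabilistic guarantee for unseen draws.

    # Scenario-style design of a scalar gain k for the uncertain plant x+ = a*x + u,
    # u = -k*x: minimize the worst-case contraction |a - k| over sampled values of a.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    N = 200                                   # number of scenarios (draws of a)
    a_samples = rng.normal(0.9, 0.1, size=N)  # sampled plant dynamics

    k = cp.Variable()                         # feedback gain to design
    t = cp.Variable()                         # worst-case contraction over the scenarios
    constraints = [cp.abs(a - k) <= t for a in a_samples]
    constraints += [t <= 0.99]                # require contraction for every sampled plant
    cp.Problem(cp.Minimize(t), constraints).solve()
    print("gain k =", k.value, "worst-case |a - k| over scenarios =", t.value)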
1:40PM - 2:15PM, UTC: Data informativity: a new perspective on data-driven analysis and control
Kanat Camlibel (University of Groningen)
Henk van Waarde (University of Groningen)
Jaap Eising (University of Groningen)
Harry Trentelman (University of Groningen)
In this talk we study data-driven analysis and control problems from the perspective of `data informativity'. We collect data from an unknown dynamical system, assumed to be contained in a given model class. On the basis of these data, we want to assess system properties of, and design controllers for, the unknown system. Of course, this is only possible if the given data contain enough information. Our first contribution is to formalize what is meant by `enough information' in this context. In particular, we provide general definitions of the informativity of data for analysis and control. We then apply these definitions to a variety of analysis and design problems. For instance, in the case of input/state systems we characterize the information content required for the design of stabilizing and optimal controllers. In the same setting, we also derive new `data-driven Hautus tests' that enable controllability analysis directly from data. In the case of input/output data we study the data requirements for stabilization by dynamic measurement feedback. We also highlight several extensions, including analysis and design from noisy measurements.
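A small, hedged illustration of the `enough information' question in the noise-free input/state setting: the data determine (A, B) uniquely exactly when the stacked data matrix [X_-; U_-] has full row rank, in which case [A B] can be recovered by a pseudoinverse. A central message of the informativity framework is that control tasks such as stabilization may be solvable even when this identification test fails. The numbers below are invented.

    # Check whether noise-free input/state data pin down (A, B) uniquely.
    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[0.8, 0.5], [0.0, 1.1]])    # unknown "true" system, used only to simulate data
    B = np.array([[0.0], [1.0]])

    T = 8
    U = rng.standard_normal((1, T))
    X = np.zeros((2, T + 1))
    for t in range(T):
        X[:, t + 1] = A @ X[:, t] + B[:, 0] * U[0, t]
    Xp = X[:, 1:]                             # shifted state data X_+
    W = np.vstack([X[:, :T], U])              # stacked data [X_-; U_-]

    if np.linalg.matrix_rank(W) == W.shape[0]:            # informative for identification
        AB = Xp @ np.linalg.pinv(W)                       # unique [A  B] consistent with the data
        print("recovered A and B:\n", AB[:, :2], "\n", AB[:, 2:])
    else:
        print("data do not determine the system uniquely")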
2:15PM - 2:50PM, UTC: Learning control from low-complexity data: dealing with noise, safety and nonlinearities
Claudio De Persis (University of Groningen)
Pietro Tesi (University of Firenze)
We have recently introduced an approach to design control policies starting from low-complexity input-output data collected during off-line experiments. The approach reduces the design to the solution of data-dependent semidefinite programs, which provide a computationally attractive way to deal with the problem of learning control from data. In this talk we summarize our most recent efforts to tackle a few important problems that are central to data-driven control and reinforcement learning: the design of controllers that are optimal and safe in the presence of noisy data without known statistical properties, and that can handle nonlinear systems.
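A minimal sketch of a data-dependent semidefinite program of the kind alluded to above, restricted for brevity to noise-free input/state data and state-feedback stabilization; the LMI follows formulations from the recent literature on direct data-driven stabilization and is illustrative rather than a transcript of the talk.

    # Direct data-driven stabilization: search for Theta such that P = X_- Theta is
    # symmetric positive definite and [[P, X_+ Theta], [(X_+ Theta)', P]] > 0, then
    # read off the gain K = U_- Theta P^{-1} without ever identifying (A, B).
    import numpy as np
    import cvxpy as cp

    A_true = np.array([[1.1, 0.5], [0.0, 0.9]])   # unknown plant, used only to generate data
    B_true = np.array([[0.0], [1.0]])
    rng = np.random.default_rng(1)

    T = 10
    U = rng.standard_normal((1, T))               # exciting input sequence
    X = np.zeros((2, T + 1))
    for t in range(T):
        X[:, t + 1] = A_true @ X[:, t] + B_true[:, 0] * U[0, t]
    Xm, Xp = X[:, :T], X[:, 1:]                   # state data X_- and X_+

    Theta = cp.Variable((T, 2))
    M = cp.Variable((4, 4), symmetric=True)       # block matrix [[P, F], [F', P]]
    constraints = [M[:2, :2] == Xm @ Theta,       # P = X_- Theta
                   M[2:, 2:] == Xm @ Theta,
                   M[:2, 2:] == Xp @ Theta,       # F = X_+ Theta
                   M >> 1e-6 * np.eye(4)]
    cp.Problem(cp.Minimize(0), constraints).solve()

    P = Xm @ Theta.value
    K = U @ Theta.value @ np.linalg.inv(P)        # u = K x, built from data only
    print("closed-loop eigenvalue magnitudes:", np.abs(np.linalg.eigvals(A_true + B_true @ K)))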
2:50PM - 3:15PM, UTC: Break
3:15PM - 3:50PM, UTC: Convex optimization approaches to certified data-driven control
Mario Sznaier (Northeastern University)
Tianyu Dai (Northeastern University)
In this talk we will cover a recent framework for certified data-driven control of switched systems. Specifically, given a model structure and experimental data collected at different operating points, this framework allows for directly designing a controller that stabilizes a system that switches arbitrarily amongst all sub-systems that could have generated the observed data, without an explicit plant identification step. We will show that, through the use of duality, the problem of finding this controller, along with a Lyapunov function that certifies performance, can be recast into a polynomial optimization form and solved efficiently by exploiting an underlying sparsity. The effectiveness of the framework will be illustrated with several examples, including control of a quadcopter. We will finish the talk by exploring the prospects of extending these results to non-linear dynamics.
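For orientation only (this is not the talk's data-driven, duality-based program): the classical certificate behind stability under arbitrary switching is a common Lyapunov matrix P satisfying A_i' P A_i - P < 0 for every mode, which can be searched for via an LMI, as in the following sketch with invented mode matrices.

    # Common quadratic Lyapunov function for a switched linear system via an LMI.
    import numpy as np
    import cvxpy as cp

    A1 = np.array([[0.6, 0.3], [0.0, 0.5]])   # mode 1
    A2 = np.array([[0.5, 0.0], [0.4, 0.6]])   # mode 2

    P = cp.Variable((2, 2), symmetric=True)
    cons = [P >> np.eye(2)]                   # P positive definite (normalized)
    for Ai in (A1, A2):
        cons.append(Ai.T @ P @ Ai - P << -1e-6 * np.eye(2))   # decrease in every mode
    cp.Problem(cp.Minimize(cp.trace(P)), cons).solve()
    print("common Lyapunov matrix:\n", P.value)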
3:50PM - 4:25PM, UTC: Control synthesis through the lens of first order methods
Mehran Mesbahi (University of Washington)
In this talk, I first provide a quick overview of feedback control synthesis, which has historically been approached via parameterization of certificates for robustness and performance. I then consider how structured synthesis and lack of knowledge of the underlying dynamics complicate the design landscape. This is followed by delving into control synthesis via direct policy updates using first-order methods and the issues that had to be addressed once such a point of view was adopted; extensions to indefinite cost structures will also be discussed.
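A toy illustration of synthesis via direct policy updates: plain gradient descent on the infinite-horizon LQR cost, viewed as a function of a static feedback gain. The plant, step size, and the use of finite-difference gradients are illustrative choices, not the methods discussed in the talk.

    # Gradient descent on the LQR cost J(K), with u = -K x and cost evaluated
    # through a discrete Lyapunov equation; gradients are taken by finite differences.
    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    A = np.array([[0.9, 0.4], [0.0, 0.8]])
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.eye(1)

    def lqr_cost(K):
        Acl = A - B @ K
        if np.max(np.abs(np.linalg.eigvals(Acl))) >= 1.0:
            return np.inf                          # unstable gain: infinite cost
        P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
        return np.trace(P)                         # cost for identity initial-state covariance

    K = np.zeros((1, 2))                           # stabilizing initial policy
    J0 = lqr_cost(K)
    eps, step = 1e-6, 5e-3
    for it in range(300):
        grad = np.zeros_like(K)
        for i in range(K.shape[0]):
            for j in range(K.shape[1]):
                E = np.zeros_like(K)
                E[i, j] = eps
                grad[i, j] = (lqr_cost(K + E) - lqr_cost(K - E)) / (2 * eps)
        K -= step * grad                           # first-order policy update
    print("gain:", K, "cost: %.3f -> %.3f" % (J0, lqr_cost(K)))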
4:25PM - 5:00PM, UTC: One ounce of modeling is worth a pound of training: data-driven control for nonlinear systems
Paulo Tabuada (University of California, Los Angeles)
Lucas Fraile (University of California, Los Angeles)
Current learning-based techniques for the control of physical systems, such as reinforcement learning, require crunching large amounts of data for extended periods of time. In this talk we show how to obviate this hunger for data by judicious modeling. In particular, we will show how to control unknown nonlinear systems without prior data or training. Key to our approach is the re-interpretation of several results in control theory, such as the intelligent PIDs of Fliess and co-workers, feedback linearization, and adaptive control, as different examples of data-driven control. We illustrate the usefulness and applicability of the results via experiments and conclude by speculating about the right mix of model-based and data-driven design in the context of autonomous cyber-physical systems.
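A rough, heavily simplified caricature of the intelligent-PID idea in discrete time: an ultra-local model y' = F + alpha*u, with the lumped unknown term F re-estimated at every step from the latest measurements and then cancelled by the control. All constants and the unknown plant below are invented, and the talk's precise formulation may differ.

    # "Intelligent-P" style control of an unknown scalar plant using an ultra-local model.
    import numpy as np

    dt, alpha, kp = 0.01, 1.0, 5.0
    y_ref = 1.0                                    # constant setpoint
    y, y_prev, u_prev = 0.0, 0.0, 0.0
    for k in range(2000):                          # 20 seconds of simulation
        y_dot_meas = (y - y_prev) / dt             # crude numerical derivative
        F_hat = y_dot_meas - alpha * u_prev        # estimate the lumped unknown dynamics
        e = y_ref - y
        u = (-F_hat + kp * e) / alpha              # cancel F_hat, add proportional action
        y_prev = y
        y = y + dt * (-y + 0.5 * np.sin(y) + u)    # unknown plant (not used by the controller)
        u_prev = u
    print("tracking error after 20 s:", y_ref - y)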
PROGRAM 13 DECEMBER 2020
1:00PM - 1:05PM, UTC: Opening
Harry Trentelman (University of Groningen)
1:05PM - 1:40PM, UTC: Towards certifiable data-driven systems
Vishaal Krishnan (University of California, Riverside)
Fabio Pasqualetti (University of California, Riverside)
In this talk, we will present an overview of our efforts towards the analysis and design of data-driven systems with certified performance and robustness. Sensitivity to adversarial perturbations remains one of the main limitations of data-driven systems, which presents a hurdle to their deployment in safety-critical applications. Improving adversarial robustness of data-driven models requires adjusting their worst-case sensitivity, which is captured by their Lipschitz constant. Yet, a fundamental understanding of the limitations of this approach, as well as a general framework for training robust models, have remained critically lacking. To this end, our work contributes a comprehensive theory of Lipschitz-robust learning and a novel graph-based learning framework to train models that are provably robust to adversarial perturbations. As primary contributions of our work, (i) our analysis explains the existence of a fundamental tradeoff between performance and robustness in learning (as postulated also by other recent works), and (ii) our learning algorithms are designed to provably achieve these fundamental bounds on performance and robustness.
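For concreteness, the worst-case sensitivity mentioned above is the Lipschitz constant of the learned model; a quick (and generally loose) upper bound for a ReLU network is the product of the layers' spectral norms, since ReLU itself is 1-Lipschitz. The weights below are random placeholders.

    # Cheap upper bound on a ReLU network's Lipschitz constant.
    import numpy as np

    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((16, 4)),
               rng.standard_normal((16, 16)),
               rng.standard_normal((1, 16))]

    lipschitz_bound = 1.0
    for W in weights:
        lipschitz_bound *= np.linalg.norm(W, 2)   # spectral norm = largest singular value
    print("upper bound on the network's Lipschitz constant:", lipschitz_bound)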
1:40PM - 2:15PM, UTC: Online Learning of the Kalman Filter with Logarithmic Regret
Anastasios Tsiamis (University of Pennsylvania)
George Pappas (University of Pennsylvania)
We consider the problem of predicting observations generated online by an unknown, partially observed linear system driven by stochastic noise. For such systems the optimal predictor in the mean-square sense is the celebrated Kalman filter, which can be explicitly computed when the system model is known. When the system model is unknown, we have to learn how to predict observations online based on finite data, possibly suffering non-zero regret with respect to the Kalman filter's predictions. We show that it is possible to achieve a regret of the order of polylog(N) with high probability, where N is the number of observations collected. Our work is the first to provide logarithmic regret guarantees for the widely used Kalman filter. This is achieved using an online least-squares algorithm, which exploits the approximately linear relation between future and past observations. The regret analysis is based on the stability properties of the Kalman filter, recent statistical tools for the finite-sample analysis of system identification, and classical results on the analysis of least-squares algorithms for time series. The analysis also applies to prediction of the hidden state in the case of unknown noise statistics but known state-space basis. A fundamental technical contribution is that our bounds hold even for non-explosive systems, a class that includes marginally stable systems and for which online prediction under stochastic noise had been an open problem.
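A toy version of the prediction setup (not the talk's algorithm or its regret analysis): regress the next observation on the p most recent observations using all data seen so far, mirroring the approximately linear relation between future and past observations. The system, window length, and regularization below are invented.

    # Online least-squares prediction of the output of an unknown, partially observed system.
    import numpy as np

    rng = np.random.default_rng(0)
    a, p, N = 0.95, 5, 2000
    x, ys = 0.0, []
    for t in range(N):                             # unknown system: x+ = a x + w, y = x + v
        ys.append(x + 0.1 * rng.standard_normal())
        x = a * x + 0.1 * rng.standard_normal()
    ys = np.array(ys)

    errs = []
    for t in range(p, N):
        Phi = np.array([ys[s - p:s] for s in range(p, t)])     # all past length-p windows
        if len(Phi) >= p:
            theta = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(p), Phi.T @ ys[p:t])
            y_hat = ys[t - p:t] @ theta                        # online prediction of y_t
            errs.append((ys[t] - y_hat) ** 2)
    print("mean squared prediction error:", np.mean(errs))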
2:15PM - 2:50PM, UTC: Learning Control Barrier Functions from Data
Nikolai Matni (University of Pennsylvania)
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs). We consider known nonlinear control-affine dynamical systems and assume we have access to safe trajectories generated by an expert. We propose and analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable smoothness assumptions on the underlying dynamical system. Our approach is agnostic to the parameterization used to represent the CBF, assuming only that the Lipschitz constant of such functions can be efficiently bounded. Furthermore, if the CBF parameterization is convex, then under mild assumptions, so is our learning process. We also highlight how these methods can be extended to hybrid systems, allowing novel hybrid control barrier functions to be learned from data. We end with extensive numerical evaluations of our results. To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
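A stripped-down caricature of fitting a barrier-like function from labeled data: with h(x) linear in its parameters, requiring h >= gamma on expert (safe) samples and h <= -gamma on unsafe samples is a convex feasibility problem. The actual method of the talk additionally enforces conditions involving the dynamics and Lipschitz bounds, which are omitted in this sketch; all data below are invented.

    # Fit a quadratic candidate barrier function separating safe and unsafe samples.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    safe = rng.uniform(-0.5, 0.5, size=(100, 2))                        # placeholder expert states
    unsafe = rng.uniform(1.0, 2.0, size=(100, 2)) * rng.choice([-1, 1], size=(100, 2))

    def phi(X):                                                         # quadratic features
        return np.column_stack([np.ones(len(X)), X, X ** 2, X[:, [0]] * X[:, [1]]])

    theta, gamma = cp.Variable(phi(safe).shape[1]), 0.1
    prob = cp.Problem(cp.Minimize(cp.sum_squares(theta)),
                      [phi(safe) @ theta >= gamma,                      # h >= gamma on safe data
                       phi(unsafe) @ theta <= -gamma])                  # h <= -gamma on unsafe data
    prob.solve()
    print("fitted coefficients:", theta.value)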
2:50PM - 3:15PM, UTC: Break
3:15PM - 3:50PM, UTC: Regularized and distributionally robust data-enabled predictive control
Florian Dörfler (ETH Zürich)
We consider the problem of optimal and constrained control for unknown systems. A data-enabled predictive control (DeePC) algorithm is presented that computes optimal and safe control policies using real-time feedback, driving the unknown system along a desired trajectory while satisfying system constraints. Using a finite number of data samples from the unknown system, our proposed algorithm is grounded in insights from subspace identification and behavioral systems theory. In particular, we use raw, unprocessed data assembled in a Hankel or Page data matrix to predict and optimize over the future system behavior. We show that, in the case of deterministic linear time-invariant systems, the DeePC algorithm is equivalent to the widely adopted model predictive control (MPC), but it generally outperforms subsequent system identification and certainty-equivalence model-based control. To cope with stochasticity and nonlinearity, we propose regularizations of the constraints and objectives of the DeePC algorithm, e.g., promoting averaging and sparse selection of the data-matrix columns. Using techniques from distributionally robust stochastic optimization, we prove that these regularizations indeed robustify DeePC against corrupted data. We illustrate our results with experiments and simulations from aerial robotics, power electronics, and power systems.
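A compact sketch of the DeePC building blocks under simplifying assumptions (noise-free single-input single-output data, quadratic costs, no output constraints): stack the recorded trajectory into Hankel matrices and optimize over their column span, with a regularizer on the combination vector g as mentioned above. Numbers and weights are illustrative.

    # Minimal DeePC-style predictive control from a single recorded trajectory.
    import numpy as np
    import cvxpy as cp

    def hankel(w, L):
        """Stack length-L windows of the signal w as columns."""
        return np.column_stack([w[i:i + L] for i in range(len(w) - L + 1)])

    # offline: collect one exciting input/output trajectory from the (unknown) plant
    rng = np.random.default_rng(0)
    A = np.array([[0.9, 0.3], [0.0, 0.7]])
    B = np.array([[0.0], [1.0]])
    C = np.array([1.0, 0.0])
    T, Tini, Nf = 120, 4, 10
    ud = rng.standard_normal(T)
    x, yd = np.zeros(2), np.zeros(T)
    for t in range(T):
        yd[t] = C @ x
        x = A @ x + B[:, 0] * ud[t]

    L = Tini + Nf
    Hu, Hy = hankel(ud, L), hankel(yd, L)
    Up, Uf = Hu[:Tini, :], Hu[Tini:, :]
    Yp, Yf = Hy[:Tini, :], Hy[Tini:, :]

    # online: given the most recent Tini samples, plan Nf steps toward the reference r = 1
    u_ini, y_ini = np.zeros(Tini), np.zeros(Tini)      # system starts at rest
    g = cp.Variable(Hu.shape[1])
    u, y = Uf @ g, Yf @ g
    cost = cp.sum_squares(y - 1.0) + 0.1 * cp.sum_squares(u) + 1.0 * cp.norm1(g)   # regularized
    cp.Problem(cp.Minimize(cost), [Up @ g == u_ini, Yp @ g == y_ini]).solve()
    print("planned inputs:", (Uf @ g.value).round(3))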
3:50PM - 4:25PM, UTC: Data-driven model predictive control with stability and robustness guarantees
Matthias Müller (Leibniz University Hannover)
Julian Berberich (University of Stuttgart)
Johannes Köhler (University of Stuttgart)
Frank Allgöwer (University of Stuttgart)
Model predictive control (MPC) has become one of the most successful modern control concepts, mainly thanks to its ability to directly incorporate hard state and input constraints as well as a performance criterion into the controller design. Over the last decades, various MPC schemes for linear and nonlinear systems have been proposed that provide closed-loop stability, robustness, and performance guarantees. These schemes require a reasonably well identified model of the considered system. However, in some applications, obtaining such a model by (classical) system identification can be difficult, or the physical modelling process may be expensive. In such cases, MPC schemes that directly employ collected data for prediction are of high interest. In this talk, we discuss a first data-based MPC scheme for which rigorous closed-loop stability and robustness guarantees can be given. To this end, a trajectory-based system representation is used for prediction, which expresses all input/output trajectories of a (linear) system in terms of a single, sufficiently exciting, input/output trajectory. Based on this representation, we design data-based MPC schemes and derive closed-loop stability and robustness guarantees, where for the latter various connections between design parameters, data properties, and the resulting closed-loop behavior are revealed.
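The `single, sufficiently exciting, input/output trajectory' requirement is typically a persistency-of-excitation condition on the input: its Hankel matrix of a prescribed depth must have full row rank. A minimal numerical check follows; the depth L + n (prediction length L, system order n) reflects our reading of the standard condition and may differ from the talk's exact assumptions.

    # Check persistency of excitation of a scalar input sequence via a Hankel rank test.
    import numpy as np

    def pe_order(u, depth):
        """True if the depth-row Hankel matrix of the input u has full row rank."""
        H = np.column_stack([u[i:i + depth] for i in range(len(u) - depth + 1)])
        return np.linalg.matrix_rank(H) == depth

    rng = np.random.default_rng(0)
    u_rich = rng.standard_normal(200)          # random input: sufficiently exciting
    u_poor = np.sin(0.3 * np.arange(200))      # single sinusoid: not exciting enough
    L, n = 20, 4
    print(pe_order(u_rich, L + n))             # True
    print(pe_order(u_poor, L + n))             # False (rank is only 2)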
4:25PM - 5:00PM, UTC: Data-based receding horizon control of linear network systems
Ahmed Allibhoy (University of California San Diego)
Jorge Cortés (University of California San Diego)
With the growing complexity of engineering systems, data-based methods are becoming increasingly popular in control theory, particularly for systems where it is too difficult to develop models from first principles and parameter identification is impractical or costly. In the context of network systems, the use of data-driven methods raises specific challenges because agents only have access to information that can be measured locally, and must coordinate with one another to predict the network response and decide their control actions. This talk proposes a distributed data-based predictive control scheme to stabilize a network system described by linear dynamics. Instead of identifying the system, agents cooperate to predict the system evolution using a data-based representation from a single sample trajectory. We employ this representation to pose a network optimization problem which characterizes the system-wide objective. We show that the controller resulting from approximately solving this problem using a distributed optimization algorithm in a receding horizon manner is stabilizing. Our results are validated through numerical simulations on various network systems.