The workshop is planned to take place on June 25th, 2024, from 8:30 to 16:45, and the schedule is given below.
Morning Session (Module 1 - Matrix inequalities in NN control design)
Establishing sound stability conditions for NN-based control systems is fundamental for their safe and proper operation. Theoretical properties, such as incremental input-to-state stability, can be exploited both to certify the behavior of NNs and to design stabilizing closed-loop control systems. This first part therefore focuses on the development of stability conditions for NNs, which can be enforced (e.g., during NN training or controller design) through simple linear matrix inequalities (LMIs) for a rather general class of NNs. Global stability conditions for NN systems are presented first. Since global conditions may not hold for some NN configurations, LMI-based local stability conditions are also addressed; in this case, the theoretical conditions are obtained using a quadratic Lyapunov function and an adequate abstraction of the activation functions via a generalized sector condition, which also allows us to compute an inner approximation of the region of attraction of the feedback system around the equilibrium point.
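To give a concrete flavor of such conditions, the sketch below checks a global stability LMI for a toy Lurie-type system x+ = A x + B φ(C x), with the activation φ slope-restricted in the sector [0, 1]. The matrices, the sector bound, and the use of cvxpy are illustrative assumptions for this example, not the workshop's exact formulation.

```python
# Minimal sketch: certify global stability of x+ = A x + B phi(C x),
# phi componentwise in the sector [0, 1], via a quadratic Lyapunov
# function and an S-procedure with the sector condition.
import numpy as np
import cvxpy as cp

A = np.array([[0.6, 0.2], [-0.1, 0.5]])   # toy stable linear part
B = np.array([[0.1], [0.05]])              # input path of the nonlinearity
C = np.array([[1.0, 0.0]])                 # argument of the activation
n, m = A.shape[0], C.shape[0]

P = cp.Variable((n, n), symmetric=True)    # quadratic Lyapunov certificate
lam = cp.Variable(m, nonneg=True)          # sector multipliers (S-procedure)
L = cp.diag(lam)

# Lyapunov decrement combined with the sector condition w' L (C x - w) >= 0
M = cp.bmat([[A.T @ P @ A - P,     A.T @ P @ B + C.T @ L],
             [B.T @ P @ A + L @ C, B.T @ P @ B - 2 * L]])

eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n),
                   0.5 * (M + M.T) << -eps * np.eye(n + m)])  # symmetrized for the solver
prob.solve()
print("global stability certified:", prob.status == cp.OPTIMAL)
```

Feasibility of the LMI returns a Lyapunov matrix P certifying global stability; infeasibility is the cue to fall back on the local conditions and region-of-attraction estimates described above.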
In this session, the LMI-based stability conditions for NN-based systems described in the previous session are exploited for control design, possibly from data. The same general class of NNs is considered, now also assumed to be affected by measurement noise. We will show how to design NN-based feedback controllers that guarantee robust stability and the desired performance of the control system via LMI-based problems. To do so, we will, where appropriate, resort to set-membership identification techniques and to the virtual reference feedback tuning (VRFT) approach to impose performance requirements in a data-driven fashion.
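As a concrete illustration of the data-driven ingredient, the sketch below applies the VRFT idea to noise-free logged data from a toy first-order plant: a virtual reference is computed by inverting the reference model over the recorded output, and controller parameters are fitted by least squares. The plant, the reference model, and the PI controller class are assumptions made for the example.

```python
# Minimal sketch of virtual reference feedback tuning (VRFT) on logged data.
import numpy as np

rng = np.random.default_rng(0)

# Logged open-loop data from a plant that the designer treats as unknown
T = 400
u = rng.uniform(-1.0, 1.0, T)             # persistently exciting input
y = np.zeros(T)
for t in range(T - 1):                    # toy first-order plant
    y[t + 1] = 0.9 * y[t] + 0.2 * u[t]

# Desired closed-loop behavior: reference model M(z) = (1 - a) / (z - a)
a = 0.7
# Virtual reference: the r that would have produced the recorded y through M
r = (y[1:] - a * y[:-1]) / (1.0 - a)
e = r - y[:-1]                            # virtual tracking error

# Fit a discrete PI controller u_t = th1 * e_t + th2 * sum(e) by least squares
E = np.column_stack([e, np.cumsum(e)])
theta, *_ = np.linalg.lstsq(E, u[:-1], rcond=None)
print("identified PI gains:", theta)
```

Because the PI class happens to contain the ideal controller for this plant and reference model, the least-squares fit recovers it exactly in the noise-free case; with noisy data, this is where the set-membership machinery enters.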
Finally, we show how event-triggering mechanisms (ETMs), based on local sector conditions related to the NN activation functions, can reduce the computational cost of evaluating the NN controller. Such a mechanism avoids redundant computations by updating only a portion of the layers instead of periodically evaluating the complete neural network.
As in the previous sessions, sufficient matrix inequality conditions are provided to design the parameters of the event-triggering mechanism and to compute an inner approximation of the region of attraction of the feedback system around the equilibrium point. The theoretical conditions are again obtained using a quadratic Lyapunov function and an adequate abstraction of the activation functions via a generalized sector condition, which is used to decide whether the outputs of the layers should be transmitted through the network. Convex optimization procedures can be associated with the theoretical conditions to maximize the approximation of the region of attraction or to minimize the number of updates.
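The sketch below shows the mechanism in its simplest form: each layer caches its last transmitted output and recomputes it only when its input has drifted beyond a threshold. The constant thresholds here are placeholders for the ones the matrix-inequality design would provide.

```python
# Minimal sketch of a layer-wise event-triggering mechanism: a layer is
# re-evaluated only when its input changes enough since the last update.
import numpy as np

class ETMLayer:
    def __init__(self, W, b, eps):
        self.W, self.b, self.eps = W, b, eps
        self.last_in, self.last_out = None, None
        self.updates = 0

    def __call__(self, x):
        # trigger: recompute only if the input drifted beyond the threshold
        if self.last_in is None or np.linalg.norm(x - self.last_in) > self.eps:
            self.last_in = x.copy()
            self.last_out = np.tanh(self.W @ x + self.b)  # sector-bounded activation
            self.updates += 1
        return self.last_out                              # otherwise reuse the cache

rng = np.random.default_rng(1)
layers = [ETMLayer(rng.standard_normal((4, 4)), np.zeros(4), eps=0.05)
          for _ in range(3)]

x = rng.standard_normal(4)
for k in range(100):                          # closed-loop-like sequence of queries
    x = 0.99 * x + 0.01 * rng.standard_normal(4)   # slowly drifting state
    h = x
    for layer in layers:
        h = layer(h)

print("updates per layer over 100 steps:", [l.updates for l in layers])
```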
Afternoon Session (Module 2 - Unconstrained optimization approaches for stabilizing NN controllers)
In this first session, we delve into the control of stable (or pre-stabilized) systems and explore how NN policies can enhance system performance without jeopardizing stability during and after the learning phase. When the system is linear and the cost is convex, the Internal Model Control (IMC) principle, together with its variations based on Youla, disturbance-feedback, and system-level synthesis parametrizations, offers effective solutions based on efficient convex programming. Beyond the linear quadratic case, a globally optimal solution cannot, in general, be found in a tractable way. We characterize a parametrization of all and only the stable closed-loop maps compatible with a given time-varying nonlinear system in terms of one stable operator to be freely designed. Based on this result, we discuss how to guarantee closed-loop stability during and after parameter optimization in practice, without requiring any explicit constraints to be satisfied and thus allowing for efficient unconstrained back-propagation. The main methodological results will be illustrated through simulations of cooperative robotic systems.
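A minimal sketch of this idea for a toy linear plant, in PyTorch: the free operator Q is rescaled so it is stable for every parameter value, the internal-model structure then keeps the closed loop stable throughout training, and Q is optimized by plain unconstrained back-propagation. The plant, the cost, and the contractive form of Q are illustrative assumptions.

```python
# Minimal sketch of IMC-style design with a free stable parameter Q,
# trained by unconstrained back-propagation.
import torch

torch.manual_seed(0)
a_p, b_p = 0.8, 1.0                      # toy stable plant x+ = a_p x + b_p u, y = x

class StableQ(torch.nn.Module):
    """Linear operator rescaled to be stable for *any* parameter value."""
    def __init__(self, nx=4):
        super().__init__()
        self.W = torch.nn.Parameter(0.1 * torch.randn(nx, nx))
        self.B = torch.nn.Parameter(0.1 * torch.randn(nx, 1))
        self.C = torch.nn.Parameter(0.1 * torch.randn(1, nx))

    def A(self):
        # spectral norm kept below 0.95, however W evolves during training
        s = torch.linalg.matrix_norm(self.W, 2)
        return 0.95 * self.W / torch.clamp(s, min=0.95)

Q = StableQ()
opt = torch.optim.Adam(Q.parameters(), lr=5e-3)
T, r = 60, 1.0                           # horizon and step reference

for epoch in range(300):
    A = Q.A()
    x_p = torch.zeros(())                # plant state
    x_m = torch.zeros(())                # internal model state (perfect copy)
    x_q = torch.zeros(Q.B.shape[0], 1)   # controller (Q) state
    loss = torch.zeros(())
    for t in range(T):
        d = 0.3 if t >= 30 else 0.0      # output step disturbance to reject
        y = x_p + d
        d_hat = y - x_m                  # IMC: model mismatch recovers d
        u = (Q.C @ x_q).squeeze()
        x_q = A @ x_q + Q.B @ torch.reshape(r - d_hat, (1, 1))
        x_p = a_p * x_p + b_p * u
        x_m = a_p * x_m + b_p * u
        loss = loss + (r - y) ** 2
    opt.zero_grad(); loss.backward(); opt.step()

print("final tracking cost:", float(loss))
```

With a perfect internal model, the loop reduces to a cascade of stable blocks, so stability holds for every Q along the whole optimization trajectory; training only shapes performance.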
In this second session, we will introduce a new approach to building neural networks and nonlinear dynamical models with built-in guarantees of stability, robustness, and other certified behavioral properties. We will trace the connections from convex parameterizations in robust control to so-called "direct" parameterizations, i.e., smooth and unconstrained parameterizations of all models that satisfy prescribed conditions. These direct parameterizations enable the learning of robust static and dynamic models via simple first-order methods, without any auxiliary constraints or projections. We will explore some applications in certifiably robust image classification, physics-informed learning of contracting nonlinear observers, and robust reinforcement learning for nonlinear partially observed systems.
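The sketch below illustrates the direct-parameterization idea with a construction deliberately simpler than the ones covered in the session: free weights are smoothly rescaled so that every parameter value yields a layer with a certified Lipschitz bound, and the model can then be trained with ordinary first-order methods, with no projections or auxiliary constraints.

```python
# Minimal sketch of a "direct" parameterization: free parameters map
# smoothly onto the set of models satisfying the prescribed property
# (here, a guaranteed Lipschitz bound per layer).
import torch

class LipschitzLinear(torch.nn.Module):
    def __init__(self, n_in, n_out, gamma=1.0):
        super().__init__()
        self.V = torch.nn.Parameter(torch.randn(n_out, n_in) / n_in ** 0.5)
        self.b = torch.nn.Parameter(torch.zeros(n_out))
        self.gamma = gamma                      # certified Lipschitz constant

    def forward(self, x):
        # rescale the free matrix so its spectral norm never exceeds gamma,
        # for every value of V: the certificate needs no constraint at all
        s = torch.linalg.matrix_norm(self.V, 2)
        W = self.gamma * self.V / torch.clamp(s, min=self.gamma)
        return x @ W.T + self.b

# Composing 1-Lipschitz layers with 1-Lipschitz activations keeps the whole
# network 1-Lipschitz for any parameter value; train with plain SGD/Adam.
net = torch.nn.Sequential(LipschitzLinear(8, 16), torch.nn.ReLU(),
                          LipschitzLinear(16, 1))
x = torch.randn(32, 8)
print(net(x).shape)   # torch.Size([32, 1])
```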
Interconnected systems are composed of multiple subsystems interacting over physical or cyber coupling channels and are found across diverse engineering applications. This calls for the use of distributed controllers, where local regulators associated with individual subsystems coordinate their actions through a communication network. In this third session, our objective is to provide an overview of strategies for designing distributed stability-preserving NN policies that specifically comply with the network topology. First, we leverage the compositional properties of port-Hamiltonian (pH) systems to characterize deep Hamiltonian control policies with built-in closed-loop stability guarantees, irrespective of the interconnection topology and the chosen NN parameters.
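As a heavily simplified illustration of stability that holds for any NN parameters, the toy sketch below builds a positive-definite learned energy function and a policy that injects damping along its gradient; for the single-integrator dynamics used here, the energy decreases along the closed loop no matter how the weights are chosen or trained. This gradient-flow construction is an illustrative stand-in for the session's port-Hamiltonian parameterization, not its actual form.

```python
# Minimal sketch: an energy-based policy whose closed loop dissipates a
# learned, positive-definite energy for every choice of NN weights.
import torch

class EnergyPolicy(torch.nn.Module):
    def __init__(self, nx=2):
        super().__init__()
        self.g = torch.nn.Sequential(torch.nn.Linear(nx, 16), torch.nn.Tanh(),
                                     torch.nn.Linear(16, 16))

    def H(self, x):
        # positive definite by construction: H(0) = 0, H(x) > 0 otherwise,
        # for every value of the weights in g
        z = self.g(x) - self.g(torch.zeros_like(x))
        return 0.5 * (z * z).sum(-1) + 0.05 * (x * x).sum(-1)

    def forward(self, x):
        # damping injection u = -dH/dx: along the toy integrator x_dot = u,
        # dH/dt = -|dH/dx|^2 <= 0 regardless of the weights
        x = x.detach().requires_grad_(True)
        (gradH,) = torch.autograd.grad(self.H(x).sum(), x)
        return -gradH

policy = EnergyPolicy()
x = torch.tensor([[1.5, -1.0]])
for _ in range(200):                 # forward-Euler rollout of x_dot = u
    x = x + 0.05 * policy(x)
print("state after rollout:", x)     # decays toward the equilibrium
```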
Next, we introduce a novel parametrization of distributed neural network operators that incorporates prior knowledge of the interconnection topology and stability directly into the operator. This parametrization is "free" in the sense that, for any given parameter value, the resulting distributed operator preserves the stability of the interconnection. This freedom allows for unconstrained training, enabling the use of highly nonlinear families of recursive NNs. The approach facilitates efficient learning of neural control policies over such distributed operators while preserving crucial stability properties and directly embedding the distributed structure of the system into the controller.
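A minimal sketch of such a free parametrization: the operator's weight matrix is masked to match the interconnection topology and rescaled into a contraction, so stability of the interconnected rollout holds for every parameter value and training can proceed unconstrained. The topology, dimensions, and contraction-based construction are illustrative assumptions.

```python
# Minimal sketch of a "free" parametrization of a distributed operator:
# topology is enforced by a block mask, stability by a rescaling that
# makes the state map contracting for any parameter value.
import torch

adj = torch.tensor([[1, 1, 0],       # subsystem 1 talks to 2
                    [1, 1, 1],       # subsystem 2 talks to 1 and 3
                    [0, 1, 1]], dtype=torch.float32)
nx = 2                               # local state dimension per subsystem
mask = torch.kron(adj, torch.ones(nx, nx))    # block sparsity pattern

class DistributedOp(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.Wfree = torch.nn.Parameter(0.1 * torch.randn(3 * nx, 3 * nx))
        self.B = torch.nn.Parameter(0.1 * torch.randn(3 * nx, 3))

    def W(self):
        Wm = self.Wfree * mask                        # respect the topology
        s = torch.linalg.matrix_norm(Wm, 2)
        return 0.9 * Wm / torch.clamp(s, min=0.9)     # contraction for any Wfree

    def forward(self, x, u):
        # tanh is 1-Lipschitz, so the masked, rescaled map contracts in x
        return torch.tanh(self.W() @ x + self.B @ u)

op = DistributedOp()
x, u = torch.zeros(3 * nx), torch.zeros(3)
for t in range(20):
    x = op(x, u)                     # stable rollout by construction
print(x)
```

Because the certificate is baked into the map from free parameters to the operator, any gradient step on Wfree stays inside the stable set, which is exactly what enables unconstrained training of these distributed policies.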