The full-day workshop is organized into a morning and an afternoon session, separated by a lunch break. The morning session starts with a general introduction and an overview of the challenges and opportunities for the control community. Afterwards, contributing speakers present special topics and applications of physics-informed learning, along with a series of exciting research questions.
Schedule
08:45 - 09:30
Thomas Beckers (Vanderbilt University)
Reliable models of dynamical systems are essential for tasks such as failure detection, design optimization, and the implementation of safe control strategies. However, developing first-principle models for complex systems is often time-consuming and demands significant expert knowledge. While machine learning offers an alternative, learned models frequently lack trustworthiness, generalizability, and physical consistency, making them ill-suited for safety-critical control applications. To address this, a promising strategy is to decompose complex systems into smaller, more manageable subsystems. Yet, this introduces new challenges: how to interconnect ODE and PDE subsystems while preserving physical validity, how to quantify and propagate uncertainty, and how to design safe control algorithms for the overall system.
In this talk, I will present our recent work on data-driven port-Hamiltonian systems (PHS) for compositional and physically consistent modeling of complex dynamics involving both ODEs and PDEs. We employ machine learning methods with built-in uncertainty quantification to learn the Hamiltonian function of each subsystem. Unlike many physics-informed learning approaches that enforce physical constraints through penalty terms, our models are physically consistent by design. This structure naturally supports composability: physical consistency is preserved under interconnection of subsystems. Finally, I will discuss how data-driven port-Hamiltonian systems enable robust and safe learning-based control, making them a promising foundation for trustworthy and scalable modeling of complex physical systems.
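To make the port-Hamiltonian structure concrete, the sketch below simulates a toy mass-spring system in the standard PHS form dx/dt = (J − R)∇H(x). This is a minimal illustrative example with hypothetical parameter values, not the speaker's data-driven construction; in the talk's setting the Hamiltonian H would be learned with uncertainty quantification rather than written down.

```python
import numpy as np

# Port-Hamiltonian form: dx/dt = (J - R) grad_H(x).
# Illustrative mass-spring toy model (hypothetical values, not the talk's method).
# State x = [q, p]; Hamiltonian H = p^2/(2m) + k q^2/2.
m, k = 1.0, 2.0

def grad_H(x):
    q, p = x
    return np.array([k * q, p / m])          # [dH/dq, dH/dp]

J = np.array([[0.0, 1.0], [-1.0, 0.0]])      # skew-symmetric interconnection
R = np.array([[0.0, 0.0], [0.0, 0.1]])       # positive semidefinite dissipation

def H(x):
    return 0.5 * k * x[0]**2 + x[1]**2 / (2 * m)

x = np.array([1.0, 0.0])
H0 = H(x)
for _ in range(5000):                        # explicit Euler, dt = 1e-3
    x = x + 1e-3 * (J - R) @ grad_H(x)
H1 = H(x)

# With R positive semidefinite, the energy can only decrease along trajectories,
# which is the physical-consistency property preserved under interconnection.
assert H1 < H0
```

The key point is that physical consistency (energy balance) follows from the algebraic structure of J and R by design, rather than from a penalty term in a loss function.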
Coffee Break 09:30 - 10:00
10:00 - 10:45
Sandra Hirche (Technical University of Munich)
The formulation of learning frameworks consistent with the relevant physical laws and mathematical models holds great promise for learning-based control of uncertain systems, improving data efficiency and reliability via their physical integrity. Classical parametric identification techniques also profit from exploiting physical priors, but they are limited to dynamical systems of low complexity since they require linearity in the parameters. Thus, developing reliable yet tractable models for increasingly uncertain physical systems remains a crucial open problem.
This talk will introduce a framework for physics-informed learning and control with Gaussian Processes (GPs). In particular, we introduce the concept of physically consistent GPs for data-driven modeling of uncertain Lagrangian systems, which constrain the function space according to the energy components of the Lagrangian and the differential equation structure, analytically guaranteeing properties such as energy conservation and quadratic form. Based on the learned model, we adopt an uncertainty-aware structure-exploiting control framework and introduce a novel data-driven momentum observer. We demonstrate and compare performance in simulations and robotics experiments.
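The physically consistent GPs in the talk build on standard GP regression with structured kernels. The sketch below shows only the generic GP posterior computation that such methods start from (an unconstrained RBF kernel on synthetic data); the talk's contribution is precisely the structured, Lagrangian-derived kernel that this plain example does not include.

```python
import numpy as np

# Generic GP regression sketch (illustrative baseline only; the talk's
# physically consistent GPs replace the RBF kernel with a structured one).
def rbf(A, B, ell=1.0, sf=1.0):
    d = A[:, None] - B[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell)**2)

rng = np.random.default_rng(0)
X = np.linspace(0.0, 2 * np.pi, 20)
y = np.sin(X) + 0.05 * rng.standard_normal(20)   # noisy observations
Xs = np.array([np.pi / 2])                       # test input

K = rbf(X, X) + 1e-3 * np.eye(20)                # K(X,X) + noise variance
Ks = rbf(Xs, X)
alpha = np.linalg.solve(K, y)
mean = Ks @ alpha                                          # posterior mean
var = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)          # posterior variance

assert abs(mean[0] - 1.0) < 0.2    # close to sin(pi/2) = 1
assert var[0, 0] > -1e-8           # variance nonnegative up to round-off
```

The built-in posterior variance is what makes GPs attractive for the talk's uncertainty-aware control: the controller can exploit the model's own confidence estimate.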
10:45 - 11:30
Rolf Findeisen (Technical University of Darmstadt)
Optimization-based control, and in particular Model Predictive Control (MPC), provides a flexible framework to handle constraints, optimize performance, and account for system dynamics. Yet, its success hinges on the availability of accurate and structured models. This talk explores how physics-informed machine learning can support and enhance MPC by embedding physical knowledge, structure, and constraints into learning-based models—such as Gaussian processes and neural networks. We highlight how such models can be used not only to capture system dynamics, but also to represent disturbances, reference trajectories, and cost functions in a data-efficient and physically consistent manner. In the second part of the talk, we show how these physics-constrained models can be tightly integrated within the MPC framework to preserve stability and safety guarantees. The resulting approach retains the rigor and structure of model-based control while leveraging the expressiveness and adaptability of machine learning.
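As background for how a model enters MPC, the sketch below implements a condensed, unconstrained receding-horizon controller for a double integrator with numpy. It is a textbook baseline with hypothetical weights, deliberately omitting the talk's learned GP/NN components and constraint handling.

```python
import numpy as np

# Condensed unconstrained MPC sketch for a discrete-time double integrator
# (illustrative baseline; the talk embeds learned, physics-constrained models).
A = np.array([[1.0, 0.1], [0.0, 1.0]])    # dt = 0.1
B = np.array([[0.005], [0.1]])
N = 20                                     # prediction horizon

# Stacked prediction: X = Phi x0 + Gamma U
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((2 * N, N))
for k in range(N):
    for j in range(k + 1):
        Gamma[2*k:2*k+2, j:j+1] = np.linalg.matrix_power(A, k - j) @ B

Q = np.eye(2 * N)                          # state cost over the horizon
R = 0.1 * np.eye(N)                        # input cost

def mpc_input(x0):
    # Minimize ||Phi x0 + Gamma U||_Q^2 + ||U||_R^2; apply the first input.
    H = Gamma.T @ Q @ Gamma + R
    U = np.linalg.solve(H, -Gamma.T @ Q @ Phi @ x0)
    return U[0]

x = np.array([1.0, 0.0])
for _ in range(100):                       # receding-horizon simulation
    x = A @ x + B.flatten() * mpc_input(x)

assert np.linalg.norm(x) < 0.5             # state regulated toward the origin
```

In the talk's setting, the fixed (A, B) prediction above is where a physics-constrained learned model would be substituted, while the receding-horizon structure and its guarantees are retained.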
11:30 - 12:15
Sivaranjani Seetharaman (Purdue University)
Deep learning-based dynamical system models, such as neural ordinary differential equations (neural ODEs) and physics-informed neural networks, have recently gained attention for capturing the dynamical behavior of nonlinear systems. In particular, such models can capture nonlinear dynamical behavior well beyond the 'local' region in the vicinity of the equilibrium that is captured by linear models, allowing us to expand the validity and usefulness of our control designs. However, when identifying models for control, it is typically not sufficient to simply obtain a model that approximates the dynamical behavior of the system. Rather, we would ideally like to preserve essential system properties such as stability in the identified models. One such control-relevant system property that is particularly useful is dissipativity, which provides a general framework to guarantee several crucial properties like L2 stability, passivity, conicity, and sector-boundedness, and can facilitate elegant distributed and compositional control designs in large-scale systems. Therefore, it is particularly attractive to learn neural dynamical models that capture relevant system dynamics while preserving the dissipativity property in the model. In general, imposing dissipativity constraints during neural network training is a hard problem for which no known techniques exist. In this talk, we present a two-stage framework for learning dissipative neural dynamical models, including neural ODEs and neural Koopman models. First, we learn an unconstrained neural dynamical model that closely approximates the system dynamics. Next, we derive sufficient conditions to perturb the weights of the neural dynamical model to ensure dissipativity, followed by perturbation of the biases to retain the fit of the model to the trajectories of the nonlinear system.
We show that these two perturbation problems can be solved independently to obtain a neural dynamical model that is guaranteed to be dissipative while closely approximating the nonlinear system. We finally demonstrate some applications of such dissipative models including compositional control design, compositional Lipschitz certificates in the newly developed ECLipsE framework, and applications in power grids.
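The "fit first, then minimally perturb the weights to certify a property" idea can be illustrated on a linear toy model, where the certificate is simply a spectral-norm bound ||A||2 < 1 (a sufficient condition for discrete-time stability and an L2-type dissipation inequality). This is an illustrative analogue with made-up data, not the talk's sufficient conditions for neural network weights.

```python
import numpy as np

# Two-stage "fit, then perturb for a certificate" sketch on a linear toy model.
# Certificate here: ||A||_2 < 1 (illustrative analogue of perturbing NN weights
# to enforce dissipativity; not the talk's actual conditions).
rng = np.random.default_rng(1)

# Stage 1: least-squares fit of x_{k+1} = A x_k from noisy trajectory data.
A_true = np.array([[0.9, 0.2], [-0.1, 0.8]])
X = rng.standard_normal((2, 200))
Y = A_true @ X + 0.01 * rng.standard_normal((2, 200))
A_fit = Y @ np.linalg.pinv(X)

# Stage 2: minimal perturbation (uniform scaling) so the certificate holds.
s = np.linalg.norm(A_fit, 2)               # largest singular value
A_cert = A_fit if s < 1.0 else A_fit * (0.99 / s)

assert np.linalg.norm(A_cert, 2) < 1.0                 # property certified
assert np.linalg.norm(A_cert - A_fit, 'fro') < 0.2     # fit largely retained
```

The two assertions mirror the talk's two perturbation problems: one guarantees the system property, the other bounds the loss of fit to the data.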
Lunch Break 12:15 - 01:30
01:30 - 02:15
Yuanyuan Shi (University of California San Diego)
In this talk, we present a novel set of tools and methodologies on physics-informed Neural Operator Control (NOC) for ODE and PDE governed systems. Specifically, we will present NOC for predictor feedback in nonlinear delay systems, and NOC for PDE backstepping control.
The first part of the talk is about NOC for predictor feedback in nonlinear delay systems. Predictor feedback is effective for delay compensation, yet a critical challenge lies in the efficient computation of the predictor operator. We introduce NOC for approximating the nonlinear predictor mapping and prove semiglobal practical stability (dependent on the learning error) of the proposed NOC predictor feedback via backstepping transformation.
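To show what the predictor operator computes, the sketch below implements classical predictor feedback for a scalar linear system with input delay D, where the predictor P(t) = e^{aD} x(t) + ∫₀ᴰ e^{a(D−s)} b u(t−D+s) ds forecasts x(t+D) so that u = −kP cancels the delay. This is the textbook linear case with hypothetical gains; the talk's contribution is approximating this mapping with a neural operator for nonlinear systems.

```python
import numpy as np

# Predictor feedback for dx/dt = a x + b u(t - D) (scalar, linear toy case;
# the talk approximates the nonlinear analogue of P with a neural operator).
a, b, D, k, dt = 1.0, 1.0, 0.5, 2.0, 0.001
nD = int(D / dt)
u_buf = np.zeros(nD)                     # past inputs over [t - D, t)
s = np.arange(nD) * dt                   # integration grid for the convolution

x = 1.0
for _ in range(int(10.0 / dt)):
    # Predictor: P(t) = e^{aD} x + integral of e^{a(D-s)} b u(t-D+s) ds.
    P = np.exp(a * D) * x + dt * np.sum(np.exp(a * (D - s)) * b * u_buf)
    u = -k * P                           # feedback on the *predicted* state
    x = x + dt * (a * x + b * u_buf[0])  # plant sees the D-delayed input
    u_buf = np.append(u_buf[1:], u)      # shift the delay line

# Without compensation, u(t-D) = -k x(t-D) can destabilize this unstable plant;
# with the predictor, the loop behaves like the delay-free design.
assert abs(x) < 0.1
```

The computational burden is visible even here: every control update re-evaluates the convolution integral over the delay window, which is the cost the neural-operator approximation targets.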
The second part of the talk is about NOC for PDE control. Model-based methods such as PDE backstepping offer provable guarantees for control but are often computationally prohibitive for real-time implementation. We propose a NOC framework that approximates the mapping from functional coefficients to control gains with desired accuracy. Using neural operator-approximated backstepping gains, we show that our method can accelerate PDE control by up to three orders of magnitude while retaining stability guarantees.
If time permits, we will also talk about physics-informed NOC for generalizable motion planning in highly dynamic environments. We propose to encode the obstacle geometries as cost functions and produce fast approximations of the value function for motion planning, where the value function is governed by the Eikonal PDE.
02:15 - 03:00
Ali Mesbah (UC Berkeley)
Making optimal decisions under uncertainty is a shared problem among distinct fields. While optimal control is commonly studied in the framework of dynamic programming, it is approached with differing perspectives of the Bellman optimality condition. In one perspective, the Bellman equation is used to derive a global optimality condition useful for iterative learning of control policies through interactions with an environment. Alternatively, the Bellman equation is also widely adopted to derive tractable optimization-based control policies that satisfy a local notion of optimality. By leveraging ideas from the two perspectives, we present a local-global paradigm for optimal control suited for learning interpretable local decision makers that approximately satisfy the global Bellman equation. The benefits and practical complications in local-global learning are discussed. These aspects are exemplified through case studies, which give an overview of two distinct strategies for unifying reinforcement learning and model predictive control. We discuss the challenges and trade-offs in these local-global strategies, towards highlighting future research opportunities for safe and optimal decision-making under uncertainty.
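The "global" perspective on the Bellman equation can be made concrete with value iteration on a tiny Markov decision process: repeatedly applying the Bellman optimality backup V(s) ← maxₐ [r(s,a) + γ E V(s′)] until the global fixed point is reached. The MDP below is a hypothetical three-state example for illustration; MPC, by contrast, solves a finite-horizon problem online for a local notion of optimality.

```python
import numpy as np

# Value iteration: the "global" Bellman fixed point on a tiny, made-up MDP
# (3 states, 2 actions). P[a] is the transition matrix, r[a] the reward.
P = np.array([[[0.9, 0.1, 0.0],
               [0.0, 0.9, 0.1],
               [0.0, 0.0, 1.0]],
              [[0.1, 0.9, 0.0],
               [0.1, 0.0, 0.9],
               [0.0, 0.1, 0.9]]])
r = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])          # reward 1 for occupying state 2
gamma = 0.9

V = np.zeros(3)
for _ in range(500):
    # Bellman optimality backup: V(s) = max_a [ r(s,a) + gamma * E V(s') ]
    V = np.max(r + gamma * (P @ V), axis=0)

policy = np.argmax(r + gamma * (P @ V), axis=0)

assert V[2] > V[0]        # states nearer the reward are worth more
assert policy.shape == (3,)
```

A local-global scheme in the talk's sense would instead learn an interpretable local controller (e.g., an MPC policy) whose closed-loop behavior approximately satisfies this global fixed-point condition.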
Coffee Break 03:00 - 03:30
03:30 - 04:15
Ján Drgoňa (Johns Hopkins University)
This talk presents a scientific machine learning (SciML) perspective on modeling, optimization, and control. Specifically, we will discuss the opportunity to develop a unified SciML framework for modeling dynamical systems, learning to optimize, and learning to control. We demonstrate the application of these emerging SciML methods in a range of engineering case studies, including building control and power systems optimization. Furthermore, we will introduce the NeuroMANCER open-source library, which facilitates the implementation and prototyping of diverse SciML methods for a broad range of application problems. The library will be introduced via interactive coding examples.
04:15 - 05:00
05:00 - 05:15