July 28, 2025
Hybrid numerical and machine learning methods that combine data-driven modelling with prior physical knowledge have emerged as a powerful paradigm in computational mathematics. By leveraging symmetries, conservation laws, partial differential equation models, and other geometric structures, structure-preserving approaches enable more efficient and qualitatively accurate data-driven simulations of physical phenomena. Applications of this approach to operator learning are at the forefront of this progress and hold significant potential for enhancing the reliability and accuracy of machine learning-based simulations, making them more robust for decision-making in scientific and engineering applications. This symposium will bring together leading and early-career researchers working on structure-preserving approaches to operator learning to share recent advances in the theoretical foundations of these hybrid methods and to explore their applications to the simulation of large-scale physical phenomena.
08:00-08:25 Zachary G. Nicolaou (University of Washington)
Signature of Glassy Dynamics in Dynamic Mode Decompositions
Glasses are traditionally characterized by their rugged landscape of disordered low-energy states and their slow relaxation towards thermodynamic equilibrium. Far from equilibrium, dynamical forms of glassy behavior with anomalous algebraic relaxation have also been noted, e.g., in networks of coupled oscillators. Due to their disordered and high-dimensional nature, such systems have been difficult to study analytically in the past. Here, we show that the gap between oscillatory and decaying modes in the Koopman spectrum vanishes in systems exhibiting algebraic relaxation. The dynamic mode decomposition, which is a data-driven spectral computation that approximates the Koopman spectrum, thus provides a model-agnostic signature for detecting and analyzing glassy dynamics. We demonstrate the utility of our approach through both a minimal example of one-dimensional ODEs and a high-dimensional example of coupled oscillators.
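As a concrete illustration of the diagnostic, here is a minimal sketch of exact DMD (our illustration, not the speakers' code; the function name and placeholder data are ours):

```python
import jax
import jax.numpy as jnp

def dmd_eigenvalues(X, rank):
    """Exact DMD: X is an (n_states, n_snapshots) matrix of equispaced snapshots."""
    X0, X1 = X[:, :-1], X[:, 1:]                     # snapshot pairs x_k -> x_{k+1}
    U, s, Vh = jnp.linalg.svd(X0, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank, :]   # rank-truncated SVD
    A_tilde = U.conj().T @ X1 @ Vh.conj().T / s      # projected linear propagator
    return jnp.linalg.eigvals(A_tilde)               # approximate Koopman spectrum

# Placeholder data; in practice X holds trajectory snapshots of the system under study.
X = jax.random.normal(jax.random.PRNGKey(0), (64, 200))
print(jnp.sort(jnp.abs(dmd_eigenvalues(X, rank=10)))[::-1])
```

The eigenvalue moduli separate oscillatory modes (modulus near one) from decaying modes (modulus strictly inside the unit circle); the proposed signature of glassy dynamics is the closing of this gap.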
08:30-08:55 Emil M. Constantinescu (Argonne National Laboratory)
Multiscale Partial Differential Equation Dynamics with Neural Network Operators
Neural ordinary differential equations (NODEs) enable efficient modeling of subgrid-scale effects in PDEs through a hybrid framework that synthesizes traditional numerical methods with data-driven approaches. By integrating NODEs directly into PDE formulations via the method of lines and incorporating conservation laws, we establish a rigorous methodology for capturing fine-scale dynamics without the computational burden of high-resolution simulations. We validate the framework through comprehensive experiments on three canonical systems: the two-scale Lorenz 96 equation, the convection-diffusion equation, and the compressible Navier-Stokes equations. This work advances operator learning techniques at the intersection of numerical analysis and machine learning, demonstrating robust capabilities for scientific and engineering applications.
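To make the hybrid structure concrete, here is a minimal method-of-lines sketch (our illustration with a toy diffusion term and an untrained stand-in closure; `coarse_rhs`, `closure`, and all parameter names are ours, not the framework described in the talk):

```python
import jax.numpy as jnp

def coarse_rhs(u, dx, nu=0.01):
    # resolved physics: centered second-difference diffusion (illustrative choice)
    return nu * (jnp.roll(u, -1) - 2.0 * u + jnp.roll(u, 1)) / dx**2

def closure(u, params):
    # stand-in for a trained neural network capturing subgrid-scale effects
    return jnp.tanh(u @ params["W1"]) @ params["W2"]

def rhs(u, dx, params):
    # NODE embedded in the semi-discretized PDE: known operator + learned closure
    return coarse_rhs(u, dx) + closure(u, params)

def rk4_step(u, dt, dx, params):
    # standard explicit integrator applied to the hybrid right-hand side
    k1 = rhs(u, dx, params)
    k2 = rhs(u + 0.5 * dt * k1, dx, params)
    k3 = rhs(u + 0.5 * dt * k2, dx, params)
    k4 = rhs(u + dt * k3, dx, params)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Conservation laws can be respected in such a setup by, for example, writing the learned closure in flux (divergence) form so that its contribution telescopes to zero over a periodic domain.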
09:00-09:25 Owen Brook (Imperial College London)
Data-Driven Stabilisation of Unstable Periodic Orbits of the Three-Body Problem
Many different models of the physical world exhibit chaotic dynamics, from fluid flows and chemical reactions to celestial mechanics. The study of the three-body problem (3BP) and the many families of unstable periodic orbits (UPOs) within it has provided fundamental insight into chaotic dynamics as far back as the 19th century. In this talk we present a novel and interpretable data-driven approach for the state-dependent control of UPOs of the 3BP, leveraging the inherent sensitivity of chaos. The 3BP is inherently challenging to sample due to the volume-preservation property of conservative systems, which we overcome by utilising prior knowledge of UPOs and a novel augmentation strategy. This enables sample-efficient discovery of a verifiable and accurate Poincaré map from as few as 55 data points. To stabilise the UPOs we apply small thrusts once per revolution, determined by solving a convex problem formed from the linearised map and a system of linear matrix inequalities. We constrain the norm of the decision variables in this problem, resulting in thrusts directed along the local stable manifold. Critically, this locally optimal behaviour is achieved in a computationally efficient manner, without the need to solve an optimisation problem over many expensive simulations. We demonstrate this sample-efficient, low-energy method across several orbit families in the 3BP, with potential applications ranging from robotics and spacecraft control to fluid dynamics.
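For orientation, a standard template for such a design (our schematic in generic notation; the talk's exact formulation and constraints may differ): with the linearised Poincaré map

\[
\delta x_{k+1} \approx A\,\delta x_k + B\,u_k,
\qquad
\begin{bmatrix} P & (AP + BY)^{\mathsf{T}} \\ AP + BY & P \end{bmatrix} \succ 0,
\quad P \succ 0,
\]

one solves for \(P\) and \(Y\) and recovers the feedback gain \(K = Y P^{-1}\), which renders the closed-loop map \(A + BK\) Schur stable; additionally bounding the norm of \(Y\) limits the magnitude of the per-revolution thrust.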
09:30-09:55 Aditi Krishnapriyan (University of California, Berkeley)
Bridging deep learning and numerical methods through differentiable solvers: balancing speed, accuracy, and scalability
Machine learning (ML) is increasingly playing a pivotal role in spatiotemporal modeling. A number of open questions remain about the learning strategies that best maximize the utility of machine learning while ensuring the validity of its predictions at test time (i.e., "deployment"). This talk will focus on machine learning methods for neural PDE solvers, with an emphasis on broad learning strategies that are applicable across a wide variety of systems and neural network architectures. Topics include: developing expressive neural network architectures that can be trained at scale on large 3D problems such as turbulent fluid flow; using self-supervised learning to learn a change of basis for spectral methods in solving fluid dynamics and transport PDE problems; and “simulation-in-the-loop” approaches that incorporate PDE-constrained optimization as a layer in neural networks. In these settings, I will discuss strategies for scalable training and deployment while balancing speed and accuracy.
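As a schematic of the “simulation-in-the-loop” pattern (our sketch of the general idea, not the speaker's implementation; `solver_step`, `model`, and the toy diffusion dynamics are ours):

```python
import jax
import jax.numpy as jnp

def solver_step(u, dt=0.01, nu=0.01):
    # differentiable numerical step for a toy diffusion equation
    return u + dt * nu * (jnp.roll(u, -1) - 2.0 * u + jnp.roll(u, 1))

def model(params, u):
    # stand-in learned correction; a real model would be a deep network
    return jnp.tanh(u @ params["W1"]) @ params["W2"]

def rollout_loss(params, u0, targets):
    # unroll solver and network together so training "sees" the solver
    u, loss = u0, 0.0
    for target in targets:
        u = solver_step(u) + model(params, u)
        loss = loss + jnp.mean((u - target) ** 2)
    return loss / len(targets)

grad_fn = jax.grad(rollout_loss)   # gradients propagate through the solver steps
```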
16:00-16:25 Seth Taylor (McGill University)
Diffeomorphic Neural Operator Learning
This talk introduces an operator-valued approximation technique for the data-driven reconstruction of a class of evolution operators. The core idea is to approximate the solution operator as a map into the space of composition operators. We draw parallels between this technique and shape analysis via landmark matching in an infinite-dimensional setting and propose a computationally tractable algorithm using neural operators (NOs). Our formulation exhibits novel resolution properties which complement, yet are distinct from, the discretization invariance exhibited by NOs. We characterize these properties analytically and illustrate them numerically for the data-driven forecasting of a turbulent fluid flow. This geometric operator learning approach demonstrates a clear performance benefit from embedding known infinite-dimensional geometric structure into the learning problem as a hard constraint.
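Schematically (in our notation, distilled from the abstract), the solution operator \(\mathcal{S}_t\) is approximated by a composition operator generated by a learned diffeomorphism:

\[
(\mathcal{S}_t u_0)(x) \;\approx\; \big(C_{\varphi_{t,\theta}} u_0\big)(x) \;=\; u_0\big(\varphi_{t,\theta}(x)\big),
\]

where \(\varphi_{t,\theta}\) is a time-dependent diffeomorphism parametrized by a neural operator and \(C_{\varphi}\) denotes the composition operator \(u \mapsto u \circ \varphi\); the landmark-matching analogy arises because \(\varphi_{t,\theta}\) is fitted by matching its action on observed data.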
16:30-16:55 Chris Budd (University of Bath)
Operator Learning Via Non-Smooth Dynamics
Neural nets are often used both to learn operators and to label data sets. A typical architecture for doing this is a ResNet, which in turn can be thought of as a discretisation of a neural ODE. We can then pose the question of what sort of operators, or data set labellings, can be learned through such an architecture. If classical smooth ODEs are used to describe the neural ODE, then there are restrictions on what is possible. However, in practice, the activation functions used in a ResNet are non-smooth, and the resulting neural ODEs are also non-smooth. This allows them to have much greater expressivity, which can be explored using the theory of non-smooth dynamical systems. In this talk I will describe some of this theory and will show not only how it explains some of the great expressivity possible in a ResNet, but also how it indicates appropriate architectures and training mechanisms for approximating complex operators.
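For reference, the ResNet/neural-ODE correspondence invoked here is the standard one (our rendering): a residual block

\[
x_{k+1} = x_k + h\,\sigma\!\left(W_k x_k + b_k\right)
\quad\longleftrightarrow\quad
\dot{x}(t) = \sigma\!\left(W(t)\,x(t) + b(t)\right)
\]

is a forward-Euler step of size \(h\) for the neural ODE on the right. With a non-smooth activation such as \(\sigma(z) = \max(z, 0)\), the right-hand side is only piecewise smooth, so the limiting dynamics fall under the theory of piecewise-smooth dynamical systems.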
17:00-17:25 Davide Murari (University of Cambridge)
Symplectic Neural Flows
The Hamiltonian formalism provides a powerful framework for describing many dynamical systems with conserved energy. The flow of a Hamiltonian system preserves a volume form, a symplectic form, and the Hamiltonian energy. This talk focuses on canonical Hamiltonian systems on Euclidean spaces. We introduce a symplectic neural network designed to approximate the flow map of a given Hamiltonian system. Our architecture demonstrates improved long-term behaviour compared to unconstrained alternatives, even when both approaches perform similarly over short intervals. We also apply the proposed method to data-driven tasks, showing its effectiveness in approximating the flow map of unknown Hamiltonian systems.
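A minimal sketch of one way to build such a map (in the spirit of SympNet-style architectures; the layer structure, function names, and parameters are our illustration, not necessarily the talk's architecture):

```python
import jax
import jax.numpy as jnp

def potential(q, params):
    # scalar function of q alone; its gradient defines an exactly symplectic "kick"
    return jnp.sum(jnp.tanh(q @ params["Wq"]) @ params["aq"])

def kinetic(p, params):
    # scalar function of p alone; its gradient defines an exactly symplectic "drift"
    return jnp.sum(jnp.tanh(p @ params["Wp"]) @ params["ap"])

def layer(q, p, params, h=0.1):
    p = p - h * jax.grad(potential)(q, params)   # shear in p: symplectic
    q = q + h * jax.grad(kinetic)(p, params)     # shear in q: symplectic
    return q, p

def flow(q, p, layer_params):
    # a composition of symplectic maps is symplectic, so the whole network is
    for params in layer_params:
        q, p = layer(q, p, params)
    return q, p
```

Because each layer is symplectic by construction, the learned flow map preserves the symplectic form exactly, independently of how well the network is trained.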
17:30-17:55 Pratham Lalwani (University of California, Merced)
Compositional Physics-Informed Neural Flows
Physics-Informed Neural Networks (PINNs) have recently been applied to solve many forward and inverse problems. For time-dependent problems, a PINN is trained by minimizing a residual loss function sampled on a fixed spatial domain and time interval, which can result in poor generalizability beyond the training time interval. Instead, we propose the Compositional Physics-Informed Neural Flow (CPINF) to learn flow maps of dynamical systems while preserving their compositional structure, enabling rigorous error estimates for predictions beyond the training time interval. Specifically, for dynamical systems with a compact positively invariant set, we show that the error of CPINF on future time intervals can be bounded by the training error on the initial training time interval and the sampling error of the residual.
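Schematically (our illustrative rendering of the compositional structure and of the flavour of such a bound; the talk's precise statement involves the training and residual-sampling errors):

\[
\Phi_{t+s} = \Phi_t \circ \Phi_s,
\qquad
\hat{x}(nT) = \underbrace{\Phi_T^{\theta} \circ \cdots \circ \Phi_T^{\theta}}_{n \text{ times}}(x_0),
\]

so that if each learned step incurs error at most \(\varepsilon\) on the compact positively invariant set and the exact flow map is Lipschitz there with constant \(L\), then \(\|\hat{x}(nT) - x(nT)\| \le \varepsilon\,(1 + L + \cdots + L^{n-1})\).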