The computational resources needed to simulate high-dimensional systems such as fluid flows limit our ability to track, forecast, and control their behavior in real time. To overcome these challenges, we construct simplified low-dimensional systems called Reduced-Order Models (ROMs) that approximate the relevant behavior of the original system over a range of initial conditions, parameter values, and input signals.
We aim to construct reliable ROMs using efficient computational methods and as little training data as possible. Often ROMs are obtained by projecting the original system onto a low-dimensional submanifold of the state space. A major focus of our research involves identifying appropriate manifolds, projections, and low-dimensional dynamics using data and governing equations. We develop efficient and scalable algorithms that leverage inherent problem structure to tackle these challenging computational tasks.
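As a toy illustration of projection-based reduction, the sketch below computes a Proper Orthogonal Decomposition (POD) basis from snapshot data and Galerkin-projects a linear system onto it. The system matrix A and snapshot matrix X are illustrative placeholders; the papers below treat the harder nonlinear, oblique, and data-driven versions of this construction.

```python
import numpy as np

# Minimal sketch: Galerkin projection of a linear system dx/dt = A x
# onto POD modes computed from snapshot data. A and X are placeholders.

rng = np.random.default_rng(0)
n, r = 100, 5                                        # full and reduced state dimensions

A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # toy full-order operator
X = rng.standard_normal((n, 200))                    # snapshot matrix (n x m)

# POD basis: leading left singular vectors of the snapshot matrix
U, _, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]                                       # n x r orthonormal basis

# Galerkin-projected reduced operator: A_r = Phi^T A Phi
A_r = Phi.T @ A @ Phi

# Reduced dynamics dz/dt = A_r z approximate the full state via x ≈ Phi z
z0 = Phi.T @ rng.standard_normal(n)                  # project an initial condition
```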
Our Papers on Model Reduction:
S. E. Otto, G. R. Macchio, and C. W. Rowley, (2023), Learning Nonlinear Projections for Reduced-Order Modeling of Dynamical Systems using Constrained Autoencoders, Chaos, 33(11), pdf
S. E. Otto, A. Padovan, and C. W. Rowley, (2023), Model Reduction for Nonlinear Systems by Balanced Truncation of State and Gradient Covariance, SIAM Journal on Scientific Computing, 45(5), A2325-A2355, pdf
S. E. Otto, A. Padovan, and C. W. Rowley, (2022), Optimizing Oblique Projections for Nonlinear Systems using Trajectories, SIAM Journal on Scientific Computing, 44(3), A1681-A1702, pdf
Operator learning refers to learning maps, called "operators", between infinite-dimensional spaces of functions from data. Examples include solution operators of partial differential equations, such as the operator mapping a fluid flow field at an initial time to the flow field at later times obtained by solving the Navier-Stokes equations. Another important class is the Koopman operators associated with any dynamical system. These are linear operators that encode the system's dynamics via the evolution of functions on the state space. The linearity of Koopman operators opens up the toolkits of spectral theory and operator semigroups for analyzing nonlinear systems, with the caveat that the Koopman operators of most interesting systems are necessarily infinite-dimensional.
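As a concrete data-driven example, the sketch below implements extended dynamic mode decomposition (EDMD), a standard way to approximate a Koopman operator on a finite dictionary of observables from snapshot pairs. The toy map and monomial dictionary are illustrative choices, not taken from any specific paper below.

```python
import numpy as np

# Minimal EDMD sketch: fit a matrix K so that psi(F(x)) ≈ K psi(x)
# over snapshot pairs (x, F(x)). The map and dictionary are toy choices.

rng = np.random.default_rng(0)

def step(x):
    # toy nonlinear map on R^2
    return np.array([0.9 * x[0], 0.5 * x[1] + 0.2 * x[0] ** 2])

def dictionary(x):
    # monomial observables up to degree 2 (a simple, common choice)
    return np.array([1.0, x[0], x[1], x[0] ** 2, x[0] * x[1], x[1] ** 2])

# snapshot pairs sampled from random initial conditions
X = rng.uniform(-1, 1, size=(200, 2))
Y = np.array([step(x) for x in X])

Psi_X = np.array([dictionary(x) for x in X]).T   # observables at x (columns)
Psi_Y = np.array([dictionary(y) for y in Y]).T   # observables at F(x)

# least-squares Koopman matrix on the dictionary's span
K = Psi_Y @ np.linalg.pinv(Psi_X)

eigvals = np.linalg.eigvals(K)   # approximate Koopman eigenvalues
```

For this toy map the dictionary spans a Koopman-invariant subspace (e.g., the observable x_1 is an eigenfunction with eigenvalue 0.9), so the computed spectrum recovers exact eigenvalues.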
Some of the questions guiding our work include: Can we use learned operators to speed up simulations and other engineering tasks? How much data is needed to learn a reliable approximation of an operator? Can physical information be used to impose useful constraints on operator learning architectures? How do we certify the accuracy of a learned operator?
Our Papers on Operator Learning:
N. Boullé, D. Halikias, S. E. Otto, and A. Townsend, (2024), Operator Learning Without the Adjoint, Journal of Machine Learning Research, link
S. E. Otto, S. Peitz, and C. W. Rowley, (2024), Learning Bilinear Models of Actuated Koopman Generators from Partially Observed Trajectories, SIAM Journal on Applied Dynamical Systems, 23(1), p.885-923, pdf
A. Padovan, S. E. Otto, and C. W. Rowley, (2020), Analysis of Amplification Mechanisms and Cross-Frequency Interactions in Nonlinear Flows via the Harmonic Resolvent, Journal of Fluid Mechanics, 900, A14, pdf
S. E. Otto and C. W. Rowley, (2019), Linearly Recurrent Autoencoder Networks for Learning Dynamics, SIAM Journal on Applied Dynamical Systems, 18(1), p.558-593, pdf
In the physical sciences, we often have to learn a lot from a small amount of data. To enable data-efficient learning, the architectures of our machine learning models must incorporate constraints coming from prior knowledge about the system of interest. Built-in symmetries and conservation properties allow models to be trained with less data and to extrapolate across known transformations of the input. Can symmetries be learned from data and/or used as a form of regularization? What is the sample complexity of learning a symmetric model? Furthermore, certain structured model architectures can be trained using significantly less data than others. Which model structures are consistent with the underlying problem physics and enable data-efficient learning?
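As a small example of enforcing a known symmetry, the sketch below makes an arbitrary base model exactly invariant under a finite group by averaging its outputs over the group orbit. The group (planar rotations by multiples of 90 degrees) and the base model are placeholders; averaging is just one simple way to build symmetry into a model.

```python
import numpy as np

# Minimal sketch: enforce invariance under a finite group G by averaging.
# The averaged model satisfies f_sym(g x) = f_sym(x) for every g in G.

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# G: rotations of the plane by multiples of 90 degrees
G = [rot(k * np.pi / 2) for k in range(4)]

def f(x):
    # arbitrary (non-symmetric) base model
    return x[0] + x[1] ** 2

def f_sym(x):
    # group averaging: invariant because left-multiplying by g permutes G
    return np.mean([f(g @ x) for g in G])

x = np.array([0.3, -0.7])
assert np.isclose(f_sym(x), f_sym(G[1] @ x))
```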
Our Papers on Machine Learning with Structure:
S. E. Otto, N. Zolman, J. N. Kutz, and S. L. Brunton, (2024), A Unified Framework to Enforce, Discover, and Promote Symmetry in Machine Learning, Journal of Machine Learning Research (accepted, minor revisions), arXiv
S. E. Otto, C. M. Oishi, F. Amaral, J. N. Kutz, and S. L. Brunton, (2024), Machine Learning in Viscoelastic Fluids via Energy-Based Kernel Embedding, Journal of Computational Physics (accepted), arXiv
S. E. Otto, (2023), A Note on Recovering Matrices in Linear Families from Generic Matrix-Vector Products, zenodo
Our goal is to determine what a system is doing from limited sensor measurements, and then to use this information to forecast and control future dynamics. Some of the questions our work seeks to address include: Where should we place sensors to gain as much information as possible about the system's state? Where should we place actuators to exert maximum influence with the least control effort? How can we efficiently assimilate sensor measurements with expensive model-based forecasts of high-dimensional systems? How can we reliably control such a system in real time? Our approaches to these problems draw on tools from model reduction, operator learning, and machine learning with structure.
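For a concrete baseline, the sketch below uses column-pivoted QR on a POD basis, a standard linear heuristic for choosing sensor locations from which states in the POD subspace can be reconstructed. Notably, the first paper below shows where such linear methods fall short for nonlinear systems and develops a secant-based alternative. All data here are placeholders.

```python
import numpy as np
from scipy.linalg import qr

# Minimal sketch of a standard linear sensor-placement heuristic:
# column-pivoted QR on the transpose of a POD basis selects r state
# indices to sense. The snapshot data are placeholders.

rng = np.random.default_rng(0)
n, r = 100, 5

X = rng.standard_normal((n, 300))               # snapshot matrix (n x m)
U, _, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]                                  # n x r POD basis

# the first r QR pivots give the sensor locations
_, _, pivots = qr(Phi.T, pivoting=True)
sensors = pivots[:r]

# reconstruct a state in the POD subspace from its r sensor readings
x_true = Phi @ rng.standard_normal(r)
y = x_true[sensors]                             # sensor measurements
x_rec = Phi @ np.linalg.solve(Phi[sensors, :], y)
assert np.allclose(x_rec, x_true)
```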
Our Papers on Sensing, Data Assimilation, and Control:
S. E. Otto and C. W. Rowley, (2022), Inadequacy of Linear Methods for Minimal Sensor Placement and Feature Selection in Nonlinear Systems: a New Approach using Secants, Journal of Nonlinear Science, 32(69), pdf
S. E. Otto and C. W. Rowley, (2021), Koopman Operators for Estimation and Control of Dynamical Systems, Annual Review of Control, Robotics, and Autonomous Systems, 4, p.59-87, pdf
S. Peitz, S. E. Otto, and C. W. Rowley, (2020) Data-Driven Model Predictive Control using Interpolated Koopman Generators, SIAM Journal on Applied Dynamical Systems, 19(3), p.2162-2193, pdf