Spring 2025
Time: Fridays, 2pm-3pm, Pacific time
Location: Hybrid - South Hall 4607 and on Zoom (link provided upon request)
Please contact Mingsong Yan (mingsongyan@ucsb.edu), Xin Su (xsu2@ucsb.edu), Ruimeng Hu (rhu@ucsb.edu), or Sui Tang (suitang@ucsb.edu) to reserve a slot.
Upcoming Seminar Schedule:
Title: Anomalous dissipation of forced Euler equations
Abstract: We consider solutions of the 3D forced Euler equations obtained as vanishing viscosity limits of the 3D forced Navier-Stokes equations (NSE). The construction is motivated by the work of Bruè and De Lellis (2023). We use exponential time steps to construct families of smooth NSE solutions $\{u^\nu\}_\nu$ whose vanishing viscosity limits are Euler solutions in the Onsager-critical class $L^3_tC^{\frac{1}{3}-}_x$. Furthermore, these families $\{u^\nu\}_\nu$ exhibit viscous anomalous dissipation.
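For context, viscous anomalous dissipation means that the viscous energy dissipation of the family $\{u^\nu\}_\nu$ does not vanish as $\nu \to 0$; a standard way to state this (written on the 3D torus for concreteness, which may differ from the talk's exact setting) is
$$ \limsup_{\nu \to 0}\; \nu \int_0^T \!\!\int_{\mathbb{T}^3} |\nabla u^\nu(x,t)|^2 \, dx\, dt > 0. $$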
Host: Ruimeng Hu
Title: Structure-preserving discretizations and their applications
Abstract: Many models from science and engineering possess fundamental structures that are important to preserve for accurate and stable long-term predictions. For instance, preserving conserved quantities, such as energy, mass, and momentum, is fundamental in many physical systems. Moreover, preserving dissipative quantities, such as entropy or Lyapunov functions, is essential for predicting correct asymptotic limits. In this talk, we will survey a recently introduced class of conservative integrators, the Discrete Multiplier Method (DMM), and its extension, the Minimal Norm DMM. We will discuss various applications to many-body systems, geodesic flow, and particle methods in fluids and kinetic models. Moreover, we will showcase a promising application of DMM in high-dimensional computational statistics. In particular, we will introduce Conservative Hamiltonian Monte Carlo, which uses DMM to improve the sampling efficacy of Hamiltonian Monte Carlo.
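As a point of reference for the structure-preservation theme (a generic textbook example, not the Discrete Multiplier Method itself), the implicit midpoint rule conserves quadratic invariants exactly, which is the kind of exact conservation DMM is designed to achieve for more general invariants:

```python
import numpy as np

# Generic illustration (not the Discrete Multiplier Method itself): the implicit
# midpoint rule conserves quadratic invariants exactly, e.g. the energy
# H = (p^2 + q^2)/2 of the harmonic oscillator dq/dt = p, dp/dt = -q.

def implicit_midpoint_step(z, h):
    """One step of the implicit midpoint rule for the linear system dz/dt = J z."""
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # z = (q, p)
    A = np.eye(2) - 0.5 * h * J
    B = np.eye(2) + 0.5 * h * J
    return np.linalg.solve(A, B @ z)

z = np.array([1.0, 0.0])                      # initial state (q0, p0)
energy0 = 0.5 * np.dot(z, z)
for _ in range(10_000):
    z = implicit_midpoint_step(z, h=0.1)
print(abs(0.5 * np.dot(z, z) - energy0))      # energy drift at machine-precision level
```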
Host: Hector D. Ceniceros
Title: Sparse identification of nonlinear dynamics and Koopman operators with Shallow Recurrent Decoder Networks
Abstract: Spatio-temporal modeling of real-world data presents significant challenges due to high-dimensionality, noisy measurements, and limited data. In this talk, we introduce two frameworks that jointly solve the problems of sparse identification of governing equations and latent space reconstruction: the Bayesian SINDy autoencoder and SINDy-SHRED. The Bayesian SINDy autoencoder leverages a spike-and-slab prior to enable robust discovery of governing equations and latent coordinate systems, providing uncertainty estimates in low-data, high-noise settings. In our experiments, we applied the Bayesian SINDy autoencoder to real video data, marking the first example of learning governing equations directly from such data. This framework successfully identified underlying physical laws, such as accurately estimating constants like gravity from pendulum videos, even in the presence of noise and limited samples. In parallel, SINDy-SHRED integrates Gated Recurrent Units (GRUs) with a shallow decoder network to model temporal sequences and reconstruct full spatio-temporal fields using only a few sensors. Our proposed algorithm introduces a SINDy-based regularization. Beginning with an arbitrary latent state space, the dynamics of the latent space progressively converges to a SINDy-class functional. We conduct a systematic experimental study including synthetic PDE data, real-world sensor measurements for sea surface temperature, and direct video data. With no explicit encoder, SINDy-SHRED allows for efficient training with minimal hyperparameter tuning and laptop-level computing. SINDy-SHRED demonstrates robust generalization in a variety of applications with minimal to no hyperparameter adjustments. Additionally, the interpretable SINDy model of latent state dynamics enables accurate long-term video predictions, achieving state-of-the-art performance and outperforming all baseline methods considered, including Convolutional LSTM, PredRNN, ResNet, and SimVP.
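As background, here is a minimal sketch of the plain SINDy regression step (sequentially thresholded least squares) that both frameworks build on; the candidate library, threshold, and toy data are illustrative, and the talk's methods add a spike-and-slab prior (Bayesian SINDy autoencoder) or a GRU-plus-shallow-decoder latent model (SINDy-SHRED) around this core:

```python
import numpy as np

# Minimal sketch of the vanilla SINDy regression step (sequentially thresholded
# least squares). Library, threshold, and data below are illustrative only.

def sindy_stlsq(Theta, dXdt, threshold=0.1, n_iters=10):
    """Sparse regression: find Xi such that dXdt ≈ Theta @ Xi."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iters):
        small = np.abs(Xi) < threshold          # prune small coefficients
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):          # refit the surviving terms
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

# Toy example: recover dx/dt = -2x from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))
dxdt = -2.0 * x + 0.01 * rng.standard_normal(x.shape)
Theta = np.hstack([np.ones_like(x), x, x**2])   # candidate library [1, x, x^2]
print(sindy_stlsq(Theta, dxdt))                  # ≈ [[0], [-2], [0]]
```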
Host: Paul Atzberger
Title: More is Less: Understanding Compressibility of Neural Networks via Implicit Regularization and Neural Collapse
Abstract: Despite their recent successes in various tasks, most modern machine learning algorithms lack theoretical guarantees, which are crucial for further development toward more delicate tasks. One mysterious phenomenon is that, among infinitely many possible ways to fit data, the algorithms always find the "good" ones, even when the definition of "good" is not specified by the designers. In this talk, I will cover the empirical and theoretical study of the connection between the good solutions in neural networks and the sparse solutions in compressed sensing, with four questions in mind: What happens? When does it happen? Why does it happen? How can we improve it? The key concepts are implicit regularization, Bregman divergence, the neural tangent kernel, and neural collapse.
Host: Sui Tang
Title: Recovering a first order perturbation of one-dimensional wave equation from white noise boundary data
Abstract: We consider the following inverse problem: suppose a $(1+1)$-dimensional wave equation on $\mathbb{R}_+$ with zero initial conditions is excited by Neumann boundary data modelled as a white noise process. Given also the Dirichlet data at the same point, determine the unknown first-order coefficient function of the system. The inverse problem is solved by showing that correlations of the boundary data determine the Neumann-to-Dirichlet operator in the sense of distributions, which is known to uniquely identify the coefficient. The model has potential applications in acoustic measurements of the internal cross-sections of fluid pipes, such as pressurised water supply pipes, and in vocal tract shape determination. This talk is based on joint work with Emilia Blåsten, Antti Kujanpää and Tapio Helin (LUT), and Lauri Oksanen (Helsinki).
Host: Hanming Zhou
Title: Koopmon trajectories in nonadiabatic quantum-classical dynamics
Abstract: In order to alleviate the computational costs of fully quantum simulations in nonadiabatic molecular dynamics, we present a mixed quantum-classical (MQC) particle method based on the theory of Koopman wavefunctions. While conventional MQC models often suffer from consistency issues such as the violation of Heisenberg's principle, we overcome these difficulties by blending Koopman's classical mechanics on Hilbert spaces with methods in symplectic geometry. The resulting continuum model enjoys both a variational and a Hamiltonian structure. However, its nonlinear character calls for suitable closures. Here, we apply a regularization technique on the underlying action principle. This step allows for a singular solution ansatz which introduces the trajectories of computational particles, the koopmons, sampling the Lagrangian classical paths in phase space. In the case of Tully's benchmark problems, the method reproduces the results of fully quantum simulations with levels of accuracy that are not achieved by standard MQC methods. In addition, the koopmon method is computationally advantageous over fully quantum hydrodynamic approaches, which are also considered in our study. As a further step, we probe the limits of the method by considering the Rabi problem in both the ultrastrong and the deep strong coupling regimes, where MQC treatments appear hardly applicable. In this case, the method succeeds in reproducing parts of the fully quantum results.
Host: Carlos Garcia-Cervera
Title: Active nematics on evolving fluid membranes with arbitrary shape and topology
Abstract: Membrane shape dynamics are central to biological processes such as morphogenesis. We study active nematics on deformable fluid membranes with complex geometry and topology. In this talk, I present a variational and geometric framework that enables simple, structure-preserving discretizations of the fluid equations on general surfaces, addressing the longstanding computational challenge. Using a generalized Killing operator and the Onsager variational principle, we derive a discrete analogue of the evolving Stokes equations, leading to a stable variational time integrator. The evolving Stokes flow defines the gradient flow of Helfrich bending energy, forming the fluid membrane model. To describe nematodynamics on general evolving surfaces, we introduce a nematic Laplacian on the complex line bundle to model relaxation, and use the Lie derivative to describe advection. To understand nematic defect dynamics on curved surfaces, we explore their electrostatic analogy and connection to conformal flattening.
Host: Paul Atzberger
Title: Leafnet algorithm for hyperbolic conservation laws
Abstract: We develop non-diffusive neural network algorithms for accurately approximating weak solutions to (parameterized) hyperbolic conservation laws. The core idea is to construct weak solutions by computing smooth local approximations within subdomains separated by discontinuity curves or surfaces, which are themselves determined from Rankine–Hugoniot jump conditions. The proposed approach enables the efficient approximation of entropic shock waves, shock wave generation, as well as wave interactions. We provide both a mathematical analysis and numerical experiments to demonstrate the robustness of the proposed methodology. This is joint work with A. Novruzi (University of Ottawa).
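For reference, the Rankine-Hugoniot jump condition mentioned above relates the speed $s$ of a discontinuity to the jump in the conserved quantity; for a scalar conservation law $u_t + f(u)_x = 0$ with left and right states $u_L$, $u_R$ it reads (systems admit the analogous vector-valued statement)
$$ s\,(u_R - u_L) = f(u_R) - f(u_L). $$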
Host: Xu Yang
Title: Provable in-context learning of PDEs
Abstract: Transformer-based foundation models, pre-trained on a wide range of tasks with large datasets, demonstrate remarkable adaptability to diverse downstream applications, even with limited data. One of the most striking features of these models is their in-context learning (ICL) capability: when presented with a prompt containing examples from a new task alongside a query, they can make accurate predictions without requiring parameter updates. This emergent behavior has been recognized as a paradigm shift in transformers, though its theoretical underpinnings remain underexplored. In this talk, I will discuss some recent theoretical understandings of ICL for PDEs, emphasizing its approximation power and generalization capabilities. The theoretical analysis will focus on two scientific problems: elliptic PDEs and stochastic dynamical systems.
Host: Sui Tang
Title: Gibbs laws, entropy, and transport in free probability
Abstract: This is a survey on invariant random matrix models and their behavior in the large-$n$ limit, which is modeled by free probability theory. We consider random self-adjoint $n \times n$ matrices $X_1^{(n)}, \dots, X_m^{(n)}$ with joint density proportional to $e^{-n^2 V^{(n)}(X)}$, where $V^{(n)}(X) = (1/n) \operatorname{Tr}(p(X))$ with $X = (X_1^{(n)}, \dots, X_m^{(n)})$ for a non-commutative polynomial $p$ (and similar functions). The limiting behavior of spectral statistics as $n \to \infty$ is well understood when $m = 1$, and when $m > 1$ and $V^{(n)}$ is strongly convex. In the convex case, the Langevin SDE, a common technique used to sample Gibbs distributions, also gives an effective way to understand the large-$n$ behavior. One can also obtain good asymptotic control over entropy and optimal transport theory for these distributions as $n \to \infty$, with applications to the von Neumann algebras that model the ideal large-$n$ limit.
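For context, the Langevin SDE referred to above is, in its generic commutative form for a Gibbs density proportional to $e^{-\beta V(x)}$ on $\mathbb{R}^d$,
$$ dX_t = -\nabla V(X_t)\, dt + \sqrt{2\beta^{-1}}\, dW_t, $$
which has that Gibbs measure as its stationary distribution under suitable conditions on $V$; in the matrix models above, the role of $\beta$ is played by $n^2$ and that of $V$ by $V^{(n)}$.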
Host: Therese Basa Landry
Title: Developing the Statistics-Informed Neural Network as a Surrogate Modeling Tool
Abstract: The statistics-informed neural network (SINN) has been proposed as a machine learning-based stochastic trajectory generator [J. Comput. Phys. 474, 111819 (2023)]. With the capability of learning stochastic dynamics from time trajectory data and reproducing stochastic time trajectories faithfully and efficiently, this methodology can be used for surrogate modeling. To provide concrete context on how it initially came about and how it can be applied, I'll take a detour through the development of a computationally efficient simulation methodology for gas-solid interfacial systems (e.g., heterogeneous catalysts). I'll then describe the unique features of our SINN approach as a general tool for stochastic surrogate modeling and present ongoing work to extend it.
Host: Paul Atzberger
Title: Stochastic description of blood vessel growth, soliton attractor and control
Abstract: The cells of incipient cancer tumors need oxygen to grow in tissues. They issue growth factors that are sensed by nearby blood vessels and stimulate the formation of new blood vessels (angiogenesis) that carry the needed oxygen and nutrients to the tumor. We model angiogenesis at supracellular scales using stochastic differential equations and birth-death processes, derive continuum descriptions of the density of moving blood vessels, and analyze a soliton-like attractor. While the soliton wave is linearly unstable to a single mode, it is possible to stabilize it by adding a feedback control to the equations linearized about it. Numerical simulations indicate that the control also works for the nonlinear equations.
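As a generic illustration of the stochastic-differential-equation ingredient (the drift and diffusion below are placeholders, not the talk's angiogenesis model), a minimal Euler-Maruyama time stepper looks like:

```python
import numpy as np

# Generic Euler-Maruyama stepper for a scalar SDE dX = a(X) dt + b(X) dW;
# the drift and diffusion used in the example are placeholders only.

def euler_maruyama(a, b, x0, dt, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal()     # Brownian increment
        x[i + 1] = x[i] + a(x[i]) * dt + b(x[i]) * dW
    return x

# Example: mean-reverting (Ornstein-Uhlenbeck-type) dynamics toward level 1.0.
path = euler_maruyama(a=lambda x: -(x - 1.0), b=lambda x: 0.2,
                      x0=0.0, dt=1e-2, n_steps=1000)
print(path[-1])
```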
Host: Bjorn Birnir
Title: Function-Space Models for Deep Learning
Abstract: Deep learning has been wildly successful in practice, and most state-of-the-art artificial intelligence systems are based on neural networks. What is lacking, however, is a rigorous mathematical theory that adequately explains the amazing performance of deep neural networks. In this talk, I present a new mathematical framework that provides the beginning of a deeper understanding of deep learning. This framework precisely characterizes the functional properties of trained neural networks. The key mathematical tools supporting this framework include transform-domain sparse regularization, the Radon transform of computed tomography, and approximation theory. The framework explains the effect of weight decay regularization in neural network training, the importance of skip connections and low-rank weight matrices in network architectures, the role of sparsity in neural networks, and why neural networks can perform well in high-dimensional problems. We will conclude with a number of open problems and interesting research directions.
Host: Mingsong Yan
Title: Control strategies for biofilm eradication
Abstract: Biofilms are bacterial aggregates that form on moist surfaces and are extremely hard to suppress. They grow on medical implants, such as prostheses and catheters. Standard antibiotics fail to remove them, posing a risk of chronic infection, prosthesis removal, and sepsis. We consider a nonlocal transport model for the volume fraction of alive cells coupled to reaction-diffusion equations for the concentrations of oxygen and antibiotics, set in a domain whose boundary shrinks or advances depending on the balance of creation, death, and erosion. After presenting well-posedness results for the model, we will show numerical simulations suggesting that cocktails of antibiotics targeting different types of cells, with adequate toxicity coefficients, have the potential to fully extinguish the biofilm in finite time. We devise bang-bang and optimal control strategies based on Kalman-Bucy filters to quantify the dosage of antibiotics required for biofilm extinction in a given time.
Host: Bjorn Birnir
Title: On Over-Parametrized Models and Sobolev Training
Abstract: With Sobolev training, neural networks are provided data about both the function of interest and its derivatives. This setting is prevalent in scientific machine learning---appearing in molecular dynamics emulators, derivative-informed neural operators, and predictors of summary statistics of chaotic dynamical systems---as well as in traditional machine learning tasks like teacher-student model distillation. However, fundamental questions remain: How does over-parameterization influence performance? What role does the signal-to-noise ratio play? And is additional derivative data always beneficial? In this work, we study these questions using tools from statistical physics and random matrix theory. In particular, we consider Sobolev training in the proportional asymptotics regime in which the problem dimensionality d, single hidden-layer features p, and training points n grow to infinity at fixed ratios. We focus on target functions modeled as single-index models (i.e., ridge functions with a single intrinsic dimension), providing theoretical insights into the effects of derivative information in high-dimensional learning. Joint with Kate Fisher, Timo Schorlepp, and Youssef Marzouk.
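A minimal sketch of what a Sobolev training loss can look like in practice (illustrative only; the single-index target, network, and loss weighting below are assumptions, not the paper's proportional-asymptotics setup):

```python
import torch

# Illustrative Sobolev training loss: fit both target values y and target
# gradients dy/dx, with the gradient of the network output obtained by autograd.

def sobolev_loss(model, x, y, dydx, weight=1.0):
    x = x.requires_grad_(True)
    pred = model(x)                                                  # shape (n, 1)
    grad_pred = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]  # (n, d)
    value_term = torch.mean((pred - y) ** 2)
    grad_term = torch.mean((grad_pred - dydx) ** 2)
    return value_term + weight * grad_term

# Toy single-index target f(x) = tanh(w.x) with exact gradients via the chain rule.
d, n = 5, 256
w = torch.randn(d) / d**0.5
x = torch.randn(n, d)
y = torch.tanh(x @ w).unsqueeze(1)
dydx = (1 - torch.tanh(x @ w) ** 2).unsqueeze(1) * w                 # shape (n, d)

model = torch.nn.Sequential(torch.nn.Linear(d, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = sobolev_loss(model, x, y, dydx)
    loss.backward()
    opt.step()
print(loss.item())
```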
Host: Paul Atzberger
Title: Existence Theorems for PDEs Modeling Erosion and the Optimal Transportation of Sediment
Abstract: We prove the existence of global weak solutions to equations describing sediment flow in the evolution of fluvial land-surfaces with constant water depth. These equations describe the so-called transport-limited situation, where all the sediment can be transported away given enough water. This is in contrast to the detachment-limited situation, where we must wait for rock to weather into sediment before it can be transported away. Earlier work shows that these equations describe the optimal transport of sediment and frame the evolution of the surfaces within optimal transport theory. This is joint work with Bjorn Birnir.
Host: Therese Basa Landry