Winter 2023

Jan 20, Weinan Wang (UArizona)


Title: Recent progress on the local well-posedness theory for some kinetic models 

Abstract: The Boltzmann and Landau equations are two fundamental models in kinetic theory. They are nonlocal and nonlinear equations for which (large data) global well-posedness is an extremely difficult problem that is nearly completely open. In this talk, I will discuss two more tractable and related questions: (1) Existence of solutions to the Boltzmann equation and (2) Schauder estimates and their application to uniqueness of solutions to the Landau equation. At the end of the talk, I will discuss some open problems and future work. This is based on joint work with Christopher Henderson.  
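For readers new to these models, a brief reminder of their standard form may help (the notation below is the textbook convention, not taken from the talk). Both equations govern the evolution of a particle density f = f(t, x, v):

```latex
% Boltzmann equation: Q is the (nonlocal, bilinear) collision operator
\partial_t f + v \cdot \nabla_x f = Q(f, f)

% Landau equation: the collision operator takes divergence form, with
% a(z) = |z|^{\gamma+2}\,(I - z \otimes z / |z|^2) up to a constant
\partial_t f + v \cdot \nabla_x f =
  \nabla_v \cdot \int_{\mathbb{R}^3} a(v - v_*)
  \left[ f(v_*)\, \nabla_v f(v) - f(v)\, (\nabla f)(v_*) \right] dv_*
```

The nonlocality in v and the quadratic nonlinearity are what make large-data global well-posedness so difficult.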

Host: Quyuan Lin

Jan 27, John Harlim (PSU, 2-3 pm)


Title: Solving PDEs on unknown domains with manifold learning algorithms 

Abstract: I will discuss recent efforts in using a manifold learning algorithm (the Diffusion Maps algorithm) to solve elliptic PDEs on unknown manifolds using point cloud data. The key idea rests on the fact that away from the boundary, the second-order elliptic differential operators can be approximated by integral operators defined with appropriate Gaussian kernels. On manifolds with boundaries, however, such an approximation is only valid for functions that satisfy the Neumann boundary condition. Motivated by the classical ghost-point correction in the finite-difference method for solving Neumann problems, we extend the diffusion maps algorithm with ghost points such that it is a consistent estimator in the pointwise sense even near the boundary. I will demonstrate some numerical results based on the fictitious point corrections, which we call the Ghost Point Diffusion Maps (GPDM). I will also discuss an application on Bayesian elliptic inverse problems, which motivates the need for a fast PDE solver. If time permits, I will discuss a spectral framework that allows for a practical approximation to the solution operator with theoretical guarantees. 
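As a rough illustration of the kernel idea described above (a hypothetical minimal sketch, not the speaker's code: points sampled uniformly from the unit circle, a closed manifold, so no ghost-point correction is needed), the Gaussian-kernel estimate of the Laplace-Beltrami operator can be written as:

```python
import numpy as np

np.random.seed(0)
n, eps = 2000, 0.05                       # sample size, kernel bandwidth

theta = np.random.uniform(0, 2 * np.pi, n)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # point cloud on S^1

# Pairwise squared Euclidean distances and Gaussian kernel
sq = (X ** 2).sum(axis=1)
D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
K = np.exp(-D2 / (4 * eps))
K /= K.sum(axis=1, keepdims=True)         # row-normalize (sampling is uniform,
                                          # so no density correction is needed)

# On S^1, f = sin(theta) satisfies Laplace-Beltrami f = -f;
# (Kf - f)/eps is the integral-operator estimate of that Laplacian
f = np.sin(theta)
Lf = (K @ f - f) / eps
```

For points near a boundary the same estimator is biased unless f satisfies a Neumann condition, which is exactly the failure mode the ghost-point correction in the talk addresses.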

Host: Paul Atzberger

Feb 3, Pedram Hassanzadeh (Rice)


Title: Integrating the spectral analyses of neural networks and nonlinear physics for explainability, generalizability, and stability

Abstract: In recent years, there has been substantial interest in using deep neural networks (NNs) to improve the modeling and prediction of complex, multiscale, nonlinear dynamical systems such as turbulent flows and Earth’s climate. In idealized settings, there has been some progress for a wide range of applications, from data-driven spatio-temporal forecasting to long-term emulation to subgrid-scale modeling. However, to make these approaches practical and operational, i.e., scalable to real-world problems, a number of major questions and challenges need to be addressed. These include 1) instabilities and the emergence of unphysical behavior, e.g., due to how errors amplify through NNs, 2) learning in the small-data regime, 3) interpretability based on physics, and 4) extrapolation (e.g., to different parameters, forcings, and regimes), which is essential for applications to non-stationary systems such as a changing climate. While some progress has been made in addressing (1)-(4), the approaches have often been ad hoc, as currently there is no rigorous framework to analyze deep NNs and develop systematic and general solutions to (1)-(4). In this talk, I will discuss some of the approaches to address (1)-(4). Then I will introduce a new framework that combines the spectral (Fourier) analyses of NNs and nonlinear physics, and leverages recent advances in theory and applications of deep learning, to move toward rigorous analysis of deep NNs for applications involving dynamical systems. I will use examples from subgrid-scale modeling of 2D turbulence and Rayleigh–Bénard turbulence and forecasting extreme weather to discuss these methods and ideas.


Host: Paul Atzberger

Feb 10, Jinkai Li (SCNU)


Title: Global well-posedness of the primitive equations coupled with moisture dynamics 

Abstract: In this talk we will present some recent results on the well-posedness of the coupled system of the primitive equations and the moisture system for warm clouds. Multiple phases and phase changes are taken into consideration, and both the simplified case and the thermodynamically refined case will be considered. For the simplified case, we assume that the dry air and water vapor have the same gas constants and heat capacities and ignore the heat capacity of the liquid water. If time allows, some results rigorously justifying the hydrostatic approximation from the Navier-Stokes equations to the primitive equations, in both the frameworks of strong solutions and z-weak solutions, will also be presented.
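For context (standard notation, not from the talk): the hydrostatic approximation that distinguishes the primitive equations from the Navier-Stokes equations replaces the vertical momentum equation with hydrostatic balance,

```latex
\partial_z p = -\rho g
```

so the vertical velocity becomes a diagnostic quantity determined by incompressibility rather than by its own evolution equation.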


Host: Quyuan Lin

Feb 17, Lu Lu (UPenn, 1-2 pm)


Title: Deep neural operators for multiphysics, multiscale, & multifidelity problems 

Abstract: It is widely known that neural networks (NNs) are universal approximators of continuous functions. However, a less known but powerful result is that a NN can accurately approximate any nonlinear continuous operator. This universal approximation theorem of operators is suggestive of the structure and potential of deep neural networks (DNNs) in learning continuous operators or complex systems from streams of scattered data. In this talk, I will present the deep operator network (DeepONet) to learn various explicit operators, such as integrals and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. I will also present several extensions of DeepONet, such as DeepM&Mnet for multiphysics problems, DeepONet with proper orthogonal decomposition (POD-DeepONet), MIONet for multiple-input operators, and multifidelity DeepONet. More generally, DeepONet can learn multiscale operators spanning across many scales and trained by diverse sources of data simultaneously. I will demonstrate the effectiveness of DeepONet and its extensions to diverse multiphysics and multiscale problems, such as nanoscale heat transport, bubble growth dynamics, high-speed boundary layers, electroconvection, and hypersonics.
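The core DeepONet architecture is simple enough to sketch in a few lines (a hypothetical illustration with random untrained weights, not the speaker's implementation): a branch net encodes the input function u sampled at m fixed sensors, a trunk net encodes a query location y, and the output is their dot product in a shared latent space:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Random untrained weights -- illustrates the architecture only
    return [(rng.standard_normal((a, b)) / np.sqrt(a), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

m, p = 50, 32                     # sensor count, latent dimension
branch = mlp([m, 64, p])          # encodes u(x_1), ..., u(x_m)
trunk = mlp([1, 64, p])           # encodes the query location y

def deeponet(u_sensors, y):
    # G(u)(y) ~ <branch(u), trunk(y)>: dot product over the latent dimension
    b = forward(branch, u_sensors[None, :])   # shape (1, p)
    t = forward(trunk, y[:, None])            # shape (len(y), p)
    return (t * b).sum(axis=1)                # shape (len(y),)

u = np.sin(np.linspace(0, np.pi, m))          # an input function at the sensors
y = np.linspace(0, 1, 10)                     # query points
out = deeponet(u, y)
```

Extensions such as POD-DeepONet and MIONet keep this branch/trunk factorization and change how the encoders are built or how many inputs they accept.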

Host: Paul Atzberger

Feb 17, Anuj Karpatne (VT, 2-3 pm)


Title: Knowledge-guided Machine Learning: Advances in an Emerging Field Combining Scientific Knowledge with Machine Learning

Abstract: This talk will provide an introduction to the rapidly growing field of Knowledge-guided Machine Learning (KGML), which aims to combine scientific knowledge with data in the ML process to produce generalizable and physically consistent solutions even with limited training data. This talk will describe several ways in which scientific knowledge can be combined with machine learning methods, using case studies of ongoing research in various disciplines including aquatic sciences, fluid dynamics, quantum mechanics, and biology. These case studies will illustrate multiple research themes in knowledge-guided machine learning, ranging from knowledge-guided design and learning of neural networks to the construction of hybrid science-data models.

Host: Paul Atzberger

Mar 3, Bogdan Raita


Title: Concentration effects in modern pde

Abstract: The aim of this talk is to review old and new results concerning the interaction between nonlinearity and weak convergence of pde-constrained sequences. This is a ubiquitous theme in the study of nonlinear pde; we will place special emphasis on problems with variational structure. We will review classical results in the study of weak (lower semi)continuity of variational integrals, concerning A-quasiconvexity, compensated compactness, and null Lagrangians. We will conclude with new results pertaining primarily to concentration effects in weak convergence, which we used to answer questions of Coifman–Lions–Meyer–Semmes and De Philippis. We will express our results in the language of generalized Young measures (cf. DiPerna–Majda measures, defect measures). Joint work with A. Guerra, J. Kristensen, and M. Schrecker.


Host: Davit Harutyunyan

Mar 10, Youngsoo Choi


Title: Physics-guided data-driven simulations

Abstract: Computationally expensive physical simulations are a huge bottleneck to advances in science and technology. Fortunately, many data-driven approaches have emerged to accelerate those simulations, thanks to recent advances in machine learning (ML) and artificial intelligence. For example, a well-trained 2D convolutional deep neural network can predict the solution of the complex Richtmyer–Meshkov instability problem with a speed-up of 100,000x [1]. However, traditional black-box ML models do not incorporate existing governing equations, which embed underlying physics, such as conservation of mass, momentum, and energy. Therefore, black-box ML models often violate important physical laws, which greatly concerns physicists, and require big data to compensate for the missing physics information. They also come with other disadvantages, such as a lack of structure preservation, a computationally expensive training phase, non-interpretability, and vulnerability in extrapolation. To resolve these issues, we can bring physics into the data-driven framework. Physics can be incorporated at different stages of data-driven modeling, i.e., the sampling stage and the model-building stage. A physics-informed greedy sampling procedure minimizes the number of training data required for a target accuracy [2]. A physics-guided data-driven model better preserves physical structure and is more robust in extrapolation than traditional black-box ML models. Numerical results from, e.g., hydrodynamics [3,4], particle transport [5], plasma physics, and 3D printing will be shown to demonstrate the performance of these data-driven approaches. Their benefits will also be illustrated in multi-query decision-making applications, such as design optimization [6,7].
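Many projection-based data-driven models of the kind described above share the same compression step, which is worth seeing concretely (a toy sketch with assumed synthetic data, not the speaker's code): collect solution snapshots, extract a proper orthogonal decomposition (POD) basis via the SVD, and then reconstruct or evolve the state in the resulting low-dimensional subspace, onto which the governing equations can be projected to retain physical structure:

```python
import numpy as np

n, k = 200, 40                          # full state size, number of snapshots
t = np.linspace(0, 1, k)
x = np.linspace(0, 1, n)[:, None]
# Toy snapshot matrix: each column is the state at one time
snapshots = np.sin(np.pi * x * t) + 0.5 * np.cos(2 * np.pi * x * t)

# POD basis = leading left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 8                                   # reduced dimension
Phi = U[:, :r]

# Encode/decode a full state; a Galerkin ROM would likewise project the
# governing operators onto span(Phi), preserving their structure
u = snapshots[:, -1]
q = Phi.T @ u                           # r reduced coordinates
u_rec = Phi @ q
err = np.linalg.norm(u - u_rec) / np.linalg.norm(u)
```

For smooth solution families the singular values decay rapidly, which is why a handful of modes can reconstruct the state to within a small relative error.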

Host: Paul Atzberger