PDE & Applied math seminar

Wednesday 10:00-10:50 AM, https://ucr.zoom.us/j/97606227247

Organizers: Weitao Chen / Heyrim Cho / Yat Tin Chow / Qixuan Wang / Jia Guo / Mykhailo Potomkin

Past Organizers: Mark Alber / James Kelliher / Amir Moradifam

In Winter 2023, the PDE & Applied math seminar will be held both via Zoom and in person. Specific information about the format of each talk will be provided in the email announcement and posted below. If you are interested in attending the seminar, please contact Dr. Yat Tin Chow (yattinc@ucr.edu) and Dr. Heyrim Cho (heyrimc@ucr.edu).


Winter 2023 Schedule 

Jan 09  16:00 (Mon)* Zahra Aminzare (University of Iowa) - joint with Applied Math Colloquium 

Jan 11 10:00 (Wed) Organizational meeting

Jan 17 16:00 (Tue)* Peijie Zhou (UCI) - joint with Applied Math Colloquium 

Jan 19 16:00 (Thu)* Maziar Raissi (University of Colorado, Boulder)  - joint with Applied Math Colloquium 

Jan 24 16:00 (Tue)* Gregory Handy (University of Chicago) - joint with Applied Math Colloquium 

Jan 26 16:00 (Thu)* Shuang Liu (UCSD) - joint with Applied Math Colloquium 

Jan 27 16:00 (Fri)* Kelsey Gasior (University of Ottawa) - joint with Applied Math Colloquium 

Feb 1  16:00 (Wed)* Yiwei Wang (UCR) 

Feb 8  10:00 (Wed)  Siting Liu (UCLA) 

Feb 15 10:00 (Wed) Yuyuan “Lance” Ouyang (Clemson University)

Feb 22 10:00 (Wed)  Nathaniel Trask (Sandia National Laboratories)

Mar 1  10:00 (Wed)  Daniele Venturi (UCSC) 

Mar 8  10:00 (Wed) Kookjin Lee (Arizona State University) 

Mar 15 10:00 (Wed) Tingwei Meng (UCLA) 

*unusual time

Upcoming talks:

Mar 01, 2023 10:00-10:50 AM PT (Wed)  
Dr. Daniele Venturi (UCSC) 

Title: Numerical approximation of PDEs on tensor manifolds


Abstract: Recently, there has been a growing interest in approximating nonlinear functions and PDEs on tensor manifolds. The reason is simple: tensors can drastically reduce the computational cost of high-dimensional problems when the solution has a low-rank structure. In this talk, I will review recent developments on rank-adaptive algorithms for temporal integration of PDEs on tensor manifolds. Such algorithms combine functional tensor train (FTT) series expansions, operator splitting time integration, and an appropriate criterion to adaptively add or remove tensor modes from the FTT representation of the PDE solution as time integration proceeds. I will also present a new tensor rank reduction method that leverages coordinate flows. The idea is very simple: given a multivariate function, determine a coordinate transformation so that the function in the new coordinate system has a smaller tensor rank. I will restrict the analysis to linear coordinate transformations, which give rise to a new class of functions that we refer to as tensor ridge functions. Numerical applications are presented and discussed for linear and nonlinear advection equations, and for the Fokker-Planck equation. 
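The tensor ridge idea above can be illustrated with a toy computation (my own sketch, not the speaker's code; the example function is an assumption): f(x, y) = sin(x + y) has separation rank 2 on a tensor-product grid, but rank 1 after the linear change of coordinates u = x + y, v = x - y.

```python
import numpy as np

# Illustrative sketch: a linear coordinate transformation can lower the
# separation (tensor) rank of a function. Here
#   f(x, y) = sin(x + y) = sin(x)cos(y) + cos(x)sin(y)
# has rank 2 in (x, y), but rank 1 in rotated coordinates u = x + y, v = x - y.
n = 200
x = np.linspace(0.0, 2.0 * np.pi, n)
y = np.linspace(0.0, 2.0 * np.pi, n)

# Discretize f on a tensor-product grid; the number of significant singular
# values of the resulting matrix is the numerical separation rank.
F = np.sin(x[:, None] + y[None, :])
rank_xy = int(np.sum(np.linalg.svd(F, compute_uv=False) > 1e-10))

# The same function in the new coordinates: g(u, v) = sin(u), constant in v.
u = np.linspace(0.0, 4.0 * np.pi, n)
v = np.linspace(-2.0 * np.pi, 2.0 * np.pi, n)
G = np.sin(u)[:, None] * np.ones_like(v)[None, :]
rank_uv = int(np.sum(np.linalg.svd(G, compute_uv=False) > 1e-10))

print(rank_xy, rank_uv)  # 2 1
```

A rank-adaptive integrator would exploit exactly this drop: evolving the solution in well-chosen coordinates keeps the FTT representation small.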


Mar 08, 2023 10:00-10:50 AM PT (Wed)  
Dr. Kookjin Lee (Arizona State University) 

Title: Enhanced training algorithms for physics-informed neural networks 


Abstract: In this talk, we present two novel training algorithms for physics-informed neural networks (PINNs). The first tackles a challenge of PINN training caused by the multi-term loss objective: under gradient-based training, the terms may compete, i.e., the gradient of one loss term points in a decreasing direction while another points in an increasing direction. To address this issue, we propose a training algorithm that modifies the gradients so that the update direction is non-increasing for all loss terms. Empirical experiments demonstrate that the enhanced PINNs produce more accurate predictions and, in turn, more accurate extrapolations in time. The second training algorithm extends PINNs to the setting of learning parameterized PDEs. We propose a lightweight low-rank PINN containing only hundreds of model parameters, together with a hypernetwork-based meta-learning algorithm, which allows efficient approximation of solutions of PDEs for varying ranges of PDE input parameters.  
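The abstract does not spell out the gradient modification; one well-known scheme in the same spirit is PCGrad-style "gradient surgery" (Yu et al.), sketched below as an illustration only (the speaker's algorithm may differ): when two loss gradients conflict, project out of each the component along the other, so the combined step is non-increasing for both loss terms to first order.

```python
import numpy as np

def surgery_direction(g1, g2):
    """PCGrad-style projection (illustrative, not necessarily the talk's
    method): if the two loss gradients conflict (negative inner product),
    remove from each the component along the other before summing."""
    g1p, g2p = g1.copy(), g2.copy()
    if g1 @ g2 < 0.0:
        g1p = g1 - (g1 @ g2) / (g2 @ g2) * g2
        g2p = g2 - (g2 @ g1) / (g1 @ g1) * g1
    return g1p + g2p  # combined update direction

rng = np.random.default_rng(0)
g1 = rng.standard_normal(5)
g2 = -g1 + 0.3 * rng.standard_normal(5)  # strongly conflicting gradients
d = surgery_direction(g1, g2)

# Stepping along -d decreases (or leaves unchanged) BOTH loss terms to
# first order, since d has non-negative inner product with each gradient.
print(d @ g1 >= -1e-12, d @ g2 >= -1e-12)
```

For two loss terms this guarantee is exact: after projection, each modified gradient is orthogonal to the other raw gradient, so both inner products are non-negative.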


Mar 15, 2023 10:00-10:50 AM PT (Wed)  
Dr. Tingwei Meng (University of California, Los Angeles)

Title: Overcoming the curse of dimensionality for solving high-dimensional Hamilton-Jacobi partial differential equations using neural networks 

Abstract: Hamilton-Jacobi PDEs and optimal control problems are widely used in many practical problems in control engineering, physics, financial mathematics, and machine learning. For instance, controlling an autonomous system is important in everyday modern life, and it requires a scalable, robust, efficient, and data-driven algorithm for solving optimal control problems. Traditional grid-based numerical methods cannot solve these high-dimensional problems, because they are not scalable and may suffer from the curse of dimensionality. To overcome the curse of dimensionality, we developed several neural network architectures for solving certain classes of high-dimensional Hamilton-Jacobi PDEs. These architectures have solid theoretical guarantees given by the theory of Hamilton-Jacobi PDEs, and they do not require any training process -- the parameters and activation functions are inferred directly from the PDEs. Moreover, by leveraging dedicated efficient hardware designed for neural networks, these methods have the potential for real-time applications in the future. These are joint works with Jerome Darbon, Peter M. Dower, and Gabriel P. Langlois.
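The "no training" claim can be made concrete in a special case (my own illustration; the coefficients a_i, b_i and the Hamiltonian H(p) = p^2/2 below are assumptions, not from the talk). When the initial data is a minimum of affine functions, the Lax-Oleinik representation collapses to a closed form that reads like a one-layer network with a min activation, and its parameters come directly from the PDE:

```python
import numpy as np

# Solve u_t + H(u_x) = 0, u(x, 0) = J(x), with H(p) = p^2/2 and
# J(x) = min_i (a_i x + b_i), a minimum of affine functions.
# The Lax-Oleinik formula then reduces to the closed form
#     u(x, t) = min_i ( a_i x + b_i - t * H(a_i) ),
# a one-layer "network" whose weights are the PDE data -- no training.
a = np.array([-1.0, 0.5, 2.0])   # assumed slopes (illustrative)
b = np.array([0.0, 1.0, -1.0])   # assumed offsets (illustrative)

def H(p):
    return 0.5 * p**2

def u_closed_form(x, t):
    return np.min(a * x + b - t * H(a))

def u_lax_oleinik(x, t):
    # Brute-force minimization of J(y) + (x - y)^2 / (2t) over a fine grid.
    y = np.linspace(-20.0, 20.0, 200001)
    J = np.min(a[:, None] * y[None, :] + b[:, None], axis=0)
    return np.min(J + (x - y) ** 2 / (2.0 * t))

for x, t in [(-1.0, 0.5), (0.3, 1.0), (2.0, 0.25)]:
    assert abs(u_closed_form(x, t) - u_lax_oleinik(x, t)) < 1e-6
print("closed form matches Lax-Oleinik minimization")
```

The agreement follows by exchanging the two minimizations: minimizing a_i y + b_i + (x - y)^2/(2t) over y gives a_i x + b_i - t a_i^2/2 for each i.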

Past talks:




Jan 17, 2023, 4:00-4:50 PM PT (Tue)

Dr. Peijie Zhou (University of California, Irvine)


Title: Bridging Data and Dynamics in Single Cells through Machine Learning


Abstract: The rapid development of single-cell sequencing technologies provides unprecedented resolutions to study the dynamical process of cell-state transitions during development and complex disease. Mathematically, the transitions can be modeled as a (stochastic) dynamical system with a multi-scale structure. In this talk, we will discuss how recent developments in machine learning have allowed us to use dynamical systems techniques to analyze scRNA-seq data. We will introduce the MuTrans algorithm, which uses a low-dimensional dynamical manifold to uncover the underlying attractor basins and transition probabilities in snapshot data. We will also present the scTT (single-cell transition tensor) and spliceJAC algorithms, which use non-equilibrium dynamical systems theory to analyze the stability of attractors within data and identify transition-driving genes in gene expression and splicing processes. Finally, we will discuss our efforts to interpolate non-stationary time-series scRNA-seq data using Wasserstein-Fisher-Rao-metric unbalanced optimal transport and its neural network-based PDE implementations.


Jan 19, 2023, 4:00-4:50 PM PT (Thu)

Dr. Maziar Raissi (University of Colorado Boulder)


Title: Data-Efficient Deep Learning using Physics-Informed Neural Networks


Abstract: A grand challenge with great opportunities is to develop a coherent framework that enables blending conservation laws, physical principles, and/or phenomenological behaviours expressed by differential equations with the vast data sets available in many fields of engineering, science, and technology. At the intersection of probabilistic machine learning, deep learning, and scientific computations, this work is pursuing the overall vision to establish promising new directions for harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data. To materialize this vision, this work is exploring two complementary directions: (1) designing data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time dependent and non-linear differential equations, to extract patterns from high-dimensional data generated from experiments, and (2) designing novel numerical algorithms that can seamlessly blend equations and noisy multi-fidelity data, infer latent quantities of interest (e.g., the solution to a differential equation), and naturally quantify uncertainty in computations.



Jan 24, 2023, 4:00-4:50 PM PT (Tue)

Dr. Gregory Handy (University of Chicago)


Title: Extending mathematical frameworks to investigate stochastic fluctuations in diverse cell types


Abstract: Stochastic fluctuations drive biological processes from particle diffusion to neuronal spike times. The goal of this talk is to use and extend a variety of mathematical frameworks to understand such fluctuations and derive insight into the corresponding applications. We start by considering n diffusing particles that may leave a bounded domain by either ‘escaping’ through an absorbing boundary or being ‘captured’ by traps that must recharge between captures. We prove that the average number of captured particles grows on the order of log n because of this recharge time, which is drastically different than the linear growth observed for instantaneous recharging. We then examine this process in the limit of large n to investigate the celebrated formula of Berg and Purcell with a modeling framework that uses boundary homogenization to link the diffusion equation to boundary conditions described by nonlinear ordinary differential equations. We end by exploring how the brain leverages interneuron diversity and noisy recurrent connections to assist with cortical computations. Specifically, we utilize linear response theory and mean-field approximations to show how interneurons modulate the level of synchrony in visually induced gamma rhythms.



Jan 26, 2023, 4:00-4:50 PM PT (Thu)

Dr. Shuang Liu (UC San Diego)


Title: Computational moving boundary problems


Abstract: Moving boundary (often called “free boundary”) problems are ubiquitous in nature and technology. A computational perspective on moving boundary problems can provide insight into the “invisible” properties of complex dynamical systems, advance the design of novel technologies, and improve the understanding of biological and chemical phenomena. However, the numerical study of moving boundary problems poses challenges, including solving PDEs in irregular domains, handling moving boundaries efficiently and accurately, and achieving computational efficiency. In this talk, I will discuss three specific moving boundary problems, with applications to ecology (population dynamics), plasma physics (ITER tokamak machine design), and cell biology (cell movement). In addition, some techniques of scientific computing will be discussed.



Jan 27, 2023, 4:00-4:50 PM PT (Fri)

Dr. Kelsey Gasior (University of Ottawa, Canada)


Title: Untangling small-molecule interactions driving intracellular phase separation


Abstract: An emerging mechanism for intracellular organization is liquid-liquid phase separation (LLPS). Found in both the nucleus and the cytoplasm, liquid-like droplets condense to create compartments that are thought to localize factors, such as RNAs and proteins, and promote biochemical interactions. Many RNA-binding proteins interact with different RNA species to create droplets necessary for cellular functions, such as polarity and nuclear division. Additionally, the proteins that promote phase separation are frequently coupled to multiple RNA binding domains, and several RNAs can interact with a single protein, leading to a large number of potential multivalent interactions. We present a multiphase, Cahn-Hilliard diffuse interface model to examine the RNA-protein interactions driving LLPS. Using a ‘start simple, build up’ approach to model construction, we explore how the small-molecule interactions underlying protein-RNA dynamics and RNA species competition control observable, droplet-scale phenomena. Numerical simulations reveal that RNA competition for free protein molecules contributes to intra-droplet patterning and the emergence of a heterogeneous droplet field. More in-depth analysis using layered sensitivity analysis techniques, such as Morris Method screening and the Sobol’ method, highlights the complicated relationships these small molecules have with each other and with the results we can measure. Further, our approach is also applicable to other phase-separated systems; our model predicted that protein annuli associated with amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD) were actually part of an intra-droplet shell/core pattern, which was then confirmed experimentally by our biological collaborators.



Feb 01, 2023, 4:00-4:50 PM PT (Wed)
Dr. Yiwei Wang (University of California, Riverside)

Title: Energetic variational discretizations and their applications in physics and machine learning


Abstract: Motivated by non-equilibrium thermodynamics, the framework of the energetic variational approach (EnVarA) provides a paradigm for building thermodynamically consistent variational models for many complicated systems in soft matter physics, material science, biology, and machine learning. In this talk, we'll present a numerical framework for developing structure-preserving variational discretizations for these variational models based on their energetic variational forms. The numerical approach starts with the energy-dissipation law, which describes all the physics and the assumptions in each system and can combine distinct types of spatial discretizations, including Eulerian, Lagrangian, particle, and neural-network-based discretizations. The resulting semi-discrete equation inherits the variational structures from the continuous energy-dissipation law. The numerical procedure guarantees the developed scheme is energy stable and preserves the intrinsic physical constraints, such as the conservation of mass and the maximum principle. We'll discuss several applications of this numerical approach, including variational Lagrangian schemes for phase-field models and generalized diffusions, and particle-based energetic variational inference for machine learning. The talk is mainly based on several joint works with Prof. Chun Liu (IIT) and Prof. Lulu Kang (IIT).
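A minimal illustration of a discretization that inherits the energy-dissipation law (a standard convex-splitting scheme on an assumed double-well energy, not necessarily any of the speaker's methods): treating the convex part of the energy implicitly yields discrete energy decay for any step size.

```python
import numpy as np

# Scalar gradient flow x' = -E'(x) for the double-well energy
#   E(x) = (x^2 - 1)^2 / 4 = (x^4 + 1)/4 - x^2/2   (convex minus concave).
# Convex splitting treats the convex part implicitly, the rest explicitly:
#   (x_{n+1} - x_n)/dt = -x_{n+1}^3 + x_n,
# which is unconditionally energy stable: E(x_{n+1}) <= E(x_n) for any dt.

def E(x):
    return 0.25 * (x**2 - 1.0) ** 2

def step(xn, dt):
    # Solve the cubic dt*x^3 + x - (1 + dt)*xn = 0; the left side is
    # strictly increasing in x, so there is a unique real root.
    r = np.roots([dt, 0.0, 1.0, -(1.0 + dt) * xn])
    r = r[np.abs(r.imag) < 1e-8].real
    return float(r[0])

dt, x = 5.0, 3.0                 # deliberately large step size
energies = [E(x)]
for _ in range(30):
    x = step(x, dt)
    energies.append(E(x))

print(all(e1 <= e0 + 1e-12 for e0, e1 in zip(energies, energies[1:])))  # True
print(round(x, 3))               # settles into the well at x = 1
```

The semi-discrete scheme dissipates the same energy as the continuous flow, which is the structure-preservation property the talk's framework generalizes to PDE systems and other spatial discretizations.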



Feb 8, 2023 10:00-10:50 AM PT (Wed)  
Dr. Siting Liu (University of California, Los Angeles)

Title: An inverse problem in mean field game from partial boundary measurement


Abstract: In this work, we consider a novel inverse problem in mean-field games (MFG). We aim to recover the MFG model parameters that govern the underlying interactions among the population based on a limited set of noisy partial observations of the population dynamics under the limited aperture. Due to its severe ill-posedness, obtaining a good quality reconstruction is very difficult. Nonetheless, it is vital to recover the model parameters stably and efficiently in order to uncover the underlying causes for population dynamics for practical needs. Our work focuses on the simultaneous recovery of running cost and interaction energy in the MFG equations from a finite number of boundary measurements of population profile and boundary movement. To achieve this goal, we formalize the inverse problem as a constrained optimization problem of a least squares residual functional under suitable norms with L1 regularization. We then develop a fast and robust operator splitting algorithm to solve the optimization using techniques including harmonic extensions, three-operator splitting scheme, and primal-dual hybrid gradient method. Numerical experiments illustrate the effectiveness and robustness of the algorithm. This is a joint work with Yat Tin Chow (UCR), Samy W. Fung (Colorado School of Mines), Levon Nurbekyan (UCLA), and Stanley J. Osher (UCLA).
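The primal-dual hybrid gradient (PDHG) step mentioned above can be sketched on a generic least-squares-plus-L1 model problem (an illustrative stand-in with made-up data; the talk's actual residual functional, constraints, and operators are different):

```python
import numpy as np

# Toy PDHG (Chambolle-Pock) iteration for
#   min_m  0.5 * ||A m - b||^2 + lam * ||m||_1,
# a simple stand-in for an L1-regularized least-squares residual functional.
rng = rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam = 0.5

Lnorm = np.linalg.norm(A, 2)     # operator norm of A
tau = sigma = 0.9 / Lnorm        # step sizes with tau * sigma * ||A||^2 < 1

m = np.zeros(10)
m_bar = m.copy()
y = np.zeros(30)
for _ in range(5000):
    # Dual ascent: prox of the conjugate of 0.5*||. - b||^2.
    y = (y + sigma * (A @ m_bar - b)) / (1.0 + sigma)
    # Primal descent: soft-thresholding, the prox of lam*||.||_1.
    m_new = m - tau * (A.T @ y)
    m_new = np.sign(m_new) * np.maximum(np.abs(m_new) - tau * lam, 0.0)
    m_bar = 2.0 * m_new - m      # over-relaxation
    m = m_new

# First-order optimality: -grad of the smooth part must lie in the
# subdifferential of lam*||.||_1, i.e. |A^T (A m - b)| <= lam componentwise.
g = A.T @ (A @ m - b)
ok = bool(np.all(np.abs(g) <= lam + 1e-3))
print(ok)
```

The MFG inverse problem replaces A with the (nonlinear) forward map from model parameters to boundary measurements, which is where the harmonic extensions and the three-operator splitting in the talk come in.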


Feb 15, 2023 10:00-10:50 AM PT (Wed)  
Dr. Yuyuan Ouyang (Clemson University)

Title: Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization


Abstract: One fundamental problem in constrained decentralized multi-agent optimization is the trade-off between gradient/sampling complexity and communication complexity. In this work we propose new algorithms whose gradient and sampling complexities are graph topology invariant, while their communication complexities remain optimal. All the aforementioned gradient and sampling complexities match the lower complexity bounds for centralized convex smooth optimization and are independent of the network structure. To the best of our knowledge, these gradient and sampling complexities have not been obtained before in the literature of decentralized optimization over a constrained feasible set.


Feb 22, 2023 10:00-10:50 AM PT (Wed)  
Dr. Nathaniel Trask (Sandia National Laboratories)


Title: A data-driven exterior calculus for structure-preserving scientific machine learning


Abstract: As deep learning continues to provide tools for both accelerating the solution of and integrating data into models of physical systems, it has become increasingly important to develop data-driven models which incorporate the types of robustness guarantees taken for granted in traditional modeling and simulation. Numerical stability, accuracy, and preservation of physical structure (e.g. conservation, gauge symmetry, and other invariances) are crucial for deploying machine-learned models in high-consequence engineering settings. We present a framework for unsupervised discovery of Whitney forms which parameterize physically relevant control volumes, their geometric boundaries, and associated integral balance laws. Working with generalized fluxes allows enforcement of physics by construction rather than by penalty, providing a simple mathematical analysis and geometric description faithful to underlying invariances. The framework provides a data-driven de Rham complex and finite element exterior calculus ideal for treating multiphysics problems.