January 24th
Title: Synthetic wavefront generation for aero-induced turbulence using boundary layer data
Abstract
Aero-induced turbulence leads to wavefront aberrations which degrade performance in imaging applications. Adaptive optics (AO) provides a method to compensate for wavefront aberrations, potentially mitigating this issue. However, development and testing of AO systems requires wavefront aberration data, which is difficult and expensive to obtain. Further, because of the complex statistics exhibited by aero-induced turbulence, existing simulation methods cannot generate such data at lower cost. We thus introduce ReVAR (Re-Whitened Vector Auto-Regression), a novel statistics-based algorithm for aero-induced wavefront generation. ReVAR trains on an input time-series of spatially and temporally correlated images and then generates synthetic data that captures the statistics present in the input data. The first training step of ReVAR distills the input images into a set of prediction weights and residuals. A further step we call re-whitening uses a spatial principal component analysis (PCA) to convert these residuals to white noise. After training, ReVAR uses a white noise generator and inverts the previous transformation to generate synthetic time-series of data. This algorithm is computationally efficient, able to generate arbitrarily long synthetic time-series, and produces high-quality results when trained on measured turbulent boundary layer (TBL) data. Specifically, the temporal power spectral density (TPSD) of data generated using ReVAR closely matches the TPSD of the measured data.
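The train-then-generate pipeline described in the abstract can be sketched as follows. This is a minimal illustration assuming a first-order vector autoregression; the function names and model order are illustrative, not ReVAR's actual specification:

```python
import numpy as np

def fit_revar(X):
    """Train on X (time x pixels): fit VAR(1) prediction weights, then use a
    spatial PCA to re-whiten the prediction residuals. Illustrative sketch."""
    past, future = X[:-1], X[1:]
    W, *_ = np.linalg.lstsq(past, future, rcond=None)   # prediction weights
    resid = future - past @ W                           # prediction residuals
    evals, evecs = np.linalg.eigh(np.cov(resid, rowvar=False))
    scale = np.sqrt(np.maximum(evals, 1e-12))
    Z = (resid @ evecs) / scale                         # re-whitened residuals
    L = evecs * scale                                   # inverse (coloring) map
    return W, L, Z

def generate(W, L, x0, n_steps, rng):
    """Drive the fitted VAR with fresh white noise, inverting the whitening."""
    out = [x0]
    for _ in range(n_steps):
        eps = L @ rng.standard_normal(L.shape[1])       # re-colored white noise
        out.append(out[-1] @ W + eps)
    return np.array(out)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8)).cumsum(axis=0) * 0.1  # toy correlated series
W, L, Z = fit_revar(X)
Y = generate(W, L, X[-1], 200, rng)                     # arbitrarily long output
```

By construction the whitened residuals `Z` have identity covariance, and the generator can be run for any number of steps, which is the source of the "arbitrarily long synthetic time-series" property.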
January 31st
Title: Point cloud deep generative model with self-attention
Abstract
Generative models, such as flow-based and diffusion models, have proven effective for learning a wide range of data, such as images, graphs, and natural language. In this talk, I introduce a flow-matching model enhanced with self-attention for point cloud generation. This model, based on neural ODEs, is straightforward to train (its loss function can be computed without integrating the ODE) and enables accurate generation using any state-of-the-art ODE solver. Point cloud datasets pose two main challenges: they are invariant under permutations of their points and may vary in size. Self-attention addresses these issues naturally, being both permutation-equivariant and able to handle inputs of arbitrary length. Furthermore, by adjusting the ODE flow control, the trained model can perform conditional sampling and efficiently solve optimization problems over point cloud datasets.
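As a minimal illustration of why self-attention suits point clouds, the sketch below (plain NumPy, single head, randomly drawn illustrative weight matrices) checks the permutation-equivariance property directly:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a point cloud X (n_points x d).
    Permutation-equivariant: permuting the rows of X permutes the output rows."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)                 # row-wise softmax
    return A @ V

rng = np.random.default_rng(0)
d = 3
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
X = rng.standard_normal((5, d))                       # a 5-point cloud
perm = rng.permutation(5)
Y = self_attention(X, Wq, Wk, Wv)
Y_perm = self_attention(X[perm], Wq, Wk, Wv)
equivariant = np.allclose(Y[perm], Y_perm)
```

The same function also accepts clouds of any size `n_points` without changing the weights, which is the second property the abstract highlights.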
February 7th
Title: On sampling methods for recovering a clamped cavity in a thin plate
Abstract
I will discuss the inverse biharmonic clamped scattering problem of recovering an unknown clamped cavity embedded in a thin elastic plate. In this talk, we will explain how to extend the linear sampling method (LSM) to inverse shape problems for biharmonic scattering with multistatic far-field measurements. This method is a computationally fast and rigorous way of constructing an imaging test function to recover the scatterer. We will show that the benefit of using far-field measurements comes from factoring the biharmonic operator, so that the displacement can be modeled by a Helmholtz equation and an anti-Helmholtz equation with coupled boundary conditions. Numerical examples will be given to show the effectiveness of the method.
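The factorization referred to here is the classical splitting of the biharmonic plate operator (written with wavenumber k; notation illustrative):

```latex
\Delta^2 u - k^4 u \;=\; (\Delta + k^2)(\Delta - k^2)\,u \;=\; 0,
\qquad u = u_H + u_M,
```
```latex
(\Delta + k^2)\,u_H = 0 \quad\text{(Helmholtz)},
\qquad
(\Delta - k^2)\,u_M = 0 \quad\text{(anti-Helmholtz)},
```

so the plate displacement decomposes into a propagating Helmholtz part and an evanescent anti-Helmholtz part, coupled only through the clamped boundary conditions on the cavity.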
February 14th
Title: Can Deep Networks Overcome the Curse of Dimensionality? A Complexity Perspective
Abstract
Deep artificial neural networks (DNNs) have achieved remarkable success across various scientific and engineering domains. A fundamental question that arises is whether DNNs can overcome the curse of dimensionality. In this talk, I will discuss approximation and generalization analyses from our research that reveal how the complexity of DNNs depends on dimensionality. Through three key examples, I will illustrate that the complexity of DNNs scales exponentially with the intrinsic dimension rather than the ambient space dimension. First, I will explore how DNNs learn low-dimensional manifold structures from high-dimensional data and analyze their complexity. Second, I will examine neural operator learning for dynamical systems and its associated complexity. Third, I will discuss in-context learning with transformers and its complexity characterization. These results provide insights into the power and limitations of deep networks in high-dimensional settings.
February 21st
Title: Agent-based, vertex-based, and continuum modeling of cell behavior in biological patterns
Abstract
Many natural and social phenomena involve individual agents coming together to create group dynamics, whether the agents are drivers in a traffic jam, cells in a developing tissue, or locusts in a swarm. Here I will focus on two examples of such emergent behavior in biology, specifically cell interactions during pattern formation in zebrafish skin and gametophyte development in ferns. Different modeling approaches provide complementary insights into these systems and face different challenges. For example, vertex-based models describe cell shape, while more efficient agent-based models treat cells as particles. Continuum models, which track the evolution of cell densities, are more amenable to analysis, but it is often difficult to relate their few parameters to specific cell interactions. In this talk, I will overview our models of cell behavior in biological patterns and discuss our ongoing work on quantitatively relating different types of models using topological data analysis and data-driven techniques. Stepping out more broadly, I’ll also discuss some of the choices that I made in putting together my slides and presentation story when combining multiple research projects.
February 28th
March 7th
Title: Fast Iterative Solver for Neural Network Method: 1D General Elliptic PDEs
Abstract
In this talk, we study the damped block Newton method (dBN) for general 1D elliptic PDEs. Whereas the finite element method produces a mass matrix whose condition number is O(1), the neural network method produces a mass matrix M that is even more ill-conditioned than the coefficient matrix A. Theoretically, we prove that the condition number of M is O(n^4). Further, we provide an algorithm that applies M^{-1} in O(n) operations; M^{-1} is used to solve for the linear coefficients in both problem types. For the nonlinear coefficients, we use a Newton-type method. Algorithms for approximating and inverting the Hessian will be discussed, and numerical results will be reported.
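The conditioning gap between the two methods can be checked numerically in a toy setting. Below, the neural network mass matrix is stood in for by the Gram matrix of shallow ReLU features with uniformly spaced breakpoints, assembled by quadrature; this is an illustrative proxy, not the talk's exact construction:

```python
import numpy as np

def fem_mass(n):
    """1D P1 finite element mass matrix on a uniform mesh of [0, 1]."""
    h = 1.0 / (n + 1)
    return (np.diag(np.full(n, 4 * h / 6))
            + np.diag(np.full(n - 1, h / 6), 1)
            + np.diag(np.full(n - 1, h / 6), -1))

def relu_mass(n, quad=4000):
    """Gram ("mass") matrix of shallow ReLU features max(x - t_i, 0) with
    uniformly spaced breakpoints t_i, assembled by quadrature (illustrative)."""
    t = np.linspace(0, 1, n, endpoint=False)
    x = np.linspace(0, 1, quad)
    Phi = np.maximum(x[None, :] - t[:, None], 0.0)    # feature values on grid
    return Phi @ Phi.T / quad

fem_conds = [np.linalg.cond(fem_mass(n)) for n in (16, 32, 64)]
relu_conds = [np.linalg.cond(relu_mass(n)) for n in (16, 32, 64)]
# fem_conds stay bounded (below 3); relu_conds blow up rapidly as n grows
```

The FEM condition numbers stay O(1) as n doubles, while the ReLU Gram matrix degrades by orders of magnitude, consistent with a polynomial-in-n growth of the condition number.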
March 14th
Title: InVAErt networks: A data-driven framework for model synthesis and identifiability analysis
Abstract
Applications of generative modeling and deep learning in physics-based systems have traditionally focused on building emulators, i.e., computationally inexpensive approximations of the input-to-output map. However, the remarkable flexibility of data-driven architectures suggests extending their application to other aspects of system analysis, such as model inversion and identifiability. We introduce InVAErt (pronounced "invert") networks, a new framework for the data-driven analysis and synthesis of parametric physical systems. This framework comprises an encoder-decoder pair representing the forward and inverse solution maps, a normalizing flow estimating the probabilistic distribution of system outputs, and a variational encoder which learns a compact latent representation that restores bijectivity between inputs and outputs. We validate our approach through extensive numerical experiments, including simple linear, nonlinear, and periodic maps, dynamical systems, and spatio-temporal PDEs. We finally discuss an application in amortized physiologic inversion for a stiff lumped parameter circulation model with 23 inputs and 15 outputs.
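The "restores bijectivity" idea can be seen in a deliberately tiny example (this toy is not the InVAErt architecture itself): a non-identifiable forward map discards information, and a latent variable that stores exactly the discarded information makes the pair (output, latent) invertible again.

```python
import numpy as np

# Toy non-identifiable forward map: y = v**2 loses the sign of v,
# so the output alone cannot identify the input.
def forward(v):
    return v ** 2

# A latent variable w records exactly what the output discards,
# restoring a bijection between v and the pair (y, w).
def encode(v):
    return forward(v), np.sign(v)      # (system output, latent variable)

def decode(y, w):
    return w * np.sqrt(y)              # inverse map using the latent

v = np.linspace(-2, 2, 9)
y, w = encode(v)
v_rec = decode(y, w)                   # recovers v exactly
```

In InVAErt networks the analogous latent representation is learned by a variational encoder rather than written down by hand.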
March 28th
April 4th
Title: Fast Direct Solvers for Shallow ReLU Neural Networks
Abstract
Neural networks provide an effective tool for approximating some challenging functions. However, fast and accurate solvers for the relevant dense linear systems have rarely been studied. This work gives a comprehensive characterization of the ill-conditioning of some dense linear systems arising from shallow neural network least-squares approximations. It shows that the systems are typically very ill-conditioned, and the conditioning gets even worse for challenging functions such as those with jumps. This makes the systems hard to solve with typical iterative solvers. On the other hand, we further show the existence of intrinsic rank structures within those matrices, which make it feasible to obtain robust direct solutions with nearly linear complexity.
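Both phenomena, severe ill-conditioning and hidden low-rank structure, are easy to observe in a toy version of such a system. The sketch below uses the Gram matrix of uniform-breakpoint ReLU features as an illustrative stand-in for the dense least-squares systems in the talk:

```python
import numpy as np

# Gram matrix of shallow ReLU features phi_i(x) = max(x - t_i, 0) on [0, 1],
# with uniformly spaced breakpoints t_i, assembled by simple quadrature.
n = 64
t = np.linspace(0, 1, n, endpoint=False)
x = np.linspace(0, 1, 4000)
Phi = np.maximum(x[None, :] - t[:, None], 0.0)
M = Phi @ Phi.T / x.size

cond = np.linalg.cond(M)                      # severely ill-conditioned

# Off-diagonal blocks are numerically low-rank: for t_i < t_j the entry is
# the integral of (x - t_i)(x - t_j) over [t_j, 1], which separates into a
# short sum of products of a function of t_i and a function of t_j.
block = M[: n // 2, n // 2:]
s = np.linalg.svd(block, compute_uv=False)
num_rank = int(np.sum(s > 1e-10 * s[0]))      # tiny compared to n // 2
```

It is exactly this kind of separability in the off-diagonal blocks that rank-structured direct solvers (e.g., hierarchical matrix factorizations) exploit to reach nearly linear complexity.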
April 11th
Title: Asymptotic Linear Convergence of ADMM for Isotropic TV norm Compressed Sensing
Abstract
The isotropic total variation (TV) norm was introduced for denoising images and has since been used in problems ranging from image deconvolution to compressed sensing. We will focus on the TV compressed sensing problem in multiple dimensions, as it pertains to MRI, and analyze the asymptotic convergence rate of the first-order method ADMM. An explicit local linear convergence rate is proven by analyzing the equivalent Douglas-Rachford splitting on a dual problem. Though the proven rate is not sharp, it is close to the rates observed in numerical tests. Numerical verification on 3D problems and real MRI data will be shown, and insights on how to choose parameters for generalized versions of ADMM will be provided.
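The flavor of the ADMM iteration being analyzed can be conveyed with a small 1D anisotropic TV denoising problem (a simplified stand-in for the multi-dimensional isotropic TV compressed-sensing setting of the talk; parameter values are illustrative):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ADMM for   min_x 0.5*||x - b||^2 + lam*||D x||_1   with D a difference operator.
n, lam, rho = 50, 0.5, 1.0
rng = np.random.default_rng(0)
b = np.repeat([0.0, 1.0, -0.5], [20, 15, 15]) + 0.05 * rng.standard_normal(n)
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)        # forward differences
A = np.eye(n) + rho * D.T @ D                       # x-update system matrix

x, z, u = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
resids = []
for _ in range(1000):
    x = np.linalg.solve(A, b + rho * D.T @ (z - u))  # quadratic x-update
    z = soft(D @ x + u, lam / rho)                   # shrinkage z-update
    u += D @ x - z                                   # dual update
    resids.append(np.linalg.norm(D @ x - z))
# The primal residual decays to zero; asymptotically the decay is linear,
# which is the regime the talk's convergence-rate analysis addresses.
```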
April 18th
Title: Inferring the effective number of true signals using high-dimensional summary-level data
Abstract
Linkage disequilibrium score regression (LDSC) has emerged as an essential tool for genetic and genomic analyses of complex traits, utilizing high-dimensional data derived from genome-wide association studies (GWAS). LDSC computes linkage disequilibrium (LD) scores using an external reference panel and integrates the LD scores with only summary-level data from the original GWAS. In this talk, we will introduce the modeling framework and explain how it is used to compute the effective number of independent signal SNPs.
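The core LDSC regression can be illustrated on synthetic data. Under the standard LDSC model, the expected GWAS chi-square statistic of SNP j is E[chi2_j] = N h² l_j / M + 1, where l_j is its LD score, N the sample size, M the number of SNPs, and h² the heritability; regressing chi-square statistics on LD scores recovers h². This toy uses Gaussian noise in place of the true chi-square sampling distribution and is not the LDSC software itself:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, h2_true = 5000, 50_000, 0.4
l = rng.uniform(1, 100, size=M)                  # toy LD scores (reference panel)
# Summary statistics simulated from the LDSC model, E[chi2] = N*h2*l/M + 1:
chi2 = 1 + N * h2_true * l / M + 0.5 * rng.standard_normal(M)

# Weighted least squares reduces to OLS here; regress chi2 on N*l/M.
X = np.column_stack([np.ones(M), N * l / M])     # intercept + LD score term
coef, *_ = np.linalg.lstsq(X, chi2, rcond=None)
h2_est = coef[1]                                 # slope estimates heritability
# coef[0] estimates the intercept, which should be near 1 absent confounding
```

Quantities such as the effective number of independent signal SNPs are then derived within this same summary-statistics framework.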
April 25th
Title: A Simple GPU Implementation for Solving 3D Poisson Type Equations in MATLAB
Abstract
I will first give a step-by-step demonstration of how to use MATLAB to do GPU computations for simple tasks like matrix-vector multiplications on our math servers, for which no prior knowledge of MATLAB or GPU computing is needed. Then I will demonstrate how it can be used for inverting the discrete Laplacian on Cartesian grids by a simple direct Poisson solver with O(N^{1+1/d}) complexity in d dimensions. Such an implementation is simple but extremely fast in both MATLAB and Python on a modern GPU: on one Nvidia A100 80G card, it takes only 0.8 seconds to invert the 3D discrete Laplacian with one billion degrees of freedom in double precision in MATLAB. It can be used with high-order finite element and finite difference methods for solving various problems involving the Laplacian on Nvidia GPUs, including phase-field modeling, image processing, the nonlinear Schrödinger equation, etc.
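The direct solver in question is based on diagonalizing the 1D discrete Laplacian by sine transforms. A 2D NumPy sketch is below (dense matrix products rather than FFTs, so the cost of the two transforms is O(N^{3/2}) for N grid points, matching the quoted O(N^{1+1/d}) for d = 2; the talk's MATLAB GPU version is the same idea):

```python
import numpy as np

def poisson_solve_2d(F, h):
    """Solve -Laplacian(u) = f on a uniform grid of (0,1)^2 with homogeneous
    Dirichlet BCs, by diagonalizing the 1D second-order difference operator."""
    n = F.shape[0]
    k = np.arange(1, n + 1)
    lam = (2 - 2 * np.cos(k * np.pi / (n + 1))) / h**2   # 1D eigenvalues
    # Orthogonal, symmetric discrete sine transform matrix (1D eigenvectors):
    S = np.sqrt(2 / (n + 1)) * np.sin(np.outer(k, k) * np.pi / (n + 1))
    G = S @ F @ S                                        # transform the rhs
    U_hat = G / (lam[:, None] + lam[None, :])            # divide by eigenvalues
    return S @ U_hat @ S                                 # transform back

# Verify against the manufactured solution u = sin(pi x) sin(pi y):
n = 63; h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
U_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
F = 2 * np.pi**2 * U_exact                               # f = -Laplacian(u)
U = poisson_solve_2d(F, h)
err = np.max(np.abs(U - U_exact))                        # O(h^2) accuracy
```

On a GPU, the dense products `S @ F @ S` map directly onto batched matrix multiplication, which is why this solver is so fast in practice.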