Time and Date: Tuesdays 3:30 - 4:30
Unless otherwise noted, all talks will take place in Math Sciences Building 110 at the University of Missouri.
Organized by Tim Duff and Dan Edidin. Contact Tim if you want to be on the mailing list.
Title: Convex programming relaxations for high-dimensional Fokker-Planck equation
Abstract: In this talk, we explore adaptations of semidefinite programming relaxations for solving PDE problems. Our approach transforms a high-dimensional PDE problem into a convex optimization problem, setting it apart from traditional non-convex methods that rely on nonlinear re-parameterizations of the solution. In the context of statistical mechanics, we demonstrate how a mean-field type solution for an interacting particle Fokker-Planck equation can be provably recovered without resorting to non-convex optimization.
Abstract: A frame (x_j) for a Hilbert space H allows for a linear and stable reconstruction of any vector x in H from the linear measurements (<x,x_j>). However, there are many situations where some of the information in the frame coefficients is lost. In applications such as signal processing and electrical engineering, one often uses sensors with a limited effective range, and any measurement above that range is registered as the maximum. Depending on the context, recovering a vector from such measurements is called either declipping or saturation recovery. We will discuss a frame-theoretic approach to this problem, in a similar way to what Balan, Casazza, and Edidin did for phase retrieval. The talk is based on joint work with W. Alharbi, D. Ghoreishi, B. Johnson, and N. Randrianarivony.
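As a toy illustration of the setup (my own sketch, not material from the talk): reconstruction from full frame coefficients is a pseudoinverse, while a limited-range sensor clips large coefficients, and declipping asks when and how the vector can still be recovered. The frame and threshold below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random frame of 7 vectors for R^3 (the rows of F). The frame is
# overcomplete, so reconstruction from the coefficients <x, x_j> is
# linear and stable: x = pinv(F) @ (F @ x).
F = rng.normal(size=(7, 3))
x = np.array([0.5, -1.0, 0.3])

coeffs = F @ x                       # frame measurements <x, x_j>
x_rec = np.linalg.pinv(F) @ coeffs   # least-squares reconstruction
print(np.allclose(x_rec, x))         # True: exact recovery

# A limited-range sensor saturates: any coefficient beyond tau is
# registered as +/- tau. Recovering x from such data is declipping.
tau = 0.5 * np.max(np.abs(coeffs))   # chosen so some saturation occurs
clipped = np.clip(coeffs, -tau, tau)
x_naive = np.linalg.pinv(F) @ clipped  # naive reconstruction is biased
print(np.linalg.norm(x_naive - x))
```

The frame-theoretic question is which frames, and which saturation levels, still determine x uniquely from the clipped data.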
Title: Flatland Vision
Abstract: When is it possible to project two sets of labeled points lying in a pair of projective planes to the same points on a projective line? Here is one answer: such projections exist if and only if the two 2D point sets are themselves images of a common point set in 3D projective space. Furthermore, when the two sets of points are in general position, it is possible to give a complete description of the loci of pairs of projection centers. I will describe the roles of classical invariant theory, Cremona transformations, and geometric computer vision in this description. Based on joint work with Sameer Agarwal, Erin Connelly, Annalisa Crannell, and Rekha Thomas.
Title: A complete error analysis on solving an overdetermined system in computer vision using linear algebra
Abstract: Many problems in computer vision are represented using a parametrized overdetermined system of polynomials which must be solved quickly and efficiently. Classical methods for solving these systems involve specialized solvers based on Groebner basis techniques or utilize randomization in order to create well-constrained systems for numerical techniques. We propose new methods in numerical linear algebra for solving such overdetermined polynomial systems and provide a complete error analysis showing that the numerical approach is stable. Examples will be provided to show the efficacy of the method and how the error in the data affects the error in the solution.
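A linear toy version of the stability question discussed above (illustrative only, not the talk's polynomial solver): for an overdetermined system solved by least squares, the relative error in the solution is controlled by the condition number times the relative error in the data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Overdetermined linear system A x = b with more equations than unknowns.
A = rng.normal(size=(20, 5))
x_true = rng.normal(size=5)
b = A @ x_true

# Perturb the data, as noisy measurements would.
e = 1e-8 * rng.normal(size=20)
x_hat, *_ = np.linalg.lstsq(A, b + e, rcond=None)

# Classical bound for a consistent system: the relative solution error
# is at most cond(A) times the relative data error.
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
bound = np.linalg.cond(A) * np.linalg.norm(e) / np.linalg.norm(b)
print(rel_err, bound)
```

The talk's contribution is an analysis of this flavor for parametrized overdetermined *polynomial* systems, where no such off-the-shelf bound applies.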
Title: The geometry of economic fragility for supply chain shocks
Abstract: The study of fragile economic systems is important for identifying systems that are vulnerable to dramatic collapse. For instance, complex systems like supply chains are at risk of being fragile because they require many parts to work well simultaneously. Even when each individual firm has only a small susceptibility to a shock, the global system may still be at great risk. A recent survey by Matthew Elliott and Ben Golub reviews fragile economic systems from the point of view of networks. In a network, the reliability with which the final product (e.g., a car, computer, or lifesaving medication) is made by a firm is determined by the probabilities of shocks in the system. Reliability thus transitions from zero to a positive probability depending on the chances of a shock; characterizing these phase transitions is an important problem in the theory of economic fragility. In our work, we view these phase transitions through the lens of algebraic geometry by using resultants. As a result, we bring new tools to econometrics for analyzing multi-parameter models, and we fully describe the reliability of many new network models using computational algebraic geometry. Our most significant application is a surprising case study on a mixture of two multi-parameter supply chain models. This is joint work with Jiayi Li (UCLA).
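A hypothetical toy model of such a phase transition (my own simplification in the spirit of this literature, not the talk's model): in a deep supply tree, each firm survives shocks with probability q and needs each of n inputs, each sourceable from any of k independent suppliers. Reliability of the root firm satisfies a simple recursion whose fixed point jumps discontinuously as q crosses a critical threshold.

```python
def reliability(q, n=2, k=2, depth=200):
    """Probability that the root firm of a depth-`depth` supply tree
    delivers its product: each firm survives shocks with probability q
    and needs each of n inputs, available from any of k suppliers."""
    r = q
    for _ in range(depth):
        r = q * (1.0 - (1.0 - r) ** k) ** n
    return r

# Sweeping the survival probability q shows a sharp transition:
# reliability is essentially zero below a critical threshold, then
# jumps to a positive value.
for q in (0.80, 0.90, 0.95, 0.99):
    print(f"q={q:.2f}  reliability={reliability(q):.4f}")
```

Characterizing where such jumps occur as the model's parameters vary is exactly the kind of question the resultant-based approach addresses.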
Title: Characterizing single-cell transcriptomic spatial patterns with Topological Data Analysis
Abstract: To gain their unique biological function, plant cells regulate protein biosynthesis through gene activation and repression along with multiple mRNA mechanisms. The subcellular localization of mRNAs has been reported as a complementary regulatory mechanism of the biology of fungi, yeast, and animal cells. However, studies comprehensively reporting the impact of mRNA localization in plant cells are lacking.
Here, we set out to mathematically model the spatial distribution of sub-cellular cytosolic transcripts across multiple cell types and developmental stages. Through the use of high-resolution spatial transcriptomic technology, we first report the comprehensive and differential mapping of millions of plant transcripts between the nuclear and cytoplasmic compartments of various soybean nodule cell types. We then characterize key mathematical features of these transcriptomic spatial distributions using Topological Data Analysis (TDA). TDA offers a comprehensive pattern-quantifying framework that is robust to variations in cell shape, size, and orientation. TDA thus provides us with a common ground to mathematically compare and contrast intrinsic differences in sub-cellular transcript distributions and patterns across cell types and expressed genes.
Our analyses reveal distinct patterns and spatial distributions of plant transcripts between the nucleus and cytoplasm, varying both between and within genes, as well as across different cell types. We believe this differential distribution is an additional, less understood regulatory mechanism controlling protein translation and localization, cell identity, and cell state, and that it reveals the sub-compartmentalization of transcripts as another post-transcriptional regulatory mechanism.
Abstract: Modern complex systems often involve multiple interacting agents in a shared environment, e.g., transportation systems, power systems, swarm robotics, and human-robot interactions. Controlling these multi-agent systems (MASs) requires characterizing the agents' interactions to account for their interdependent self-interests and coupled constraints among agents, such as collision avoidance and/or limited shared resources. To enable interaction awareness and human-like reasoning processes, game-theoretic control has been explored in the recent development of autonomous systems operating in multi-agent environments. However, fundamental challenges, including solution existence, algorithm convergence, scalability, and incomplete information, still remain to be addressed before game-theoretic approaches can be practical enough to be employed in a broad range of autonomous system applications. Possible solutions to these challenges will be discussed in this talk, using autonomous driving as an application example.
Abstract: Group synchronization is a mathematical framework used in a variety of applications, such as computer vision, to situate a set of objects given their pairwise relative positions and orientations subject to noise. More formally, synchronization estimates a set of group elements given some of their noisy pairwise ratios. In this talk I will present an entirely new view of the task of group synchronization by considering the natural higher-order structures that relate the relative orientations of triples or n-wise sets of objects. Examples of these structures include triples of so-called 'common lines' in cryo-EM and trifocal tensors in multi-view geometry. Thus far, very little mathematical or computational work has explored synchronizing these higher-order measurements. I will introduce the problem of higher-order group synchronization and discuss the formal foundations of synchronizability in this setting. Then I will present a message passing algorithm to solve the problem and compare its performance to classical pairwise synchronization algorithms.
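For readers unfamiliar with the classical pairwise baseline mentioned at the end, here is a minimal sketch (my own toy setup, not from the talk) of spectral synchronization over the rotation group U(1): angles are recovered, up to a global rotation, from the top eigenvector of the matrix of noisy pairwise ratios.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
theta = rng.uniform(0, 2 * np.pi, size=n)  # ground-truth rotations in U(1)

# Noisy pairwise ratios H[i, j] ~ exp(i(theta_i - theta_j)); the
# antisymmetrized noise keeps H Hermitian.
noise = 0.2 * rng.normal(size=(n, n))
H = np.exp(1j * (theta[:, None] - theta[None, :] + noise - noise.T))

# Spectral relaxation: the top eigenvector of H estimates the angles
# up to one global rotation.
w, V = np.linalg.eigh(H)
est = np.angle(V[:, -1])

# Compare to the truth after removing the global rotation.
diff = np.exp(1j * (est - theta))
m = diff.mean()
align = diff * np.conj(m / abs(m))
err = np.abs(np.angle(align)).max()
print(f"max angular error after alignment: {err:.3f} rad")
```

Higher-order synchronization replaces the pairwise entries H[i, j] with measurements relating triples (or n-wise sets) of objects, for which no such off-the-shelf spectral recipe exists.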
Abstract: Diffusion models are an emerging generative modeling technique that achieves state-of-the-art performance in image and video synthesis, scientific simulation, inverse problems, and offline reinforcement learning. Yet, existing statistical analysis of diffusion models often requires restrictive theoretical assumptions or is suboptimal. In this talk, we present two recent works from our group toward closing these gaps between diffusion models and the theoretical limits in standard nonparametric and high-dimensional statistical settings, and discuss some future directions.
1) For subGaussian distributions on ℝ^d with β-Hölder smooth densities (β≤2), we show that the sampling distribution of diffusion models can be minimax optimal under the total variation distance, even if the score is learned from noise perturbed training samples with noise multiplicity equal to one and without density lower bound assumptions.
2) For conditional sampling under i.i.d. priors and noisy linear observations, we show that diffusion models (also known as stochastic localization) can successfully sample from the posterior distribution, provided the signal-to-noise ratio exceeds a computational threshold predicted by prior work on approximate message passing (Barbier et al., 2020). This improves previous thresholds established in the stochastic localization literature, and enhances the sampling accuracy of dominant noisy inverse sampling techniques used in machine learning, albeit in a stylized theoretical model.
(Based on joint work with Kaihong Zhang, Heqi Yin, and Feng Liang, arXiv:2402.15602, and with Han Cui and Zhiyuan Yu, arXiv:2407.10763.)
Title: Elucidating Flow Matching ODE Dynamics with Respect to Data Geometries
Abstract: Diffusion-based generative models have become the standard for image generation. ODE-based samplers and flow matching models improve efficiency over diffusion models by reducing sampling steps through learned vector fields. However, the theoretical foundations of flow matching models remain limited, particularly regarding the convergence of individual sample trajectories at terminal time, a critical property that impacts sample quality and is a key assumption for models such as the consistency model. In this paper, we advance the theory of flow matching models through a comprehensive analysis of sample trajectories, centered on the denoiser that drives the ODE dynamics. We establish the existence, uniqueness, and convergence of ODE trajectories at terminal time, ensuring stable sampling outcomes under minimal assumptions. Our analysis reveals how trajectories evolve from capturing global data features to local structures, providing a geometric characterization of per-sample behavior in flow matching models. We also explain the memorization phenomenon in diffusion-based training through our terminal-time analysis. These findings bridge critical gaps in understanding flow matching models, with practical implications for sampling stability and model design.