Past Talks:
Oct 9th, 2024, 10:00 AM - 10:50 AM PT (zoom)
Lara Kassab (University of California, Los Angeles)
Title: Stochastic Iterative Methods for Online Rank Aggregation from Pairwise Comparisons
Abstract: In this talk, we consider large-scale ranking problems where one is given a set of pairwise comparisons and seeks the underlying ranking that explains them. We show that stochastic gradient descent approaches, particularly the Kaczmarz method, can be leveraged to converge to a solution that reveals the underlying ranking while requiring only low-memory operations. We introduce several variants of this approach that trade off speed against convergence when the pairwise comparisons are noisy. We prove almost-sure convergence results and study several regimes, including those with full, partial, and noisy observations. Our empirical results give insight into the number of observations required as well as how much noise in those measurements can be tolerated.
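To make the setting concrete, here is a minimal sketch of a randomized Kaczmarz update for score recovery, assuming each comparison (i, j, y) is read as the linear equation s[i] - s[j] = y. It is an illustrative toy under that linear model, not the speaker's exact algorithm or its noisy-regime variants.

```python
import numpy as np

def kaczmarz_ranking(comparisons, n_items, n_iters=10000, seed=0):
    """Randomized Kaczmarz sketch for recovering scores from pairwise
    comparisons: each (i, j, y) encodes the equation s[i] - s[j] = y."""
    rng = np.random.default_rng(seed)
    s = np.zeros(n_items)
    for _ in range(n_iters):
        i, j, y = comparisons[rng.integers(len(comparisons))]
        # Project s onto the hyperplane {s[i] - s[j] = y}; the row
        # (e_i - e_j) has squared norm 2, hence the residual / 2 step.
        r = (y - (s[i] - s[j])) / 2.0
        s[i] += r
        s[j] -= r
    return s  # scores are defined up to a shift; rank via np.argsort(-s)
```

Each iteration touches only two coordinates, which is the low-memory property alluded to in the abstract.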
Oct 16th, 2024, 10:00 AM - 10:50 AM PT (zoom)
Jeremy Wu (University of California, Los Angeles)
Title: Mean Field Limit for Congestion Dynamics in One Dimension
Abstract: In this talk, I will present recent joint work with Inwon Kim and Antoine Mellet in which we derive a model for congested transport (a PDE at a macroscopic scale) from particle dynamics (a system of ODEs at the microscopic scale). Such PDEs appear very naturally in the description of crowd motion, tumour growth, and general aggregation phenomena. We begin with a system where the particle trajectories evolve according to a gradient flow constrained to some finite distance of separation from each other. This constraint leads to a Lagrange multiplier which, in the mean field limit (infinite number of particles), generates a pressure variable to enforce the hard-congestion constraint. Our results are confined to one spatial dimension wherein we rely on both the Eulerian and Lagrangian perspectives for the continuum limit.
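For intuition about the discrete constraint, here is a minimal sketch, assuming a toy explicit-in-time scheme: after each unconstrained gradient step, the configuration is projected back onto the set {x[k+1] - x[k] >= d}, which in one dimension reduces to isotonic regression via pool-adjacent-violators. This only illustrates the hard-congestion constraint; the talk analyzes the exact constrained gradient flow and its Lagrange multiplier.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    vals, lens = [], []
    for v in y:
        vals.append(float(v)); lens.append(1)
        # merge adjacent blocks while monotonicity is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            n = lens[-2] + lens[-1]
            vals[-2] = (lens[-2] * vals[-2] + lens[-1] * vals[-1]) / n
            lens[-2] = n
            vals.pop(); lens.pop()
    return np.repeat(vals, lens)

def congested_step(x, V_prime, dt=1e-2, d=0.1):
    """Explicit gradient step, then Euclidean projection onto
    {x[k+1] - x[k] >= d}: the substitution y[k] = x[k] - k*d turns the
    separation constraint into monotonicity, handled by PAVA."""
    x = x - dt * V_prime(x)            # unconstrained gradient step
    k = np.arange(len(x))
    return pava(x - k * d) + k * d     # project back onto the constraint set
```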
Oct 23rd, 2024, 10:00 AM - 10:50 AM PT (zoom)
Junyuan Lin (Loyola Marymount University)
Title: Solving Graph Laplacian via Multilevel Sparsifiers
Abstract: We present effective preconditioners for solving Laplacians of general weighted graphs. While spectral sparsifiers (SSs) offer optimal computational complexity, they are difficult to implement in practice. On the other hand, multigrid methods are efficient but lack strong theoretical foundations. To bridge this gap, we combine ideas from both approaches and propose preconditioners with practical usability and theoretical guarantees. By expanding the original graph into a multilevel structure, we create an equivalent low-diameter graph, a property useful for SSs, while eliminating negative edge weights. This results in a positively weighted expanded graph preconditioner (PEGP) that simplifies SS construction. We introduce the multilevel graph sparsifier preconditioner (MGSP), a deterministic SS that is easy to construct. Preliminary numerical experiments confirm the effectiveness of PEGP and MGSP in practical applications.
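For orientation, the sketch below shows the generic preconditioned conjugate gradient loop for a graph Laplacian system, with plain Jacobi as a stand-in preconditioner; the talk's PEGP/MGSP constructions would be substituted for `M_inv`.

```python
import numpy as np

def graph_laplacian(edges, n):
    """Dense weighted graph Laplacian L = D - W from (i, j, w) triples."""
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

def pcg(L, b, M_inv, tol=1e-8, max_iter=500):
    """Preconditioned CG for L x = b (b must sum to zero on each
    connected component, since L is singular)."""
    x = np.zeros_like(b)
    r = b.copy()
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Lp = L @ p
        alpha = rz / (p @ Lp)
        x += alpha * p
        r -= alpha * Lp
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Jacobi stand-in preconditioner: divide by the vertex degrees.
# L = graph_laplacian(edges, n); x = pcg(L, b, lambda r: r / np.diag(L))
```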
Oct 30th, 2024, 10:00 AM - 10:50 AM PT (zoom)
Fuqun Han/Minxin Zhang (University of California, Los Angeles)
Title: A Tensor Train Based Method for Zeroth-Order Global Optimization
Abstract: In this work, we address the zeroth-order global optimization of a nonconvex function defined on a bounded domain, assuming the existence of a unique global minimizer while allowing for multiple local minimizers. We introduce a Gibbs measure, parameterized by a scalar, that approximates the Dirac measure centered at the global minimizer, and prove that the expected value of a random variable under this parameterized Gibbs measure converges to the global minimizer, with the corresponding convergence rate established. We then propose a tensor train (TT) based method for efficiently estimating the global minimizer and analyze its numerical error. Additionally, we develop an inexact proximal point method, TT-IPP, which leverages the TT approximation to compute the proximal operator inexactly at each iteration, with an adaptively decreasing parameter. The global convergence of TT-IPP is established, and its effectiveness is demonstrated through various numerical experiments, which show the method's promise in addressing high-dimensional global optimization problems for nonconvex functions.
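The first ingredient can be illustrated in one dimension without any tensor machinery: the mean of the Gibbs measure proportional to exp(-f(x)/sigma) approaches the global minimizer as sigma tends to 0. A quadrature sketch follows; the TT approximation in the talk is what makes the analogous computation feasible in high dimension, and the example function below is our own.

```python
import numpy as np

def gibbs_mean(f, a, b, sigma, n=2001):
    """Mean of the Gibbs measure rho_sigma(x) ~ exp(-f(x)/sigma) on
    [a, b] via 1D quadrature; as sigma -> 0 it approaches the unique
    global minimizer of f."""
    x = np.linspace(a, b, n)
    fx = f(x)
    w = np.exp(-(fx - fx.min()) / sigma)   # shift avoids underflow/overflow
    return np.sum(x * w) / np.sum(w)

# Two-well example: global minimizer near x = -2.3, local one near x = 2.2.
f = lambda x: 0.05 * x**4 - 0.5 * x**2 + 0.1 * x
print(gibbs_mean(f, -4.0, 4.0, sigma=0.05))
```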
Nov 6th, 2024, 10:00 AM - 10:50 AM PT (zoom)
Sarah Tymochko (University of California, Los Angeles)
Title: Applications of Topological Data Analysis to Resource Coverage
Abstract: Ideally, all public resources (e.g. parks, grocery stores, hospitals, etc.) should be distributed in a way that is fair and equitable to everyone. However, this is not always the case. Quantifying how much (or little) access individuals have to certain resources is a complex problem. Previous work has shown that tools from topological data analysis (TDA) can be useful in this setting. TDA is a field with tools to quantify the shape of data in a manner that is concise and robust using concepts from algebraic topology. Hickok et al. developed an approach to determine "holes" in resource coverage based on geographic locations and travel times. Some resources may necessitate incorporating a notion of quality. As a case study, we look at public parks, which are heterogeneous in many ways. Having access to a park that is hundreds of acres with basketball courts, baseball diamonds, and an aquarium is inherently different from having access to a small patch of grass with an overgrown tennis court. Here we present an exploration of the access to public parks in Chicago using persistent homology, a tool from TDA. Our goal is to identify not only who lacks access to parks, but also who lacks access to quality parks.
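As a schematic of how quality might enter a filtration (a hypothetical weighting of our own, not the construction used in the talk), one could inflate pairwise distances involving low-quality parks before computing persistent homology, e.g. with the ripser package:

```python
import numpy as np
from ripser import ripser  # external package for persistent homology

def quality_weighted_diagrams(points, quality, alpha=1.0):
    """Persistence diagrams of park locations with a hypothetical quality
    penalty: distances involving low-quality parks are inflated, so a
    neighborhood ringed only by poor parks keeps its 1-dimensional
    'hole' alive longer in the filtration."""
    diff = points[:, None, :] - points[None, :, :]
    D = np.linalg.norm(diff, axis=-1)
    D += alpha * (1.0 / quality[:, None] + 1.0 / quality[None, :])
    np.fill_diagonal(D, 0.0)           # keep a valid dissimilarity matrix
    dgms = ripser(D, maxdim=1, distance_matrix=True)['dgms']
    return dgms  # dgms[1] holds (birth, death) pairs for the holes

# points: (n, 2) park coordinates; quality: (n,) positive quality scores.
```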
Nov 13th, 2024, 10:00 AM - 10:50 AM PT (zoom)
Angxiu Ni (University of California, Irvine)
Title: On the triad program for Monte-Carlo differentiation of chaos
Abstract: Computing the linear response, or the derivative of long-time-averaged observables with respect to system parameters, is a central problem for many applications. Conventionally, there are three linear response formulas: the path-perturbation formula (including the backpropagation method in machine learning), the divergence formula, and the probability-kernel-differentiation formula. However, none of them works in the general case, which is chaotic, high-dimensional, and small-noise. We then present our fast response formula, which is expressed in terms of a pointwise-defined function; hence, the linear response can be computed by a Monte Carlo-type method, that is, by averaging this function over an orbit. The fast response formula overcomes all three difficulties under hyperbolicity assumptions. Finally, we discuss how to further incorporate kernel differentiation to overcome non-hyperbolicity.
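The computational shell of such a Monte Carlo estimator is simply an orbit average of a pointwise-defined function; the substance of the fast response formula lies in what that function is, which this sketch deliberately leaves abstract.

```python
import numpy as np

def orbit_average(step, phi, x0, n_steps=100000, burn_in=1000):
    """Monte-Carlo-type estimator: average a pointwise-defined function
    phi along a single long orbit of the dynamics x_{k+1} = step(x_k).
    In the fast response formula, phi would be the specific integrand
    built from the observable and the hyperbolic splitting, which is
    not reproduced here."""
    x = x0
    for _ in range(burn_in):      # discard the transient
        x = step(x)
    total = 0.0
    for _ in range(n_steps):
        total += phi(x)
        x = step(x)
    return total / n_steps
```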
Nov 20th, 2024, 10:00 AM - 10:50 AM PT (in person)
Yuhua Zhu (University of California, Los Angeles)
Title: A PDE-based model-free algorithm for Continuous-time Reinforcement Learning
Abstract: This talk addresses the problem of continuous-time reinforcement learning (RL). When the underlying dynamics remain unknown and only discrete-time observations are available, how can we effectively conduct policy evaluation and policy iteration? We first highlight that while model-free RL algorithms are straightforward to implement, they often fail to provide a reliable approximation of the true value function. On the other hand, model-based PDE approaches are more accurate, but the inverse problem is not easy to solve. To bridge this gap, we introduce a new Bellman equation, PhiBE, which integrates discrete-time information into a PDE formulation. PhiBE allows us to skip the identification of the dynamics and directly evaluate the value function using discrete-time data. Additionally, it offers a more accurate approximation of the true value function, especially in scenarios where the underlying dynamics change slowly. Moreover, we extend PhiBE to higher orders, providing increasingly accurate approximations.
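As a hedged illustration of the flavor of such an equation (a toy 1D first-order version of our own, not the paper's PhiBE itself): estimate the drift from discrete-time data as b_hat(s) = (E[s_{t+dt} | s_t = s] - s)/dt, then solve beta * V = r + b_hat * V' on a grid.

```python
import numpy as np

def phibe_first_order(s_grid, r, b_hat, beta):
    """Toy 1D solver for a first-order PhiBE-style equation
        beta * V(s) = r(s) + b_hat(s) * V'(s),
    with b_hat estimated from discrete-time transition data.
    Upwind finite differences on a uniform grid."""
    n, h = len(s_grid), s_grid[1] - s_grid[0]
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = beta
        bi = b_hat[i]
        if bi >= 0 and i + 1 < n:     # drift to the right: forward difference
            A[i, i] += bi / h
            A[i, i + 1] -= bi / h
        elif bi < 0 and i > 0:        # drift to the left: backward difference
            A[i, i] -= bi / h
            A[i, i - 1] += bi / h
        # else: crude closure at the outflow boundary, V = r / beta
    return np.linalg.solve(A, r)
```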
Nov 27th, 2024, 10:00 AM - 10:50 AM PT (zoom)
Siting Liu (University of California, Riverside)
Title: In-Context Learning for Differential Equations
Abstract: This talk presents In-Context Operator Networks (ICON), a novel neural-network-based framework designed to learn and apply operators directly from prompted data during inference, without requiring any weight updates. Traditional methods rely on neural networks to approximate solutions for specific equations or operators, necessitating retraining or fine-tuning for new problems. In contrast, ICON trains a single neural network to serve as a general operator learner, enabling it to adapt to new problems with minimal effort. By leveraging shared structures across operators, ICON requires only a few demonstration examples in the prompt to learn a new operator effectively. Our results demonstrate ICON's ability as a few-shot operator learner across a diverse range of differential equation problems, including forward and inverse tasks for ordinary differential equations (ODEs), partial differential equations (PDEs), and mean-field control (MFC) problems. Furthermore, we highlight ICON's generalization capabilities, showcasing its potential as a powerful tool for solving complex operator learning challenges in scientific computing and beyond. This is a joint work with Liu Yang (NUS), Tingwei Meng (UCLA), and Stanley Osher (UCLA).
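Schematically, the few-shot interface looks like the following sketch, a simplified stand-in for ICON's actual tokenization and architecture, which the papers describe in detail.

```python
import numpy as np

def build_prompt(demos, query_u):
    """Assemble a few-shot prompt for an in-context operator learner.

    demos: list of (u, s) arrays sampling an input function (condition)
    and the corresponding solution (quantity of interest) on a fixed
    grid. The frozen network receives the demonstrations together with
    the query condition and must predict the query solution with no
    weight update."""
    pairs = [np.concatenate([u, s]) for u, s in demos]
    # The query row carries the condition with the solution slots empty.
    query = np.concatenate([query_u, np.full_like(query_u, np.nan)])
    return np.stack(pairs + [query])  # shape: (num_demos + 1, 2 * grid_size)
```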