We run weekly seminars to present our research work and/or teach our peers interesting (applied) math and stats!
Talks every Tuesday, 1:30-2:30pm
Wyman N425
Schedule
January 28th
Hamilton Sawczuk
Title: A Symbolic Listing of Hamiltonian Cycles
Abstract: We present two proofs of an identity for the Hamiltonian cycle polynomial, which lists all Hamiltonian cycles of length n. The first is a simple combinatorial argument. The second comes from the "Baltimore school of combinatorics," using partial derivative operators, Tutte's directed matrix tree theorem, and the determinant sum lemma. Emphasis will be placed on background, examples, and clarity of explanation, providing attendees with a gentle introduction to symbolic listings and functional graph theory.
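For context, a standard definition of the Hamiltonian cycle polynomial of an n x n matrix A = (a_{ij}) (background only, not necessarily the exact identity proved in the talk) is

\[
\operatorname{ham}(A) \;=\; \sum_{\sigma \in C_n} \prod_{i=1}^{n} a_{i,\sigma(i)},
\]

where C_n is the set of permutations of {1, ..., n} whose cycle decomposition is a single n-cycle; with symbolic entries a_{ij} = x_{ij}, each monomial corresponds to one directed Hamiltonian cycle.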
February 4th
Drew Henrichsen
Title: How Much Data is Required for Probabilistic Bounds on Classification Certainty?
Abstract: An important question in classification, including in machine learning, is how much data is required to construct a classifier. Another important question is how much certainty there is in the result of a classification mapping. In general, there are several sources of error or uncertainty in classification results, including error due to limited data, model form error, optimization error, and bias, i.e., training and testing data being drawn from different distributions. Leveraging recent advances in level-set theory, it has become possible to address the error due to limited data by answering the question: how much data is required to establish probabilistic bounds on the inherent uncertainty of mapping results? Here, inherent uncertainty refers to the unremovable uncertainty of a classification mapping. We then answer the questions above by establishing both exact bounds and asymptotic convergence to the inherent uncertainty for the sources of error related to limited data.
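As a back-of-the-envelope illustration of this kind of question (a textbook concentration bound, not the level-set approach of the talk): to estimate a classifier's error probability within tolerance \varepsilon with confidence 1 - \delta from i.i.d. test samples, Hoeffding's inequality requires

\[
n \;\ge\; \frac{\ln(2/\delta)}{2\varepsilon^2}
\]

samples; for example, \varepsilon = 0.01 and \delta = 0.05 gives n \approx 18{,}445.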
February 11th
Yue Wu
Title: Some Unified Theory for Variance Reduced Prox-Linear Methods
Abstract: We consider the nonconvex, nonsmooth problem of minimizing a composite objective of the form f(g(x)) + h(x) where the inner mapping g is a smooth finite summation or expectation amenable to variance reduction. In such settings, prox-linear methods can enjoy variance-reduced speed-ups despite the existence of nonsmoothness. In this talk, we will present a unified convergence theory applicable to a wide range of common variance-reduced vector and Jacobian constructions. Our theory (i) only requires operator norm bounds on Jacobians (whereas prior works used potentially much larger Frobenius norms), (ii) provides state-of-the-art high probability guarantees, and (iii) allows inexactness in proximal computations.
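For background, the basic (non-variance-reduced) prox-linear update for minimizing f(g(x)) + h(x) linearizes only the inner smooth map g:

\[
x_{k+1} \;=\; \operatorname*{argmin}_{x} \; f\bigl(g(x_k) + \nabla g(x_k)(x - x_k)\bigr) + h(x) + \tfrac{1}{2t}\,\lVert x - x_k \rVert^2 .
\]

Roughly speaking, the variance-reduced variants considered in the talk replace the vector g(x_k) and the Jacobian \nabla g(x_k) with cheaper stochastic estimators.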
February 18th
Merrick Ohata
Title: Mapping a Landscape of Networks
Abstract: In this talk, I address some of the challenges of comparing neural networks through both their representations of data (activations) and their learned encodings (weights) in deep networks. I will show that activation covariances of trained networks provide a convenient solution using representational alignment, and that a simple extension to the weights allows for meaningful direct comparisons based on their spectral decomposition. Incorporating many models into this approach produces stable "maps" of networks in 2D, which form terrains that we can explore.
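As a hedged illustration of comparing networks through activation statistics, the sketch below computes linear CKA, one widely used representational-alignment score built from centered activation inner products; this is a generic example with simulated activations, not necessarily the construction used in the talk.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA similarity between two activation matrices.

    X, Y: arrays of shape (n_samples, n_features_*) holding two networks'
    activations on the same inputs; feature dimensions may differ.
    Returns a value in [0, 1], with 1 meaning identical up to rotation and scaling.
    """
    Xc = X - X.mean(axis=0, keepdims=True)  # center each feature column
    Yc = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Yc.T @ Xc, "fro") ** 2
    return hsic / (np.linalg.norm(Xc.T @ Xc, "fro") *
                   np.linalg.norm(Yc.T @ Yc, "fro"))

# Toy usage: pairwise similarities over a few models' activations; such a
# matrix could then be embedded in 2D (e.g. via MDS) to form a "map".
rng = np.random.default_rng(0)
acts = [rng.standard_normal((200, d)) for d in (64, 128, 64)]
sim = np.array([[linear_cka(a, b) for b in acts] for a in acts])
print(np.round(sim, 3))
```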
February 25th
Kaleigh Rudge
Title: Manipulating the Propagation of Plasmons on a Graphene Sheet
Abstract: In this talk, we discuss the propagation of plasmons on graphene with a space-time perturbation in the Drude weight. The graphene is modeled as a one-dimensional conductive sheet in a two-dimensional medium in which the electromagnetic field obeys Maxwell's equations. The current density on the sheet is governed by Drude's law. The system of equations and boundary conditions can be rewritten as a single integro-differential equation, which gives the current density on the graphene as a function of position and time. We discuss methods for obtaining a closed-form solution to this integro-differential equation, as well as numerical experiments presenting a perturbation analysis for various wave types.
March 4th
Research Updates
Title: Research Updates from 7 PhD Students
Abstract: We had short talks from 7 PhD students giving updates on their research.
March 11th
Ian McPherson
Title: Convergence Rates for Riemannian Proximal Bundle Methods
Abstract: Nonsmooth convex optimization is a classically studied regime, with a plethora of optimization algorithms developed to solve such problems. Among these, proximal bundle methods have been developed and used in the Euclidean setting for decades, attempting to mimic the dynamics of the proximal point method. While practitioners have enjoyed very robust convergence behavior with respect to the choice of parameters, it was not until the early 2020s that theoretical results gave non-asymptotic guarantees recovering optimal convergence rates. Only in 2024 were the first Riemannian proximal bundle methods proposed, again lacking non-asymptotic guarantees. In this talk, we discuss how we are able to lift the non-asymptotic rates to the Riemannian setting without access to exponential maps or parallel transports, while also recovering results for inexact subgradient oracles in the Euclidean setting as a special case. In addition, to our knowledge, these are the first theoretical guarantees for nonsmooth geodesically convex optimization in the Riemannian setting that also enjoy bounded iterates (and thus guaranteed convergence). The work presented is joint work with Mateo Diaz and Benjamin Grimmer.
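For background (standard material, not specific to the speaker's results), the Euclidean proximal point update that bundle methods try to mimic is

\[
x_{k+1} \;=\; \operatorname*{argmin}_{y} \; f(y) + \tfrac{1}{2\lambda_k}\,\lVert y - x_k \rVert^2 ,
\]

which is only implicitly computable for a general nonsmooth f; bundle methods approximate it by replacing f with a cutting-plane (bundle) model built from accumulated subgradients.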
March 18th
Spring Break!
March 25th
Tianyi Chen
Title: Localizing First-Order Changepoints in Time Series Networks with Vertex Misalignment
Abstract: Identifying change points in time series of graphs (TSGs) is critical for understanding the evolving structure of complex networks. Euclidean Mirror is a recently developed method that effectively captures the dynamics of TSGs, and in this work we focus on localizing first-order changepoints in TSGs using this approach. A key question we address is: what happens when the vertex alignment between graphs is lost? To explore this, we introduce two extreme models: in the London model, the changepoint t^* remains robust despite misalignment; in the Atlanta model, true vertex alignment is essential for detecting t^*. We also examine whether graph matching can recover the true alignment and aid in changepoint localization. The answer is, well, complicated.
April 1st
George Kevrekidis
Title: What is.... a manifold?
Abstract: Manifolds are mathematical objects that appear incredibly often (sometimes as a guest star) in many fields of applied mathematics. Because of that, however, details about them are often left to an appendix or omitted entirely. This short presentation will be a friendly introduction to what manifolds are: a quick working definition, the different types there are and why they might be useful, and where they most commonly show up in ML and AI. No background is necessary to follow this talk, as it is meant to be a non-technical introduction (well, maybe a little linear algebra).
April 8th
Sichong Zhang
Title: Minimax Rates for Learning Kernels in Operators
Abstract: Learning kernels in operators from data lies at the intersection of inverse problems and statistical learning, offering a powerful framework for capturing nonlocal dependencies in function spaces and high-dimensional settings. In contrast to classical nonparametric regression, where the inverse problem is well-posed, kernel estimation involves a compact normal operator and an ill-posed deconvolution. To address these challenges, we introduce adaptive spectral Sobolev spaces, which unify Sobolev spaces and reproducing kernel Hilbert spaces, automatically discard non-identifiable components, and control terms with small eigenvalues. Within this framework, we establish the minimax convergence rates for the mean squared error under both polynomial and exponential spectral decay regimes. Methodologically, we develop a tamed least squares estimator that achieves the minimax upper rates by controlling the left-tail probability of the eigenvalues of the random normal matrix; for the minimax lower rates, we resolve challenges from infinite-dimensional measures through their projections.
April 15th
Beatrix Wen
Title: Simple GNN Explanations with Minimal Necessary Sets
Abstract: Explaining the decisions made by graph neural networks (GNNs) is crucial, particularly in high-stakes decision-making scenarios. A common approach is to identify the key substructures within a graph that influence the model's output. While several popular methods address this challenge, they mostly rely on complex engineering to manage the inherent computational complexity involved. We propose an intuitive approach that measures a node's importance by the number of minimal necessary sets of nodes to which it belongs. A necessary set of nodes is one whose removal from the graph changes the GNN's classification. This concept enables a simple and intuitive algorithm for assessing node importance. Although the algorithm has an exponential worst-case runtime (unavoidable due to complexity-theoretic constraints), it is efficient in practice when the minimal necessary sets are relatively small. Our empirical evaluation demonstrates that the proposed method efficiently produces results consistent with state-of-the-art explainers across various benchmarks.
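A naive sketch of the underlying definitions (the predict(graph, removed) callable below is hypothetical and stands in for the trained GNN; this brute-force enumeration illustrates the concept rather than the authors' implementation):

```python
from itertools import combinations

def minimal_necessary_sets(nodes, graph, predict, max_size=None):
    """Brute-force enumeration of minimal necessary sets of nodes.

    `predict(graph, removed)` is a hypothetical callable returning the
    GNN's predicted class for `graph` with the nodes in `removed` deleted.
    A set S is "necessary" if removing S changes the prediction; it is
    minimal if no proper subset of S is also necessary.
    """
    base = predict(graph, removed=frozenset())
    minimal = []
    limit = max_size if max_size is not None else len(nodes)
    for k in range(1, limit + 1):  # grow candidate sets by size
        for cand in combinations(nodes, k):
            S = frozenset(cand)
            # any superset of a known minimal necessary set is not minimal
            if any(m <= S for m in minimal):
                continue
            if predict(graph, removed=S) != base:
                minimal.append(S)  # minimal: no smaller necessary subset exists
    return minimal

def node_importance(nodes, minimal_sets):
    """Score each node by how many minimal necessary sets contain it."""
    return {v: sum(v in S for S in minimal_sets) for v in nodes}
```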
April 22nd
Matthew Hudes
Title: A Functional Renormalization Group Approach to Spontaneous Stochasticity
Abstract: This talk explores the application of the Functional Renormalization Group (FRG) to study spontaneous stochasticity. We begin by discussing the Donsker-Schilder large deviation problem of Brownian motion in the zero-diffusion limit where the FRG flow is solvable. We then discuss application of FRG to study spontaneous stochasticity in a 1D toy model driven by a fractional Brownian motion process. If time permits, we will discuss our Bayesian interpretation of the Wetterich-Morris FRG flow equations.
Organizers: Matthew Hudes & Kaleigh Rudge
Food coordinators: Jiayue (Zoe) Zhang & Michelle Dai