The goal of this seminar is to share both novel and classic ingenious ideas in TCS, broadly speaking. Potential topics include (but are not limited to) algorithms, complexity, learning theory, coding theory, game theory, and cryptography. Subscribe to our mailing list if you want to receive updates.
Everyone is welcome to join. Of course, you are more than welcome to give a theory talk! To arrange one, please email any of us:
Wei-Kai Lin: wklin (at) virginia (dot) edu
Chen-Yu Wei: chenyu (dot) wei (at) virginia (dot) edu
Matheus Ferreira: matheus (at) virginia (dot) edu
We are meeting on Fridays at noon in Fall 2025. See the Google calendar below.
Title: Memory Requirements of Unlearning
Abstract: The concept of machine unlearning has emerged as an important paradigm both for AI safety and for accommodating data protection laws that aim to give users more control over how their data is used. Briefly, the goal is an information-theoretic guarantee that, after an unlearning request for a data point, the trained ML model behaves as if the deleted data had never been used. In this talk, we will study the memory complexity of machine unlearning algorithms. In particular, we ask how many bits of storage are needed to be able to delete certain training samples at a later time. We will focus on the task of realizability testing, where the goal is to check whether the remaining training samples are realizable within a given hypothesis class H. We will show that, for any hypothesis class H, the amount of information the learner needs to store in order to perform unlearning later is lower bounded by the Eluder dimension of H, a combinatorial notion that can be much larger than the VC dimension.
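For background, here is one common formulation of the Eluder dimension, following Russo and Van Roy; the talk may use a variant adapted to binary hypothesis classes, so treat this as context rather than the exact definition used in the result.

% Background sketch only; the talk's definition may differ in minor technical details.
% A point x is \epsilon-dependent on x_1,\dots,x_n with respect to a class F if every
% pair f, f' \in F that nearly agrees on the history also nearly agrees on x:
\[
\sqrt{\sum_{i=1}^{n} \bigl(f(x_i) - f'(x_i)\bigr)^{2}} \le \epsilon
\quad\Longrightarrow\quad
\lvert f(x) - f'(x) \rvert \le \epsilon .
\]
% x is \epsilon-independent of x_1,\dots,x_n if it is not \epsilon-dependent on them.
% The Eluder dimension \dim_E(F,\epsilon) is the length of the longest sequence in
% which every element is \epsilon-independent of its predecessors.

Roughly, the VC dimension counts how many points can be labeled arbitrarily all at once, while the Eluder dimension counts how long an adversary can keep presenting points whose labels are not pinned down by the history, which is why the latter can be much larger.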
Bio: Sumegha Garg is an Assistant Professor in the Department of Computer Science at Rutgers University. Prior to joining Rutgers, she was a postdoctoral fellow in the CS Department at Stanford University and a Rabin Postdoctoral Fellow in the Theory of Computation group at Harvard University. She completed her Ph.D. in Computer Science at Princeton University, where she was advised by Mark Braverman. Her research interests span complexity theory, information theory, and learning theory, with a particular emphasis on memory lower bounds and the theory of responsible machine learning.
Abstract: Estimating the score function, or other functions that depend on the population density, is a fundamental component of most generative models. However, such function estimation is computationally and statistically challenging. Can we avoid function estimation for data generation? We propose an estimation-free generative method: a set of points whose locations are deterministically updated with (inverse) gradient descent can transport a uniform distribution to an arbitrary data distribution, in the mean-field regime, without function estimation, neural network training, or even noise injection. The proposed method builds on recent advances in the physics of interacting particles. Leveraging recent results in mathematical physics, we prove that the proposed method samples from the true underlying data distribution in the asymptotic regime.
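To make the flavor of estimation-free particle transport concrete, below is a minimal illustrative sketch, not the speaker's method: a generic deterministic interacting-particle flow (an MMD-style kernel flow) that moves samples toward a data distribution using only kernel evaluations against data points, with no density or score estimation, no trained network, and no injected noise. The function names, kernel choice, bandwidth, and step sizes are illustrative assumptions.

# Generic deterministic particle-transport sketch (NOT the method of the talk).
import numpy as np

def rbf_grad(x, y, bandwidth=1.0):
    """Gradient w.r.t. x of the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 h^2))."""
    diff = x - y                                    # broadcasts to (n, m, d)
    k = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * bandwidth ** 2))
    return -diff * k[..., None] / bandwidth ** 2

def particle_flow(data, n_particles=200, n_steps=500, step=0.05, seed=0):
    """Deterministically transport uniform samples toward `data` (no noise, no training)."""
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    x = rng.uniform(-1.0, 1.0, size=(n_particles, d))   # start from a uniform distribution
    for _ in range(n_steps):
        # Mean kernel gradient against the other particles (repulsion term in the update).
        rep = rbf_grad(x[:, None, :], x[None, :, :]).mean(axis=1)
        # Mean kernel gradient against the data samples (attraction term in the update).
        att = rbf_grad(x[:, None, :], data[None, :, :]).mean(axis=1)
        # One gradient-descent step on an MMD-type energy: particles repel each other
        # and are pulled toward the data; the update is fully deterministic.
        x = x - step * (rep - att)
    return x

if __name__ == "__main__":
    data = np.random.default_rng(1).normal(loc=1.0, scale=0.3, size=(500, 2))
    samples = particle_flow(data)
    print(samples.mean(axis=0))   # the particle mean should drift toward the data mean (~[1, 1])

The mean-field and asymptotic guarantees mentioned in the abstract concern the specific dynamics developed in the talk; the sketch above only conveys the general idea of sampling by deterministic particle transport.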
Bio: I am an Assistant Professor of Computer Science at the University of Virginia. My research focuses on the theoretical foundations of machine learning, with an emphasis on the theoretical analysis of the explainability and reliability of generative AI, using tools from probability theory, applied mathematics, and mathematical physics. Prior to joining UVA, I was a FODSI postdoctoral researcher hosted by MIT and Boston University. Before that, I was a postdoctoral researcher at Princeton University and at INRIA Paris. I completed my PhD in computer science at ETH Zurich in 2020. I had the privilege of being advised early in my research training by Professors Francis Bach and Thomas Hofmann.
Older talks (prior to 2025) are posted here.