The goal of this seminar is to share both novel and classic ingenious ideas in TCS, broadly speaking. Potential topics include (but are not limited to) algorithms, complexity, learning theory, coding theory, game theory, and cryptography. Subscribe to our mailing list if you want to receive updates.
Everyone is welcome to join. Of course, you are more than welcome to give a theory talk! Please send an email to any of the organizers:
Wei-Kai Lin: wklin (at) virginia (dot) edu
Chen-Yu Wei: chenyu (dot) wei (at) virginia (dot) edu
Matheus Ferreira: matheus (at) virginia (dot) edu
We are meeting on Fridays at noon in Spring 2026. See the Google calendar below.
Abstract:
Consumers often spend costly effort evaluating multiple products before buying. This paper studies how search costs shape market outcomes when firms can pay to become more prominent. I combine a directed-search model with a lab experiment to measure welfare effects of search frictions, position allocation, and price transparency.
In the model, two ex ante identical firms bid for one prominent slot in a second-price auction. Given the resulting positions, firms then post prices observed by the consumer. The consumer first inspects the prominent firm for free and learns her match value. The match value captures product fit: she either likes a product or does not. She then decides whether to pay a sunk cost to inspect the non-prominent firm and learn its (initially hidden) match value.
The pricing game delivers an asymmetric mixed-strategy equilibrium. Two key predictions are: (i) the prominent firm prices higher in the sense of first-order stochastic dominance; and (ii) the non-prominent firm’s pricing distribution has an atom at its lower bound (“fire-sale”). The lab tests these predictions under different search costs, price transparency, and position-allocation rules (auction vs. random). The results speak to paid prominence and price disclosure on search engines and platforms, and to the efficiency of sponsored-search auctions with endogenous slot valuations, such as Google’s Generalized Second Price (GSP) auction.
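The consumer's sequential decision described above can be sketched in a few lines of code. This is a minimal illustrative sketch, not the paper's model or calibration: the binary match values, the fit probability q, and the default parameter values are all assumptions made here for concreteness, and prices are taken as observable (the price-transparency case).

```python
import random

def consumer_search(v1, p1, p2, v=1.0, q=0.5, c=0.1):
    """Consumer's optimal stopping decision (illustrative sketch).

    v1: match value at the prominent firm (0 or v), learned for free.
    p1, p2: posted prices; p2 is assumed observable (price transparency).
    v, q: match value when a product fits, and the probability of a fit.
    c: sunk cost of inspecting the non-prominent firm.
    Returns the consumer's realized surplus under the stopping rule.
    """
    stay = max(v1 - p1, 0.0)  # surplus from stopping now (buy firm 1 or nothing)
    # If she searches: with prob q firm 2 fits (value v), else 0; she then
    # buys from whichever option yields the higher surplus.
    expected_after_search = q * max(v - p2, stay) + (1 - q) * stay
    if expected_after_search - c > stay:  # search only if the expected gain covers c
        v2 = v if random.random() < q else 0.0
        return max(v1 - p1, v2 - p2, 0.0) - c
    return stay
```

For instance, a consumer who already likes the prominent firm at a low price (v1 = 1.0, p1 = 0.2) will not pay the cost to inspect a rival charging p2 = 0.9, since the expected gain cannot cover c.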
-----
Presenter Info:
Zitian is a fifth-year PhD student in the Economics department working in behavioral game theory and experiments. His previous experiment studied strategic information transmission in a coordination game with uncertainty. He is currently designing the search experiment. Students and faculty at all levels are welcome to participate in several paid experiments by requesting a Veconlab account here: https://uva-veconlab.sona-systems.com/Default.aspx?ReturnUrl=/
Title: Rank bounds and Polynomial Identity Testing
Abstract: Polynomial Identity Testing (PIT) is the problem of checking whether a given algebraic circuit computes the zero polynomial. The PIT problem has a myriad of applications, such as algorithms for the perfect matching problem, primality testing, and learning algorithms for sparse polynomials. While there are efficient randomized algorithms for PIT, no deterministic polynomial-time algorithm is known for general circuits. Derandomizing PIT is a foundational problem in theoretical computer science, as it is intrinsically related to lower bounds for algebraic circuits and the VP vs. VNP problem.
In this talk, we will discuss the PIT problem for depth-4 circuits. I’ll talk about recent progress on proving rank bounds for depth-4 identities, and the first deterministic poly-time algorithm for depth-4 circuits with top fan-in 3 and constant bottom fan-in. We will discuss algebraic-geometric ideas that lead to rank bounds and non-linear generalizations of classical results from combinatorics.
This talk is based on joint works with Abhibhav Garg and Rafael Oliveira.
Title: Memory Requirements of Unlearning
Abstract: The concept of machine unlearning has emerged as an important paradigm both for AI safety and for accommodating data protection laws that aim to give users more control over how their data is used. Briefly, this concept aims to provide an information-theoretic guarantee that, after an unlearning request for a data point, the trained ML model behaves as if the deleted data were never used. In this talk, we will study the memory complexity of machine unlearning algorithms. In particular, we ask how many bits of storage are needed to be able to delete certain training samples at a later time. We will focus on the task of realizability testing, where the goal is to check whether the remaining training samples are realizable within a given hypothesis class H. We will show that for any hypothesis class H, the amount of information that the learner needs to store, so as to perform unlearning later, is lower bounded by the Eluder dimension of H, a combinatorial notion that can be much larger than the VC dimension.
Bio: Sumegha Garg is an Assistant Professor in the Department of Computer Science at Rutgers University. Prior to joining Rutgers, she was a postdoctoral fellow in the CS Department at Stanford University and a Rabin Postdoctoral Fellow in the Theory of Computation group at Harvard University. She completed her Ph.D. in Computer Science at Princeton University, where she was advised by Mark Braverman. Her research interests span complexity theory, information theory, and learning theory, with a particular emphasis on memory lower bounds and the theory of responsible machine learning.
Abstract: Estimating the score function (or other population-density-dependent functions) is a fundamental component of most generative models. However, such function estimation is computationally and statistically challenging. Can we avoid function estimation for data generation? We propose an estimation-free generative method: a set of points whose locations are deterministically updated with (inverse) gradient descent can transport a uniform distribution to an arbitrary data distribution, in the mean-field regime, without function estimation, neural network training, or even noise injection. The proposed method is built upon recent advances in the physics of interacting particles. Leveraging these tools from mathematical physics, we prove that the proposed method samples from the true underlying data distribution in the asymptotic regime.
Bio: I am an Assistant Professor of Computer Science at the University of Virginia. My research focuses on the theoretical foundations of machine learning, with an emphasis on theoretical analysis for explainability and reliability of generative AI using tools from probability theory, applied mathematics, and mathematical physics. Prior to joining UVA, I was a FODSI postdoctoral researcher, hosted by MIT and Boston University. Before that, I served as a postdoctoral researcher at Princeton University as well as INRIA Paris. I completed my PhD in computer science at ETH Zurich in 2020. I had the privilege of being advised early in my research training by Professors Francis Bach and Thomas Hofmann.
Older talks (prior to 2025) are posted here.