Past talks


2021/04/14: Andrea Lincoln (Simons Institute)

posted Apr 16, 2021, 9:32 AM by Alice Bob-Eve

New Techniques for Proving Fine-Grained Average-Case Hardness

Abstract: In this talk I will cover a new technique for worst-case to average-case reductions. There are two primary concepts introduced in this talk: "factored" problems and a framework for worst-case to average-case fine-grained (WCtoACFG) self reductions.

We will define new versions of OV, kSUM, and zero-k-clique that are both worst-case and average-case fine-grained hard, assuming the core hypotheses of fine-grained complexity. We then use these as a basis for fine-grained hardness and average-case hardness of other problems. Our hard factored problems are also simple enough that we can reduce them to many other problems, e.g. to edit distance, k-LCS, and versions of Max-Flow. We further consider counting variants of the factored problems and give WCtoACFG reductions for them for a natural distribution.

To show hardness for these factored problems, we formalize the framework of [Boix-Adsera et al. 2019] that was used to give a WCtoACFG reduction for counting k-cliques. We define an explicit property of problems such that, if a problem has that property, one can use the framework to get a WCtoACFG self reduction. In total, these factored problems and the framework together give tight fine-grained average-case hardness for various problems, including the counting variant of regular expression matching.

Based on joint work with Mina Dalirrooyfard and Virginia Vassilevska Williams.
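For readers unfamiliar with Orthogonal Vectors (OV), here is a minimal brute-force sketch of the problem the talk's hypotheses concern; the function name and example data are illustrative, not from the talk, and the talk's "factored" variants are more structured than this plain version.

```python
from itertools import product

def has_orthogonal_pair(A, B):
    """Naive Orthogonal Vectors (OV): return True iff some a in A and b in B
    satisfy a . b = 0 over the integers. This runs in O(|A| * |B| * d) time;
    the OV hypothesis asserts no n^(2 - eps) * poly(d)-time algorithm exists."""
    return any(all(x * y == 0 for x, y in zip(a, b)) for a, b in product(A, B))

A = [(1, 0, 1), (0, 1, 0)]
B = [(0, 1, 0), (1, 1, 1)]
print(has_orthogonal_pair(A, B))  # (1,0,1) . (0,1,0) = 0 -> True
```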

2021/03/31: Jasper Lee (Brown)

posted Apr 1, 2021, 10:15 PM by Alice Bob-Eve

Optimal Sub-Gaussian Mean Estimation in R

Abstract: We revisit one of the most fundamental problems in statistics: given access to independent samples from a 1D random variable (with finite but unknown mean and variance), what is the best way to estimate the mean, in terms of error convergence with respect to sample size? The conventional wisdom is to use the sample mean as our estimate. However, it is known that the sample mean has optimal convergence if and only if the underlying distribution is (sub-)Gaussian. Moreover, it can even be exponentially slower than optimal for certain heavy-tailed distributions. On the other hand, the median-of-means estimator (invented and reinvented across the literature) does have sub-Gaussian convergence for all finite-variance distributions, albeit only in the big-O sense with a sub-optimal multiplicative constant. The natural remaining question, then, is whether it is possible to bridge the gap: to have an estimator with optimal sub-Gaussian concentration and an optimal constant, for all finite-variance distributions. In this talk, we answer the question affirmatively by giving an estimator that converges with the optimal constant inside the big-O, up to a 1+o(1) multiplicative factor. Our estimator is furthermore computable in time linear in the sample size. The convergence analysis involves deriving tail bounds using tools from linear and convex programming, which may be of independent interest. Joint work with Paul Valiant.
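The median-of-means estimator mentioned in the abstract can be sketched in a few lines. This is the classical construction, not the talk's optimal-constant refinement; the function name, group count, and test distribution are illustrative.

```python
import random
import statistics

def median_of_means(samples, k):
    """Median-of-means: split the samples into k equal-size groups, average
    each group, and return the median of those group averages. Unlike the
    sample mean, this achieves sub-Gaussian-style concentration (up to
    constants) for any finite-variance distribution."""
    n = len(samples) - len(samples) % k          # drop a remainder, if any
    m = n // k                                   # group size
    means = [statistics.fmean(samples[i * m:(i + 1) * m]) for i in range(k)]
    return statistics.median(means)

random.seed(0)
# Heavy-tailed data: Pareto samples, where the plain sample mean concentrates poorly.
data = [random.paretovariate(2.5) for _ in range(10_000)]
print(median_of_means(data, k=20))
```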

2021/03/17: Avishay Tal (UC Berkeley)

posted Mar 17, 2021, 5:00 PM by Alice Bob

Junta Distance Approximation with Sub-Exponential Queries

Abstract: Joint work with Vishnu Iyer and Michael Whitmeyer.

A Boolean function $f:{0,1}^n \to {0,1}$ is called a k-junta if it depends only on k out of the n input bits. Junta testing is the task of distinguishing between k-juntas and functions that are far from k-juntas. A long line of work settled the query complexity of testing k-juntas, which is O(k log(k)) [Blais, STOC 2009; Saglam, FOCS 2018]. Suppose, however, that f is not a perfect k-junta but rather correlated with a k-junta. How well can we estimate this correlation? This question was asked by De, Mossel, and Neeman [FOCS 2019], who gave an algorithm for the task making ~exp(k) queries. We present an improved algorithm that makes ~exp(sqrt{k}) many queries.

Along the way, we also give an algorithm, making poly(k) queries, that provides "implicit oracle access" to the underlying k-junta. Our techniques are Fourier-analytic and introduce the notion of "normalized influences", which might be of independent interest.
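As background, the standard (unnormalized) influence of a coordinate is the basic quantity junta testers query; a coordinate of a k-junta outside the relevant set has influence zero. The sketch below is a generic random-flip estimator, illustrative only; it does not implement the talk's "normalized influences".

```python
import random

def estimate_influence(f, n, i, trials=2000, rng=None):
    """Estimate the influence of coordinate i on f: {0,1}^n -> {0,1},
    i.e. Pr_x[f(x) != f(x with bit i flipped)], by sampling random inputs.
    A k-junta has zero influence on each of its n - k irrelevant coordinates."""
    rng = rng or random.Random(0)
    hits = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = x.copy()
        y[i] ^= 1                      # flip coordinate i
        hits += f(x) != f(y)
    return hits / trials

# Example: f depends only on bits 0 and 1, so it is a 2-junta on n = 4 bits.
f = lambda x: x[0] ^ x[1]
print([round(estimate_influence(f, 4, i), 2) for i in range(4)])  # [1.0, 1.0, 0.0, 0.0]
```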

2021/03/03: Steve Hanneke (TTIC)

posted Mar 3, 2021, 1:47 PM by Alice Bob-Eve

A Theory of Universal Learning

Abstract: How quickly can a given class of concepts be learned from examples? It is common to measure the performance of a supervised machine learning algorithm by plotting its "learning curve", that is, the decay of the error rate as a function of the number of training examples. However, the classical theoretical framework for understanding learnability, the PAC model of Vapnik-Chervonenkis and Valiant, does not explain the behavior of learning curves: the distribution-free PAC model of learning can only bound the upper envelope of the learning curves over all possible data distributions. This does not match the practice of machine learning, where the data source is typically fixed in any given scenario, while the learner may choose the number of training examples on the basis of factors such as computational resources and desired accuracy.

In this work, we study an alternative learning model that better captures such practical aspects of machine learning, but still gives rise to a complete theory of the learnable in the spirit of the PAC model. More precisely, we consider the problem of universal learning, which aims to understand the performance of learning algorithms on every data distribution, but without requiring uniformity over the distribution. The main result of this work is a remarkable trichotomy: there are only three possible rates of universal learning. More precisely, we show that the learning curves of any given concept class decay at either exponential, linear, or arbitrarily slow rates. Moreover, each of these cases is completely characterized by appropriate combinatorial parameters, and we exhibit optimal learning algorithms that achieve the best possible rate in each case.

Joint work with Olivier Bousquet, Shay Moran, Ramon van Handel, and Amir Yehudayoff.

2021/02/17: William Hoza (UT Austin)

posted Feb 18, 2021, 9:01 AM by Alice Bob-Eve

Fooling Constant-Depth Threshold Circuits

Abstract: We present the first non-trivial pseudorandom generator (PRG) for linear threshold function (LTF) circuits of arbitrary constant depth and super-linear size. This PRG fools circuits with depth d and n^{1 + delta} wires, where delta = exp(-O(d)), using seed length O(n^{1 - delta}) and with error exp(-n^{delta}). This tightly matches the best known lower bounds for this circuit class. As a consequence of our result, all the known hardness for LTF circuits has now effectively been translated into pseudorandomness. This brings the extensive effort in the last decade to construct PRGs and deterministic circuit-analysis algorithms for this class to the point where any subsequent improvement would yield breakthrough lower bounds.
A key ingredient in our construction is a pseudorandom restriction procedure that has tiny failure probability, but simplifies the function to a non-natural "hybrid computational model" that combines decision trees and LTFs. As part of our proof we also construct an "extremely low-error" PRG for the class of functions computable by an arbitrary function of s linear threshold functions that can handle even the extreme setting of parameters s = n/polylog(n) and epsilon = exp(-n/polylog(n)).
Joint work with Pooya Hatami, Avishay Tal, and Roei Tell. ECCC: TR21-002
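For readers unfamiliar with the circuit class, here is a minimal illustrative sketch of an LTF gate and a depth-2 threshold circuit; all weights, thresholds, and names are made up for illustration and are unrelated to the construction in the paper.

```python
def ltf(w, theta):
    """A linear threshold function (LTF): x -> 1 iff w . x >= theta."""
    return lambda x: int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)

# A depth-2 threshold circuit: a top LTF applied to the outputs of bottom LTFs.
bottom = [ltf([1, 1, -1], 1), ltf([-1, 2, 1], 2)]
top = ltf([1, 1], 2)                     # acts as AND of the two bottom gates

def circuit(x):
    return top([g(x) for g in bottom])

print(circuit([0, 1, 0]))  # both bottom gates fire -> 1
print(circuit([1, 1, 0]))  # second bottom gate outputs 0 -> 0
```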

2020/12/02: Yang Liu (Stanford)

posted Dec 14, 2020, 2:27 PM by Alice Bob

Faster Algorithms for Unit Maximum Flow

Abstract: The maximum flow problem is one of the most well-studied problems in combinatorial optimization, encompassing a broad range of cut, matching, and scheduling problems. Here we present a recent line of work obtaining provably faster algorithms for solving the maximum flow problem using interior point methods. In particular, we show how to solve the maximum flow problem in m-edge unit capacity graphs in time almost m^(4/3), improving over the breakthrough m^(10/7) time algorithm of Mądry.

This is based on joint work with Aaron Sidford (arXiv abs/1910.14276).
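As a baseline for the problem being sped up, here is the classical shortest-augmenting-path algorithm specialized to unit-capacity digraphs. This is the combinatorial starting point, not the interior-point m^(4/3) algorithm from the talk; the function name and example graph are illustrative.

```python
from collections import defaultdict, deque

def unit_max_flow(n, edges, s, t):
    """Max flow on a unit-capacity digraph with vertices 0..n-1, by repeatedly
    finding a shortest augmenting s-t path (BFS) in the residual graph."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1
        adj[u].add(v)
        adj[v].add(u)                  # allow traversal of residual arcs
    flow = 0
    while True:
        parent = {s: None}             # BFS tree over residual edges
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                # no augmenting path remains
        v = t                          # push one unit along the path found
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# Two edge-disjoint s-t paths in a 4-vertex graph, so max flow = 2.
print(unit_max_flow(4, [(0, 1), (1, 3), (0, 2), (2, 3)], 0, 3))  # -> 2
```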

2020/11/25: Sumegha Garg (Harvard)

posted Dec 14, 2020, 1:22 PM by Alice Bob

The Coin Problem with Applications to Data Streams

2020/11/11: Shuai Shao (University of Wisconsin)

posted Nov 24, 2020, 4:17 PM by Alice Bob-Eve

A Dichotomy for Real Boolean Holant Problems

2020/11/04: Shalev Ben-David (University of Waterloo)

posted Nov 5, 2020, 12:00 PM by Alice Bob-Eve

Forecasting Algorithms, Minimax Theorems, and Randomized Lower Bounds

2020/10/28: Omar Montasser (TTIC)

posted Oct 29, 2020, 2:07 PM by Alice Bob

Adversarially Robust Learnability: Characterization and Reductions
