Michigan-Purdue Theory Seminar
Winter 2021

Date: Fridays, 10:00 AM EST, at the following Zoom meeting room: purdue-edu.zoom.us/j/96106431696

Organizers: Greg Bodwin (bodwin@umich.edu), Euiwoong Lee (euiwoong@umich.edu), Kent Quanrud (krq@purdue.edu), and Simina Brânzei (simina@purdue.edu). Please reach out to any of the organizers if you would like to give a talk!


January 15: Nicole Wein: Approximating the Diameter of a Graph

Abstract:

I will talk about algorithms and conditional lower bounds for approximating the diameter of a graph (the largest distance between two vertices). The best known algorithm for exactly computing the diameter is the naive one: compute all-pairs shortest paths (APSP) and take the largest distance. Furthermore, under the Strong Exponential Time Hypothesis, no significantly better algorithm exists. Running APSP can be prohibitively slow for very large graphs, so we seek much faster algorithms that produce an approximation of the diameter. I will discuss the landscape of time vs. accuracy trade-offs, including both algorithms and hardness results for diameter.
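
As a concrete baseline (an illustrative sketch, not code from the talk; all names are ours), here is the naive APSP approach specialized to unweighted graphs, where all-pairs shortest paths amounts to one BFS per vertex and hence O(nm) total time:

    # Naive diameter: run BFS from every vertex and take the largest distance found.
    from collections import deque

    def diameter_via_apsp(adj):
        """adj: dict mapping each vertex to a list of its neighbors (undirected)."""
        diameter = 0
        for source in adj:
            dist = {source: 0}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            diameter = max(diameter, max(dist.values()))  # eccentricity of source
        return diameter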

January 22: Karem Sakallah: Accidental Research: Scalable Algorithms for Graph Automorphism and Canonical Labeling

Abstract: Accidental research is when you’re an expert in some domain and seek to solve problem A in that domain. You soon discover that to solve A you need to also solve B which, however, comes from a domain in which you have little, or even no, expertise. You, thus, explore existing solutions to B but are disappointed to find out that they just aren’t up to the task of solving A. Your options at this point are a) to abandon this futile project, or b) to try and find a solution to B that will help you solve A. While this might seem like a fool’s errand, you have the advantage over B experts of being unencumbered by their experience. You are a novice who does not, yet, appreciate the complexity of B, but are able to explore it from a fresh perspective. You also bring along expertise from your own domain to connect what you know with what you hope to learn. If you’re lucky, you may succeed in finding a solution to B that helps you solve A.


This scenario played out in two cases: developing the GRASP conflict-driven clause-learning (CDCL) SAT solver in the context of performing timing analysis of very large scale integrated (VLSI) circuits, and developing the saucy graph automorphism program to find and break symmetries in large SAT problems. Ironically, in both cases solving problem B (GRASP, saucy) turned out to be much more impactful than solving problem A (timing analysis, breaking symmetries). Without the trigger of problem A, however, neither GRASP nor saucy would have been conceived.


In this talk I will trace the development of the saucy graph automorphism program. Breaking the symmetries of large SAT instances (millions of variables and clauses) was conjectured to be helpful in further scaling modern CDCL SAT solvers. Finding those symmetries, however, was a bottleneck since the sizes of the graphs encoding SAT instances were beyond the capabilities of existing automorphism algorithms. We developed saucy in stages as our understanding of classical graph automorphism algorithms evolved. We slowly learned that while these algorithms did produce a generator set for a graph's symmetry group, they were in fact primarily concerned with deriving a canonical labeling of the graph. Since our goal was simply finding an irredundant generator set, we replaced the canonical labeling search tree in these algorithms with a permutation search tree based on an ordered partition pair (OPP) data structure that implicitly encodes sets of vertex permutations. This change recast the problem as a direct search for vertex permutations that preserve a graph's edge relation and enabled two powerful OPP pruning mechanisms, allowing saucy to outperform standard algorithms by several orders of magnitude.
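
The basic test driving such a permutation search is easy to state (a minimal illustration, not saucy's actual code; names are ours): a vertex permutation is an automorphism exactly when it preserves the edge relation.

    def is_automorphism(edges, pi):
        """edges: set of frozenset({u, v}) for a simple undirected graph;
        pi: dict mapping each vertex to its image under the permutation."""
        return all(frozenset({pi[u], pi[v]}) in edges
                   for u, v in (tuple(e) for e in edges))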


At this point we became intrigued by the possibility of significantly speeding up canonical labeling (a problem we were not really looking to solve!) by integrating saucy with an off-the-shelf canonical labeler. Specifically, by passing the orbits of the stabilizer subgroups from saucy to the state-of-the-art bliss canonical labeler we empirically confirmed that this saucy-bliss combination is almost always faster, sometimes by orders of magnitude, than bliss alone on several large graph families.

January 28 (Thursday): Ohad Trabelsi: New algorithms and lower bounds for all-pairs max flow

Abstract: When can maximum flow be solved for all pairs of nodes faster than naively solving it separately for each pair? We will overview new algorithms - including a very recent result! - as well as lower bounds (under some popular assumptions).
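
For scale, the naive baseline referenced in the abstract looks like this (a sketch assuming the networkx library; the function name is ours). On an n-node graph it makes O(n^2) independent max-flow calls, which is exactly the redundancy the new algorithms aim to avoid:

    import itertools
    import networkx as nx

    def all_pairs_max_flow(G, capacity="capacity"):
        # One independent max-flow computation per ordered pair of nodes.
        return {(s, t): nx.maximum_flow_value(G, s, t, capacity=capacity)
                for s, t in itertools.permutations(G.nodes, 2)}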

February 4 (Thursday): Vedat Levi Alev: Improved Analysis of Higher Order Random Walks

Abstract: Local spectral expansion is a very useful method for arguing about the spectral properties of several random walk matrices over simplicial complexes. Our main result is a sharp upper bound on the second eigenvalue of the down-up walk on a pure simplicial complex, in terms of the second eigenvalues of its links. We discuss some applications of this result to analyzing the mixing times of Markov chains, including some recent breakthroughs in sampling problems.
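
As background (the standard unweighted definition, not the new theorem itself): one step of the down-up walk on the k-faces of a pure simplicial complex first drops a uniformly random vertex and then re-completes the face uniformly at random,

    \[
      \sigma \ \xrightarrow{\ \text{down}\ }\ \tau = \sigma \setminus \{v\}, \ v \in \sigma \text{ uniform};
      \qquad
      \tau \ \xrightarrow{\ \text{up}\ }\ \sigma' \supseteq \tau, \ \sigma' \text{ a uniformly random } k\text{-face containing } \tau.
    \]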

(joint work with Lap Chi Lau, based on the paper: arxiv.org/abs/2001.02827)

February 12: William Hoza: Fooling Constant-Depth Threshold Circuits

Abstract: We present the first non-trivial pseudorandom generator (PRG) for linear threshold function (LTF) circuits of arbitrary constant depth and super-linear size. This PRG fools circuits with depth d and n^{1 + delta} wires, where delta = exp(-O(d)), using seed length O(n^{1 - delta}) and with error exp(-n^{delta}). This tightly matches the best known lower bounds for this circuit class. As a consequence of our result, all the known hardness for LTF circuits has now effectively been translated into pseudorandomness. This brings the extensive effort in the last decade to construct PRGs and deterministic circuit-analysis algorithms for this class to the point where any subsequent improvement would yield breakthrough lower bounds.
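
For background (the standard definition; notation ours): a linear threshold function is the sign of an affine form, and an LTF circuit is one whose gates each compute such a function,

    \[
      \mathrm{LTF}_{w,\theta}(x) \;=\; \mathrm{sign}\Big(\sum_{i=1}^{n} w_i x_i - \theta\Big),
      \qquad w \in \mathbb{R}^n, \ \theta \in \mathbb{R}.
    \]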


A key ingredient in our construction is a pseudorandom restriction procedure that has tiny failure probability, but simplifies the function to a non-natural "hybrid computational model" that combines decision trees and LTFs. As part of our proof we also construct an "extremely low-error" PRG for the class of functions computable by an arbitrary function of s linear threshold functions that can handle even the extreme setting of parameters s = n/polylog(n) and epsilon = exp(-n/polylog(n)).


Joint work with Pooya Hatami, Avishay Tal, and Roei Tell.

February 19: Jason Li: Deterministic Mincut in Almost-Linear Time

Abstract: We present a deterministic (global) mincut algorithm for weighted, undirected graphs that runs in m^{1+o(1)} time, answering an open question of Karger from the 1990s. To obtain our result, we de-randomize the construction of the skeleton graph in Karger’s near-linear time mincut algorithm, which is its only randomized component. In particular, we partially de-randomize the well-known Benczur-Karger graph sparsification technique by random sampling, which we accomplish by the method of pessimistic estimators. Our main technical component is designing an efficient pessimistic estimator to capture the cuts of a graph, which involves harnessing the expander decomposition framework introduced in recent work by Goranci et al. (SODA 2021). As a side-effect, we obtain a structural representation of all approximate mincuts in a graph, which may have future applications.
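
To give a flavor of derandomization via efficiently computable estimators (a generic textbook illustration, not the paper's estimator for cuts; names ours): the method of conditional expectations greedily fixes random bits while a tractable bound on the quality of the outcome never gets worse, shown here for the elementary fact that every graph has a cut of size at least half its edges.

    def derandomized_maxcut(vertices, edges):
        """Fix each vertex's side so the conditional expected cut never decreases."""
        side = {}

        def expected_cut():
            # Exact conditional expectation given the sides fixed so far; an edge
            # with an unfixed endpoint is cut with probability 1/2.
            return sum((1.0 if side[u] != side[v] else 0.0)
                       if u in side and v in side else 0.5
                       for u, v in edges)

        for v in vertices:
            scores = {}
            for choice in (0, 1):
                side[v] = choice
                scores[choice] = expected_cut()
            side[v] = max(scores, key=scores.get)
        return side  # final cut has size >= len(edges) / 2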

February 26: Samson Zhou: Tight Bounds for Adversarially Robust Streams and Sliding Windows via Difference Estimators

Abstract: We introduce difference estimators for data stream computation, which provide approximations to F(v) - F(u) for frequency vectors v, u and a given function F. We show how to use such estimators to carefully trade error for memory in an iterative manner. The function F is generally non-linear, and we give the first difference estimators for the frequency moments F_p for p between 0 and 2, as well as for integers p > 2. Using these, we resolve a number of central open questions in the adversarially robust streaming and sliding window models.
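
For concreteness (standard definitions, not new to this work): the p-th frequency moment of a frequency vector v is

    \[
      F_p(v) \;=\; \sum_{i=1}^{n} |v_i|^p ,
    \]

and the point of a difference estimator is to approximate F_p(v) - F_p(u) directly, which can be much cheaper than estimating both moments separately and subtracting.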

For both models, we obtain algorithms for norm estimation whose dependence on epsilon is 1/epsilon^2, which shows, up to logarithmic factors, that there is no overhead over the standard insertion-only data stream model for these problems.


Joint work with David P. Woodruff


BIO: Samson is a postdoctoral researcher at Carnegie Mellon University, hosted by David P. Woodruff. He received his PhD from Purdue, where he was advised by Greg Frederickson and Elena Grigorescu and hosted by Jeremiah Blocki. He spent a year as a postdoctoral researcher at Indiana University, hosted by Grigory Yaroslavtsev. His research focuses on the theoretical foundations of data science, including sublinear algorithms with an emphasis on streaming algorithms, machine learning, and numerical linear algebra.

March 4 (Thursday): Michal Feldman: Prophet and Secretary Online Algorithms for Matching in General Graphs

Abstract: A common tension in market scenarios is choosing the right timing to commit to a decision. This tension is captured by the mathematical literature on optimal stopping theory, within two models that have come to be known as the secretary problem and the prophet inequality. In these models, a sequence of initially unknown values arrives one by one. Upon arrival, the online algorithm observes the value and should decide whether or not to accept it. In secretary settings, the value sequence is arbitrary, but the values arrive in a uniformly random order. In prophet settings, every value is drawn from a known probability distribution, but the arrival order is arbitrary.
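
As a warm-up (the classical rule for the basic secretary problem, not the new results; code names ours): observe roughly an n/e prefix without committing, then accept the first value that beats everything seen so far. This selects the maximum with probability about 1/e.

    import math

    def secretary(values):
        """values arrive one by one in uniformly random order; acceptance is irrevocable."""
        n = len(values)
        cutoff = int(n / math.e)                     # observation phase: never accept
        best_seen = max(values[:cutoff], default=float("-inf"))
        for x in values[cutoff:]:
            if x > best_seen:
                return x                             # first value beating the sample
        return values[-1]                            # forced to take the last arrival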


In this talk I will review the basic settings of secretary and prophet, as well as previous extensions to matching in bipartite graphs with 1-sided vertex arrival. I will then present our recent work, which studies online algorithms (in both secretary and prophet settings) for matching in *general* graphs, under both vertex- and edge-arrival models. We provide tight competitive ratios for both secretary and prophet matching scenarios under vertex arrival. Under edge arrival, we provide competitive ratios that improve upon the state of the art.


Based on joint work with Tomer Ezra, Nick Gravin and Zhihao Tang.

March 12: Uri Stemmer: Adversarial Streaming, Differential Privacy, and Adaptive Data Analysis

Abstract:

Streaming algorithms are algorithms for processing large data streams using only a limited amount of memory. Classical streaming algorithms typically work under the assumption that the input stream is chosen independently of the internal state of the algorithm. Recently, there has been growing interest in studying streaming algorithms that maintain utility even when the input stream is chosen by an adaptive adversary, possibly as a function of previous estimates given by the streaming algorithm. Such streaming algorithms are said to be adversarially robust. In this talk I will highlight two connections with adversarial streaming: a connection with the notion of differential privacy (leading to positive results for adversarial streaming) and a connection with the recent line of work on adaptive data analysis (leading to negative results for adversarial streaming).

This talk is based on joint works with Avinatan Hassidim, Haim Kaplan, Yishay Mansour, Yossi Matias, and Kobbi Nissim.

March 19: Suprovat Ghoshal: Approximation Algorithms and Hardness for Strong Unique Games

Abstract:

The UNIQUE GAMES problem is a central problem in algorithms and complexity theory. Given an instance of UNIQUE GAMES, the STRONG UNIQUE GAMES problem asks to find the largest subset of vertices, such that the UNIQUE GAMES instance induced on them is completely satisfiable. In this work, we give new algorithmic and hardness results for the STRONG UNIQUE GAMES problem.
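
For reference, the standard formulation (background, in notation chosen for this summary): an instance consists of a constraint graph, a label set, and a bijection per edge,

    \[
      G = (V, E), \quad \text{labels } [k], \quad \pi_{uv} : [k] \to [k] \ \text{for each } (u,v) \in E,
    \]
    \[
      \text{and an assignment } \sigma : V \to [k] \ \text{satisfies } (u,v) \iff \sigma(v) = \pi_{uv}(\sigma(u)).
    \]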


Given an instance with label set size $k$ in which a $1 - \eps$ fraction of the vertices induces an instance that is completely satisfiable, our first algorithm produces a set of a $1 - \tilde{O}(k^2 \eps \sqrt{\log n})$ fraction of the vertices such that the UNIQUE GAMES instance induced on them is completely satisfiable. In the same setting, our second algorithm produces a set of a $1 - \tilde{O}(k^2) \sqrt{\eps \log d}$ fraction of the vertices (here $d$ is the largest vertex degree of the graph) such that the UNIQUE GAMES instance induced on them is completely satisfiable. The technical core of our results is a new connection between STRONG UNIQUE GAMES and {\em small-set vertex-expansion} in graphs. Complementing this, assuming the Unique Games Conjecture, we prove that there exists an absolute constant $C$ such that it is NP-hard to compute a set of size larger than $1 - C \sqrt{\eps \log k \log d}$ such that all the constraints induced on this set are satisfied.


For the UNIQUE GAMES problem, given an instance that has an assignment satisfying $1 - \eps$ fraction of the constraints, there is a polynomial time algorithm [Charikar, Makarychev, Makarychev - STOC 2006] that computes an assignment satisfying $1 - O(\sqrt{\eps \log k})$ fraction of the constraints; [Khot et al. - FOCS 2004] prove a matching (up to constant factors) Unique Games hardness. Therefore, our hardness results suggest that the STRONG UNIQUE GAMES problem might be harder to approximate than the UNIQUE GAMES problem.


Given an undirected graph $G = (V,E)$, the ODD CYCLE TRANSVERSAL problem asks to delete the smallest fraction of vertices so that the graph induced on the remaining vertices is bipartite. As a corollary of our main algorithmic results, we obtain an algorithm that outputs a set $S$ such that the graph induced on $V \setminus S$ is bipartite, and $|S|/n \leq O(\sqrt{\eps \log d})$ (here $d$ is the largest vertex degree and $\eps$ is the optimal fraction of vertices that need to be deleted). Assuming the Unique Games Conjecture, we prove a matching (up to constant factors) hardness.

March 26: Matthew S. Weinberg: (a biased selection of) Recent Developments in Combinatorial Auctions

Abstract: In a combinatorial auction there are m items, and each of n players has a valuation function v_i which maps sets of items to non-negative reals. A designer wishes to partition the items into S_1,...,S_n to maximize the welfare (\sum_i v_i(S_i)), perhaps assuming that all v_i lie in some class V (such as submodular, subadditive, etc.).
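
To pin down the objective (an illustrative brute force, not an algorithm from the talk; names ours), welfare maximization enumerates all ways to give each item to one of the n players:

    from itertools import product

    def max_welfare(m, valuations):
        """valuations: list of n set functions mapping frozensets of items to reals."""
        n = len(valuations)
        best = float("-inf")
        for owner in product(range(n), repeat=m):   # owner[j] = player receiving item j
            bundles = [frozenset(j for j in range(m) if owner[j] == i) for i in range(n)]
            best = max(best, sum(v_i(S_i) for v_i, S_i in zip(valuations, bundles)))
        return best

The n^m running time of this enumeration is what makes poly-time/poly-communication guarantees the interesting regime.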


Within Algorithmic Game Theory, this problem serves as a lens through which to examine the interplay between computation and incentives. For example: is it the case that whenever a poly-time/poly-communication algorithm for honest players can achieve an approximation guarantee of c when all valuations lie in V, a poly-time/poly-communication truthful mechanism for strategic players can achieve an approximation guarantee of c when all valuations lie in V as well?


In this talk, I’ll give a brief history, then survey three recent results on this topic which:

- provide the first separation between achievable guarantees of poly-communication algorithms and poly-communication truthful mechanisms for any V (joint works with Mark Braverman and Jieming Mao, and with Sepehr Assadi, Hrishikesh Khandeparkar, and Raghuvansh Saxena).

- revisit existing separations between poly-time algorithms and poly-time truthful mechanisms via a new solution concept “Implementation in Advised Strategies” (joint work with Linda Cai and Clayton Thomas).

- resolve the communication complexity of combinatorial auctions for two subadditive players (joint work with Tomer Ezra, Michal Feldman, Eric Neyman, and Inbal Talgam-Cohen, time-permitting).


April 2: Santhoshini Velusamy

Abstract:

TBD

April 9: Andrew Drucker: An Improved Exponential-Time Approximation Algorithm for Fully-Alternating Games Against Nature

Abstract:

"Games against Nature" [Papadimitriou '85] are two-player games of perfect information, in which one player's moves are made randomly. Estimating the value of such games (i.e., winning probability under optimal play by the strategic player) is an important goal in the study of decision-making under uncertainty. The problem's PSPACE-completeness does not rule out non-trivial algorithms, a goal fulfilled in certain cases by Littman, Majercik, and Pitassi [LMP '01].

We study the case in which the players strictly alternate with binary moves, for which [LMP '01] does not give a general speedup. We give a randomized algorithm to approximate the value of such games (and to produce near-optimal play). Our algorithm achieves exponential savings over brute force, making 2^{(1-delta)n} queries to the input game's lookup table for some absolute constant delta > 0, and certifies a lower bound on the game value with exponentially small expected additive error. (On the downside, delta is tiny and the algorithm uses exponential space.)
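
The brute-force benchmark is a direct expectimax recursion over the move tree (a minimal sketch, assuming the strategic player moves first; names ours). It reads all 2^n entries of the lookup table, which is exactly the bound the algorithm above beats:

    def game_value(table, prefix=(), n=None):
        """table: dict mapping full length-n move sequences (0/1 tuples) to win (0/1)."""
        if n is None:
            n = len(next(iter(table)))
        if len(prefix) == n:
            return table[prefix]
        children = [game_value(table, prefix + (b,), n) for b in (0, 1)]
        if len(prefix) % 2 == 0:
            return max(children)                  # strategic player: pick the best move
        return 0.5 * (children[0] + children[1])  # Nature: uniformly random move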

Our algorithm is recursive, and bootstraps a "base-case" algorithm for fixed-size inputs. The base-case algorithm and the form of recursive composition used are interesting and, we feel, likely to find uses beyond the present work.


April 16: Alina Ene: Adaptive gradient descent methods for constrained optimization

Abstract:

Adaptive gradient descent methods, such as the celebrated Adagrad algorithm (Duchi, Hazan, and Singer; McMahan and Streeter) and ADAM algorithm (Kingma and Ba), are some of the most popular and influential iterative algorithms for optimizing modern machine learning models. Algorithms in the Adagrad family use past gradients to set their step sizes and are remarkable due to their ability to automatically adapt to unknown problem structures such as (local or global) smoothness and convexity. However, these methods achieve suboptimal convergence guarantees even in the standard setting of minimizing a smooth convex function, and it has been a long-standing open problem to develop an accelerated analogue of Adagrad in the constrained setting.
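
For orientation (the textbook diagonal Adagrad update in its unconstrained form; the constrained variant additionally projects each iterate onto the feasible set, and all names and constants here are illustrative):

    import numpy as np

    def adagrad(grad, x0, eta=0.1, eps=1e-8, steps=1000):
        x = np.asarray(x0, dtype=float).copy()
        g2 = np.zeros_like(x)
        for _ in range(steps):
            g = grad(x)
            g2 += g * g                            # accumulate squared past gradients
            x -= eta * g / (np.sqrt(g2) + eps)     # per-coordinate adaptive step size
        return x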

In this talk, we present one such accelerated adaptive algorithm for constrained convex optimization that simultaneously achieves optimal convergence in the smooth, non-smooth, and stochastic setting, without any knowledge of the smoothness parameters or the variance of the stochastic gradients.

The talk is based on joint work with Huy Nguyen (Northeastern University) and Adrian Vladu (CNRS & IRIF, Université de Paris), available here: https://arxiv.org/abs/2007.08840.

April 23: Ali Vakilian: Approximation Algorithms for Fair Clustering

Abstract: The growing use of automated machine learning in high-stakes decision-making tasks has led to an extensive line of research on the societal aspects of algorithms. In particular, the design of fair algorithms for the task of clustering, which is a core problem in both machine learning and algorithms, has received a great deal of attention in recent years. The Fair Clustering problem was first introduced in the seminal work of Chierichetti, Kumar, Lattanzi, and Vassilvitskii (2017), who proposed “proportionality/balance inside the clusters” as the notion of fairness. Since then, fairness in clustering has been advanced in this setting and has been further studied in other contexts such as equitable representation and individual fairness.
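
The balance notion from Chierichetti et al. is easy to state for two demographic groups (a minimal sketch; function names ours): a clustering is fairer the closer every cluster's group ratio is to even.

    def balance(clusters, color):
        """clusters: iterable of lists of points; color: dict point -> 0 or 1."""
        worst = 1.0
        for C in clusters:
            ones = sum(color[p] for p in C)
            zeros = len(C) - ones
            if ones == 0 or zeros == 0:
                return 0.0                  # a monochromatic cluster has balance 0
            worst = min(worst, ones / zeros, zeros / ones)
        return worst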


In this talk, I will overview my recent work on the design of efficient approximation algorithms for fair clustering in all three of the aforementioned contexts. My talk is based on two papers co-authored with Arturs Backurs, Piotr Indyk, Yury Makarychev, Krzysztof Onak, Baruch Schieber, and Tal Wagner.

Bio. Ali Vakilian is a postdoctoral researcher at the IDEAL institute hosted by TTIC. His research interests include learning-based algorithms, algorithmic fairness, algorithms for massive data and combinatorial optimization. Ali received his PhD from MIT under the supervision of Erik Demaine and Piotr Indyk.

April 30: Dingyu Wang: Information theoretic limits of cardinality estimation: Fisher meets Shannon

Abstract: Estimating the cardinality (number of distinct elements) of a large multiset is a classic problem in streaming and sketching, dating back to Flajolet and Martin's Probabilistic Counting (PCSA) algorithm from 1983 (a toy sketch of the PCSA idea follows the list of results below). In this paper we study the intrinsic tradeoff between the space complexity of the sketch and its estimation error in the random oracle model. We define a new measure of efficiency for cardinality estimators called the Fisher-Shannon (Fish) number H/I. It captures the tension between the limiting Shannon entropy (H) of the sketch and its normalized Fisher information (I), which characterizes the variance of a statistically efficient, asymptotically unbiased estimator. Our results are as follows.

• We prove that all base-q variants of Flajolet and Martin's PCSA sketch have Fish-number H_0/I_0 ≈ 1.98016 and that every base-q variant of (Hyper)LogLog has Fish-number worse than H_0/I_0, but that they tend to H_0/I_0 in the limit as q → ∞. Here H_0, I_0 are precisely defined constants.

• We describe a sketch called Fishmonger that is based on a smoothed, entropy-compressed variant of PCSA with a different estimator function. It is proved that with high probability, Fishmonger processes a multiset of [U] such that at all times, its space is O(log^2 log U) + (1 + o(1))(H_0/I_0) b ≈ 1.98 b bits and its standard error is 1/sqrt(b).

• We give circumstantial evidence that H_0/I_0 is the optimum Fish-number of mergeable sketches for Cardinality Estimation. We define a class of linearizable sketches and prove that no member of this class can beat H_0/I_0. The popular mergeable sketches are, in fact, also linearizable.
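
As promised above, here is a toy single-bitmap version of the Flajolet-Martin idea behind PCSA (PCSA proper keeps many such bitmaps and averages; the hash stands in for the random oracle, and all names are ours):

    import hashlib

    def rho(x):
        """Position (0-indexed) of the lowest-order 1-bit of x."""
        return (x & -x).bit_length() - 1

    def fm_estimate(stream):
        bitmap = 0
        for item in stream:
            h = int.from_bytes(hashlib.sha256(str(item).encode()).digest()[:8], "big")
            bitmap |= 1 << rho(h | (1 << 63))   # guard bit keeps rho well-defined
        r = rho(~bitmap)                        # lowest bitmap position still unset
        return (2 ** r) / 0.77351               # classic Flajolet-Martin bias constant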