Michigan-Purdue Theory Seminar
Fall 2020

Date: Fridays, 10:00 AM EST, at the following Zoom meeting room: purdue-edu.zoom.us/j/96106431696

Organizers: Greg Bodwin (bodwin@umich.edu), Euiwoong Lee (euiwoong@umich.edu), and Kent Quanrud (krq@purdue.edu). Please reach out to any of the organizers if you would like to give a talk!

iCal link

September 11: Yannai A. Gonczarowski: The Menu-Size of Approximately Optimal Auctions

Abstract:

We consider a monopolist who wishes to sell n goods to a single additive buyer, where the buyer's valuations for the goods are drawn according to independent distributions. It is well known that—unlike in the single-item case—the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries, that is, offering the buyer a choice between a continuum of lottery tickets. It is also known that simple auctions with a finite bounded number of menu entries (lotteries for the buyer to choose from) can extract a constant fraction of the optimal revenue, and that for bounded distributions it is possible to extract an arbitrarily high fraction of the optimal revenue via a finite bounded menu size. Nonetheless, the question of whether an arbitrarily high fraction of the optimal revenue can be extracted via a finite menu size when the valuation distributions possibly have unbounded support (or via a finite bounded menu size when the support of the distributions is bounded by an unknown bound) has remained open since the seminal paper of Hart and Nisan (2013), as has the question of any lower bound on the menu size that suffices for extracting an arbitrarily high fraction of the optimal revenue when selling a fixed number of goods, even for two goods and even for i.i.d. bounded distributions.


In this talk, we resolve both of these questions. We first show that for every n and for every ε>0, there exists a menu-size bound C=C(n,ε) such that auctions of menu size at most C suffice for extracting a (1-ε) fraction of the optimal revenue from any valuation distributions, and we give an upper bound on C(n,ε), even when the valuation distributions are unbounded and nonidentical. We then proceed to give two lower bounds, which hold even for bounded i.i.d. distributions: one on the dependence on n when ε=1/n and n grows large, and the other on the dependence on ε when n is fixed and ε grows small. Finally, we apply these upper and lower bounds to derive results regarding the deterministic communication complexity required to run an auction that achieves such an approximation.
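To make the menu abstraction concrete, here is a small Python sketch (illustrative only and not taken from the papers below; the menu entries, distributions, and names are made up): an additive buyer picks a utility-maximizing entry from a finite menu of lotteries, and the seller's revenue is estimated by Monte Carlo.

```python
import random

def buyer_choice(valuation, menu):
    """Additive buyer picks the menu entry maximizing expected utility.

    valuation: list of per-good values v_1..v_n.
    menu: list of (probs, price), where probs[i] is the probability the
          lottery allocates good i and price is its payment.  The empty
          entry (receive nothing, pay 0) is always implicitly available.
    """
    best_entry, best_utility = ([0.0] * len(valuation), 0.0), 0.0
    for probs, price in menu:
        expected_value = sum(p * v for p, v in zip(probs, valuation))
        if expected_value - price > best_utility:
            best_entry, best_utility = (probs, price), expected_value - price
    return best_entry

def expected_revenue(sample_valuation, menu, trials=100_000):
    """Monte Carlo estimate of the seller's revenue under the given menu."""
    total = 0.0
    for _ in range(trials):
        _, price = buyer_choice(sample_valuation(), menu)
        total += price
    return total / trials

# Toy menu of size 3 for two goods: each good separately, or the bundle.
menu = [([1.0, 0.0], 0.6), ([0.0, 1.0], 0.6), ([1.0, 1.0], 0.9)]
print(expected_revenue(lambda: [random.random(), random.random()], menu))
```

The menu size in the sense of the abstract is simply the number of entries offered; the results above bound how many entries are needed to get within a (1-ε) factor of the optimal revenue.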


Based upon:

* The Menu-Size Complexity of Revenue Approximation, Moshe Babaioff, Y. A. G., and Noam Nisan, STOC 2017

* Bounding the Menu-Size of Approximately Optimal Auctions via Optimal-Transport Duality, Y. A. G., STOC 2018


Bio:

Yannai Gonczarowski is a postdoctoral researcher at Microsoft Research New England. His main research interests lie at the interface between the theory of computation, economic theory, and game theory—an area commonly referred to as Algorithmic Game Theory. In particular, Yannai is interested in various aspects of complexity in mechanism design (where mechanisms are defined broadly from auctions to matching markets), including the interface between mechanism design and machine learning. Yannai received his PhD from the Departments of Math and CS, and the Center for the Study of Rationality, at the Hebrew University of Jerusalem, where he was advised by Sergiu Hart and Noam Nisan, as an Adams Fellow of the Israel Academy of Sciences and Humanities. Throughout most of his PhD studies, he was also a long-term research intern at Microsoft Research in Herzliya. He holds an M.Sc. in Math (summa cum laude) and a B.Sc. in Math and CS (summa cum laude, Valedictorian) from the Hebrew University. Yannai is also a professionally trained opera singer, having acquired a bachelor’s degree and a master’s degree in Classical Singing at the Jerusalem Academy of Music and Dance. For his doctoral dissertation, Yannai was awarded the Hans Wiener Prize of the Hebrew University of Jerusalem, the 2018 Michael B. Maschler Prize of the Israeli Chapter of the Game Theory Society, and the ACM SIGecom Doctoral Dissertation Award for 2018. Yannai is also the recipient of the ACM SIGecom Award for Best Presentation by a Student or Postdoctoral Researcher at EC’18, and of the Best Paper Award at MATCH-UP’19. His first textbook, "Mathematical Logic through Python" (Gonczarowski and Nisan), which introduces a new approach to teaching the material of a basic Logic course to Computer Science students, tailored to the unique intuitions and strengths of this cohort of students, is forthcoming from Cambridge University Press.

September 18: Sepideh Mahabadi: Non-Adaptive Adaptive Sampling in Turnstile Streams

Abstract:

Adaptive sampling is a useful algorithmic tool for data summarization problems in the classical centralized setting, where the entire dataset is available to the single processor performing the computation. Adaptive sampling repeatedly selects rows of an underlying n by d matrix A, where n>>d, with probabilities proportional to their distances to the subspace spanned by the previously selected rows. Intuitively, adaptive sampling seems limited to trivial multi-pass algorithms in the streaming model of computation, due to its inherently sequential nature of assigning sampling probabilities to each row only after the previous iteration is completed. Surprisingly, we show this is not the case by giving the first one-pass algorithms for adaptive sampling on turnstile streams, using space poly(d, k, log n), where k is the number of adaptive sampling rounds to be performed.
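For intuition, the following Python sketch implements the centralized adaptive sampling loop described above; the talk's contribution is simulating this process in one pass over a turnstile stream, whereas the sketch assumes the full matrix is in memory (the example matrix and names are illustrative).

```python
import numpy as np

def adaptive_sample(A, k, rng=None):
    """Centralized adaptive sampling: k rounds, each selecting one row of A
    with probability proportional to its squared distance from the span of
    the rows selected so far."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = A.shape
    selected = []
    for _ in range(k):
        if selected:
            Q, _ = np.linalg.qr(A[selected].T)   # orthonormal basis of the span
            residual = A - (A @ Q) @ Q.T         # components orthogonal to the span
        else:
            residual = A
        dist2 = np.sum(residual ** 2, axis=1)    # squared distances to the span
        selected.append(rng.choice(n, p=dist2 / dist2.sum()))
    return selected

A = np.vstack([np.eye(3), 2 * np.eye(3), np.ones((4, 3))])
print(adaptive_sample(A, k=2))
```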


Our adaptive sampling procedure has a number of applications to various data summarization problems on turnstile streams that either improve state-of-the-art or have only been previously studied in the more relaxed row-arrival model. This includes column subset selection, subspace approximation, projective clustering, and volume maximization. We complement our volume maximization algorithmic results with lower bounds that are tight up to lower order terms, even for multi-pass algorithms. By a similar construction, we also obtain lower bounds for volume maximization in the row-arrival model, which we match with competitive upper bounds.


This is a joint work with Ilya Razenshteyn, David Woodruff, and Samson Zhou.

September 25: Karthik Chandrasekaran: Hypergraph k-cut for fixed k in deterministic polynomial time

Abstract:

In the hypergraph k-cut problem, the input is a hypergraph along with a fixed constant k and the goal is to find a smallest subset of hyperedges whose removal ensures that the remaining hypergraph has at least k connected components. The graph k-cut problem is solvable in polynomial time (Goldschmidt and Hochbaum, 1994) while the complexity of the hypergraph k-cut problem was open. In this talk, I will present the first deterministic polynomial-time algorithm to solve the hypergraph k-cut problem. Our algorithm relies on novel structural results that provide new insights even for the graph k-cut problem.
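The following exponential-time Python sketch only pins down the problem statement (it is not the polynomial-time algorithm from the talk): it searches for the smallest set of hyperedges whose removal leaves at least k connected components.

```python
from itertools import combinations

def num_components(num_vertices, hyperedges):
    """Connected components of a hypergraph, via union-find."""
    parent = list(range(num_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for edge in hyperedges:
        edge = list(edge)
        for v in edge[1:]:
            parent[find(v)] = find(edge[0])
    return len({find(v) for v in range(num_vertices)})

def hypergraph_k_cut_brute_force(num_vertices, hyperedges, k):
    """Smallest set of hyperedges whose removal leaves >= k components."""
    for size in range(len(hyperedges) + 1):
        for removed in combinations(range(len(hyperedges)), size):
            kept = [e for i, e in enumerate(hyperedges) if i not in removed]
            if num_components(num_vertices, kept) >= k:
                return [hyperedges[i] for i in removed]
    return None

edges = [{0, 1, 2}, {2, 3}, {3, 4}, {1, 4}]
print(hypergraph_k_cut_brute_force(5, edges, k=2))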

Based on joint work with Chandra Chekuri.

October 2: Andrea Lincoln: New Techniques for Proving Fine-Grained Average-Case Hardness

Abstract: In this talk I will cover a new technique for worst-case to average-case reductions. There are two primary concepts introduced in this talk: "factored" problems and a framework for worst-case to average-case fine-grained (WCtoACFG) self reductions.

We will define new versions of OV, kSUM, and zero-k-clique that are both worst-case and average-case fine-grained hard assuming the core hypotheses of fine-grained complexity. We then use these as a basis for fine-grained hardness and average-case hardness of other problems. Our hard factored problems are also simple enough that we can reduce them to many other problems, e.g., to edit distance, k-LCS, and versions of Max-Flow. We further consider counting variants of the factored problems and give WCtoACFG reductions for them for a natural distribution.
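As a reference point, here is the classical Orthogonal Vectors (OV) problem that these hypotheses build on, in a brute-force Python sketch; the "factored" variants used in the talk are defined in the paper and are not reproduced here.

```python
def orthogonal_vectors(A, B):
    """Classical OV: is there a in A and b in B with <a, b> = 0 over the
    Booleans?  Brute force takes O(n^2 * d) time; the OV hypothesis asserts
    that no n^{2 - eps} * poly(d) time algorithm exists."""
    return any(all(x * y == 0 for x, y in zip(a, b)) for a in A for b in B)

A = [(1, 0, 1), (0, 1, 1)]
B = [(1, 1, 0), (0, 1, 0)]
print(orthogonal_vectors(A, B))  # True: (1, 0, 1) is orthogonal to (0, 1, 0)
```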

To show hardness for these factored problems, we formalize the framework of [Boix-Adsera et al. 2019] that was used to give a WCtoACFG reduction for counting k-cliques. We define an explicit property of problems such that, if a problem has this property, the framework yields a WCtoACFG self reduction for it.

In total these factored problems and the framework together give tight fine-grained average-case hardness for various problems including the counting variant of regular expression matching.

Based on joint work with Mina Dalirrooyfard and Virginia Vassilevska Williams.

October 9: Vaggos Chatziafratis: Aggregating Inconsistent Information in Ranking, Clustering and Phylogenetic Trees

Abstract: We revisit standard data analysis tasks, where the goal is to construct a ranking, clustering or phylogenetic tree on $n$ elements by aggregating local information on overlapping subsets of the elements. Such information includes ordering constraints for ranking (e.g., pairwise comparisons) and "must-link/cannot-link" constraints for clustering, whereas for hierarchical clustering, the analogous constraints are "desired/forbidden" triplets or quartets, which specify ordering relations to be included/avoided in the final output. Inevitably, such constraints may be inconsistent with each other, so the goal becomes to maximize the extent of agreement with the collected information.


In this paper, we study approximation algorithms and their limitations for these basic tasks in the worst case and beyond. On the positive side, we achieve better approximations for all considered problems and often overcome established worst-case hardness, under a simple query model for obtaining noisy constraints from a ground-truth solution. A common underlying theme in our algorithms is the use of Max Cut on graphs with negative edges. On the negative side, we address an open question regarding ordering problems on trees (triplets/quartets consistency), raised in previous works in computational biology, by presenting near-optimal hardness of approximation. As a consequence, we get optimal hardness for the forbidden triplets problem and, to the best of our knowledge, this is the first tight hardness of approximation result for a constraint satisfaction problem on trees.
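To illustrate the shape of that Max Cut subroutine, here is a toy single-swap local search for weighted Max Cut in Python, with negative edge weights allowed. It is only an illustration of the subroutine's input and output (and has no approximation guarantee); it is not the algorithm from the paper, and the example graph is made up.

```python
def local_search_max_cut(num_vertices, weighted_edges, max_iters=1000):
    """Single-swap local search for weighted Max Cut; weights may be negative.
    Partition the vertices into two sides to maximize the total weight of
    crossing edges."""
    side = [v % 2 for v in range(num_vertices)]          # arbitrary start
    def cut_value():
        return sum(w for u, v, w in weighted_edges if side[u] != side[v])
    for _ in range(max_iters):
        improved = False
        for v in range(num_vertices):
            before = cut_value()
            side[v] ^= 1                                  # try moving v
            if cut_value() <= before:
                side[v] ^= 1                              # revert if no gain
            else:
                improved = True
        if not improved:
            break
    return side, cut_value()

edges = [(0, 1, 2.0), (1, 2, -1.0), (0, 2, 3.0), (2, 3, 1.5)]
print(local_search_max_cut(4, edges))
```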


Based on joint work with Mohammad Mahdian and Sara Ahmadian.

October 16: Hemanta Maji: Computational Hardness of Optimal Fair Computation

Abstract: Characterizing the limits of achievable security under the most conservative computational hardness assumptions is one of the fundamental principles of cryptographic research. For instance, guaranteeing output delivery to honest parties when adversarial parties may abort prematurely during the protocol execution is significantly challenging. Against such powerful adversaries, the hardness assumptions necessary and sufficient to achieve various security levels are not well understood.

In the early 1980s, groundbreaking research introduced coin-tossing protocols using private-key cryptographic primitives to ensure that an adversary can change the honest party's output distribution by at most 1/sqrt(r) in r-round protocols. Two decades later, a sequence of seminal results, relying on strong public-key cryptographic primitives, constructed protocols where an adversary can change the output distribution only by 1/r, which coincides with the optimal achievable security.
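The 1/sqrt(r) scale can be seen in a toy simulation (a heavy simplification of a majority-style protocol, with no cryptographic commitments; the attack strategy and parameters below are illustrative, not from the talk): an adversary who aborts once, at an opportune round, shifts the probability of the majority outcome by roughly a constant times 1/sqrt(r).

```python
import random

def run_protocol(r, adversary_aborts, rng):
    """Toy r-round majority coin toss.  Each round a fair bit is revealed to
    the adversary first; if the adversary aborts, the honest party finishes
    that round and all later rounds with its own fresh coins.  The output is
    the majority bit."""
    bits, aborted = [], False
    for _ in range(r):
        b = rng.randint(0, 1)
        if not aborted and adversary_aborts(b, bits):
            aborted = True
            b = rng.randint(0, 1)   # honest party re-draws the aborted round
        bits.append(b)
    return int(2 * sum(bits) > r)

def estimate_bias(r, trials=200_000):
    rng = random.Random(0)
    attack = lambda bit, seen: bit == 0     # adversary wants output 1
    honest = lambda bit, seen: False
    p_attack = sum(run_protocol(r, attack, rng) for _ in range(trials)) / trials
    p_honest = sum(run_protocol(r, honest, rng) for _ in range(trials)) / trials
    return p_attack - p_honest

for r in (9, 25, 81):
    print(r, round(estimate_bias(r), 4), round(1 / r ** 0.5, 4))
```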

In light of these results, it is natural to wonder whether public-key cryptographic primitives are necessary to achieve this higher security threshold. Furthermore, do the coin-tossing protocols from the 1980s achieve the best possible security relying on private-key cryptographic primitives? This talk shall present recent results resolving these long-standing foundational problems in cryptography.

The key technical innovation is a potential-based proof-technique introduced in a sequence of our recent works.

October 23 at 2PM (!): Avishay Tal: Towards Optimal Separations between Quantum and Randomized Query Complexities

Abstract: The query model offers a concrete setting where quantum algorithms are provably superior to randomized algorithms. Beautiful results by Bernstein-Vazirani, Simon, Aaronson, and others presented partial Boolean functions that can be computed by quantum algorithms making far fewer queries than their randomized analogs. Until recently, separations of $O(1)$ vs. $\sqrt{N}$ between quantum and randomized query complexities remained the state of the art (where N is the input length), leaving open the question of whether $O(1)$ vs. $N^{1/2+\Omega(1)}$ separations are possible.


We answer this question in the affirmative. Our separating problem is a variant of the Aaronson-Ambainis k-fold Forrelation problem. For any constant $\epsilon>0$, we give an $O(1)$ vs. $\Omega(N^{2/3-\epsilon})$ separation between the quantum and randomized query complexities of the k-fold Forrelation problem, for an appropriate value of k.
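For reference, the two-fold Forrelation quantity can be written out and evaluated by brute force for small n; the sketch below does exactly that (the example functions are illustrative, and the k-fold variant used in the talk generalizes this definition).

```python
from itertools import product

def forrelation(f, g, n):
    """Two-fold Forrelation of f, g : {0,1}^n -> {+1,-1}:
        Phi = 2^{-3n/2} * sum_{x,y} f(x) * (-1)^{x.y} * g(y),
    i.e. how well f correlates with the Boolean Fourier transform of g.
    The decision problem is to distinguish Phi large from Phi near zero."""
    points = list(product((0, 1), repeat=n))
    total = 0.0
    for x in points:
        for y in points:
            dot = sum(xi & yi for xi, yi in zip(x, y))
            total += f(x) * (-1) ** dot * g(y)
    return total / 2 ** (1.5 * n)

# For n = 2, the inner-product function is perfectly forrelated with itself.
ip = lambda x: (-1) ** (x[0] & x[1])
print(forrelation(ip, ip, 2))   # 1.0
```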


Our proof is Fourier analytic and uses new bounds on the Fourier spectrum of classical decision trees, which could be of independent interest. The proof exploits the differences between sparse polynomials and dense polynomials that arise when analyzing quantum query algorithms versus randomized query algorithms. We further conjectured that the Fourier bounds could be improved in a precise manner. In an exciting development, the conjecture was recently proved by Sherstov, Storozhenko, and Wu -- yielding optimal $O(1)$ vs. $N^{1-\epsilon}$ separations between the quantum and randomized query complexities of the k-fold Forrelation problem.


Short Bio: Avishay Tal is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Avishay received his M.Sc. degree in 2012 under the supervision of Amir Shpilka and his Ph.D. in 2015 under the supervision of Ran Raz. He was a post-doc fellow at the Institute for Advanced Study (2015-2017), Stanford University (2017-2019), and the Simons Institute for the Theory of Computing (Fall 2018). His research interests lie within theoretical computer science and include complexity theory at large, analysis of Boolean functions, circuit complexity, pseudorandomness, computational learning theory, and quantum computing.


October 30: Parinya Chalermsook: Coloring and Maximum Weight Independent Set of Rectangles

Abstract: In 1960, Asplund and Grünbaum proved that every intersection graph of axis-parallel rectangles in the plane admits an O(ω^2)-coloring, where ω is the maximum size of a clique. We present the first asymptotic improvement over this six-decade-old bound, proving that every such graph is O(ω log ω)-colorable and presenting a polynomial-time algorithm that finds such a coloring. This improvement leads to a polynomial-time O(log log n)-approximation algorithm for the maximum weight independent set problem in axis-parallel rectangles, which improves on the previous approximation ratio of O(log n log log n).
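As a baseline for comparison, the Python sketch below builds the intersection graph of a few axis-parallel rectangles and colors it greedily; greedy only guarantees Delta+1 colors, which is far weaker than the O(ω log ω) bound of the talk (the rectangles and names are illustrative).

```python
def rectangles_intersect(r1, r2):
    """Closed axis-parallel rectangles given as (x1, y1, x2, y2)."""
    return not (r1[2] < r2[0] or r2[2] < r1[0] or
                r1[3] < r2[1] or r2[3] < r1[1])

def greedy_coloring(rects):
    """Build the intersection graph and color it greedily with the first
    free color (a baseline, not the algorithm from the talk)."""
    n = len(rects)
    adj = [[j for j in range(n)
            if j != i and rectangles_intersect(rects[i], rects[j])]
           for i in range(n)]
    colors = [None] * n
    for v in range(n):
        used = {colors[u] for u in adj[v] if colors[u] is not None}
        colors[v] = next(c for c in range(n) if c not in used)
    return colors

rects = [(0, 0, 2, 2), (1, 1, 3, 3), (2.5, 0, 4, 1), (5, 5, 6, 6)]
print(greedy_coloring(rects))   # e.g. [0, 1, 0, 0]
```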

November 6: Matthew Fahrbach: Edge-Weighted Online Bipartite Matching

Abstract: Online bipartite matching is one of the most fundamental problems in the online algorithms literature. Karp, Vazirani, and Vazirani (STOC 1990) introduced an elegant algorithm for unweighted bipartite matching that achieves an optimal competitive ratio of $1-1/e$. Aggarwal et al. (SODA 2011) later generalized their algorithm and analysis to the vertex-weighted case. Little is known, however, about the most general edge-weighted problem aside from the trivial $1/2$-competitive greedy algorithm. In this paper, we present the first online algorithm that breaks the long-standing $1/2$ barrier and achieves a competitive ratio of at least $0.5086$. In light of the hardness result of Kapralov, Post, and Vondr\'ak (SODA 2013), which rules out beating a $1/2$ competitive ratio for the more general problem of monotone submodular welfare maximization, our result can be seen as strong evidence that edge-weighted bipartite matching is strictly easier than submodular welfare maximization in the online setting.

The main ingredient in our online matching algorithm is a novel subroutine called \emph{online correlated selection} (OCS), which takes a sequence of pairs of vertices as input and selects one vertex from each pair. Instead of using a fresh random bit to choose a vertex from each pair, the OCS negatively correlates decisions across different pairs and provides a quantitative measure on the level of correlation. We believe our OCS technique is of independent interest and will find further applications in other online optimization problems.
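The sketch below only illustrates the OCS interface: the baseline that flips a fresh coin per pair, next to a toy rule that negatively correlates two consecutive pairs sharing a vertex. The actual OCS construction and its quantitative guarantees are more involved; the pairing rule and names here are made up for illustration.

```python
import random

def independent_selection(pairs, rng):
    """Baseline: a fresh fair coin for every pair."""
    return [p[rng.randint(0, 1)] for p in pairs]

def toy_correlated_selection(pairs, rng):
    """Toy negatively-correlated selector (NOT the paper's OCS): when two
    consecutive pairs share a vertex, decide both with a single coin so
    that the shared vertex is picked in exactly one of them."""
    out, i = [], 0
    while i < len(pairs):
        if i + 1 < len(pairs):
            shared = set(pairs[i]) & set(pairs[i + 1])
            if shared:
                v = shared.pop()
                coin = rng.randint(0, 1)
                first = v if coin == 0 else next(x for x in pairs[i] if x != v)
                second = next(x for x in pairs[i + 1] if x != v) if coin == 0 else v
                out.extend([first, second])
                i += 2
                continue
        out.append(pairs[i][rng.randint(0, 1)])   # no shared vertex: fresh coin
        i += 1
    return out

rng = random.Random(0)
pairs = [("u", "v"), ("u", "w"), ("x", "y")]
print(toy_correlated_selection(pairs, rng))
```

Each pair still selects either of its two vertices with probability 1/2, but the two correlated pairs never both pick the shared vertex, which is the kind of negative correlation the OCS quantifies.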

November 13: Hung Le: A Unified and Fine-Grained Approach for Light Spanners

Abstract: Seminal works on light spanners from recent years provide near-optimal tradeoffs between the stretch and lightness of spanners in general graphs, minor-free graphs, and doubling metrics. In FOCS'19 the authors provided a "truly optimal" tradeoff for Euclidean low-dimensional spaces. Some of these papers employ inherently different techniques than others. Moreover, the runtime of these constructions is rather high.

In this work we present a unified and fine-grained approach for light spanners. Besides the obvious theoretical importance of unification, we demonstrate the power of our approach in obtaining a plethora of new results with: (1) improved lightness bounds, and (2) faster construction times.
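For context, the classical greedy spanner, a construction whose lightness much of this literature analyzes, looks as follows (background only, not the new framework from the talk; the example graph is illustrative): edges are considered in increasing order of weight and added only when the current spanner distance between the endpoints exceeds t times the edge weight.

```python
import heapq

def greedy_spanner(weighted_edges, t):
    """Classical greedy t-spanner: add an edge only if the current spanner
    distance between its endpoints exceeds t times its weight."""
    def dijkstra_dist(adj, source, target):
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == target:
                return d
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return float("inf")

    adj, spanner = {}, []
    for u, v, w in sorted(weighted_edges, key=lambda e: e[2]):
        if dijkstra_dist(adj, u, v) > t * w:
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
            spanner.append((u, v, w))
    return spanner

edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.9), (2, 3, 2.0), (0, 3, 5.0)]
print(greedy_spanner(edges, t=2))
```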

November 20: Ioannis Caragiannis: Impartial selection, additive approximation guarantees, and priors

Abstract: We study the problem of impartial selection, a topic that lies at the intersection of computational social choice and mechanism design. The goal is to select the most popular individual among a set of community members. The input can be modelled as a directed graph, where each node represents an individual, and a directed edge indicates that one community member nominates or approves of another. An impartial mechanism is robust to potential selfish behaviour of the individuals and provides appropriate incentives to voters to report their true preferences by ensuring that a node's chance of becoming the winner does not depend on its outgoing edges. The goal is to design impartial mechanisms that select a node with an in-degree that is as close as possible to the highest in-degree.
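To see what impartiality buys and costs, here is a sketch of a classical mechanism from earlier work on impartial selection (not the mechanisms analyzed in this talk; the input format and names are illustrative): nominations are only counted across a random partition, so no node's outgoing edges can affect its own chance of winning.

```python
import random

def two_partition_selection(nominations, rng):
    """Classical impartial mechanism: split nodes uniformly into sides A and
    B, count only nominations from A to B, and select the side-B node with
    the most such nominations.  A node's outgoing edges never influence its
    own chance of winning, at the price of ignoring some nominations.

    nominations: dict mapping each node to the set of nodes it nominates.
    """
    nodes = list(nominations)
    side_a = {v for v in nodes if rng.random() < 0.5}
    side_b = [v for v in nodes if v not in side_a]
    if not side_b:
        return None
    indegree = {v: 0 for v in side_b}
    for voter in side_a:
        for target in nominations[voter]:
            if target in indegree:
                indegree[target] += 1
    return max(side_b, key=lambda v: indegree[v])

rng = random.Random(1)
nominations = {1: {2}, 2: {3}, 3: {2}, 4: {2, 3}}
print(two_partition_selection(nominations, rng))
```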

Recent progress has identified impartial selection rules with optimal approximation ratios. It was noted, though, that the worst-case instances are graphs with few vertices. Motivated by this fact, we propose the study of additive approximation, the difference between the highest number of nominations and the number of nominations of the selected member, as an alternative measure of quality. We present randomized impartial selection mechanisms with additive approximation guarantees of o(n), where n is the number of nodes in the input graph.

We furthermore demonstrate that prior information on voters' preferences can be useful in the design of simple (deterministic) impartial selection mechanisms with good additive approximation guarantees. In this direction, we consider different models of prior information and analyze the performance of a natural selection mechanism that we call approval voting with default (AVD).

The talk is based on joint work with George Christodoulou and Nicos Protopapas. No advanced mathematical background (besides basic knowledge on graphs and probability) is required to follow the talk.

Bio: Ioannis Caragiannis joined the Department of Computer Science of Aarhus University as a professor in August 2020. Before, he was a faculty member at the Department of Computer Engineering and Informatics of the University of Patras (since 2006), where he served as Director of its Division of Foundations and Applications of Computer Science (2017-2020). His research interests include design and analysis of algorithms, economics and computation, and foundations of machine learning and artificial intelligence. He has published 160 papers in conference proceedings, scientific journals, and books, and has participated in basic research projects funded by the European Commission and the Greek state. He regularly serves as a program committee member in conferences at the interface of theoretical computer science, artificial intelligence, and economics, such as ACM Conference on Economics and Computation (EC) and the International Joint Conference on Artificial Intelligence (IJCAI). Recently, he was program co-chair of the 15th International Conference on Web and Internet Economics (WINE 2019). Ioannis Caragiannis is a member of the Association for Computing Machinery (ACM, SIGACT, SIGECOM), the Association for the Advancement of Artificial Intelligence (AAAI), and the European Association for Theoretical Computer Science (EATCS).

December 4: Aditya Ravi: New Analysis of the Factor Refinement Algorithm with Applications

Abstract: In this talk I will discuss a new, simple characterization of the output of the Factor Refinement Algorithm, formally introduced and analyzed by Bach, Driscoll, and Shallit in 1993. Our characterization relies only on basic linear algebra. As applications, we obtain new relations between computing the square-free decomposition of an integer, Euler’s totient function, the radical of an integer, and some other multiplicative functions. While some of the above connections were already established in the literature, we believe that our contribution makes them simpler and more explicit.
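For readers unfamiliar with it, the Python sketch below is a naive version of factor refinement: entries are repeatedly split along pairwise gcds until all bases are pairwise coprime (the input and helper names are illustrative, and the talk's new characterization of the output is not reproduced here).

```python
from math import gcd, prod

def factor_refinement(numbers):
    """Naive factor refinement: repeatedly split entries along pairwise gcds
    until the bases are pairwise coprime.  Returns (base, exponent) pairs
    whose product equals the product of the inputs, and each input factors
    over the resulting coprime bases."""
    entries = [(x, 1) for x in numbers if x > 1]
    done = False
    while not done:
        done = True
        for i in range(len(entries)):
            for j in range(i + 1, len(entries)):
                (a, e), (b, f) = entries[i], entries[j]
                g = gcd(a, b)
                if g == 1:
                    continue
                if a == b:                       # merge duplicate bases
                    new = [(a, e + f)]
                else:                            # split both entries along the gcd
                    new = [(a // g, e), (b // g, f), (g, e + f)]
                entries = [entries[k] for k in range(len(entries)) if k not in (i, j)]
                entries += [(base, exp) for base, exp in new if base > 1]
                done = False
                break
            if not done:
                break
    return entries

refined = factor_refinement([12, 45])
print(refined)                                          # e.g. [(4, 1), (5, 1), (3, 3)]
print(prod(b ** e for b, e in refined) == 12 * 45)      # True
```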


Based on joint work with Dr. Ilya Volkovich.

December 11: Jan van den Brand: Fast algorithms for linear programs and bipartite matching via new data structures and interior-point methods

Abstract:

Linear Programs (LPs) capture many optimization problems, such as shortest paths or bipartite matching. In recent years, there have been substantial improvements to LP solvers, resulting in algorithms that run in nearly linear time for dense LPs and dense graphs. In this talk, I will explain how these improvements came about through an interplay of interior-point methods and dynamic algorithms (data structures).


Interior point methods can be seen as a framework for constructing iterative algorithms that solve linear programs. Like many other iterative algorithms, they can be sped up via data structures that efficiently maintain the intermediate solution in each iteration. Recent advances in the area of dynamic algorithms (data structures) showed that LP solvers could be improved if the interior point method could be made more robust against approximation errors.
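The sketch below shows the kind of iteration involved: a toy dense log-barrier interior-point solver for an inequality-form LP. It is illustrative only; the tiny example LP, parameter choices, and names are made up, and the methods in the talk are more sophisticated primal-dual IPMs whose per-iteration linear algebra is exactly what the dynamic data structures maintain approximately.

```python
import numpy as np

def barrier_lp_solve(c, A, b, x0, t=1.0, mu=2.0, outer_iters=30, newton_iters=50):
    """Toy log-barrier interior-point method for  min c^T x  s.t.  A x <= b,
    starting from a strictly feasible x0.  Each outer iteration re-centers
    on the central path with Newton's method, then increases t."""
    x = np.asarray(x0, dtype=float)
    c, A, b = np.asarray(c, float), np.asarray(A, float), np.asarray(b, float)
    for _ in range(outer_iters):
        for _ in range(newton_iters):
            slack = b - A @ x                         # must stay strictly positive
            grad = t * c + A.T @ (1.0 / slack)        # gradient of the barrier objective
            hess = A.T @ np.diag(1.0 / slack ** 2) @ A
            step = np.linalg.solve(hess, -grad)       # Newton step
            alpha = 1.0
            while np.any(b - A @ (x + alpha * step) <= 0):
                alpha *= 0.5                          # backtrack to stay feasible
            x = x + alpha * step
            if np.linalg.norm(grad) < 1e-9:
                break
        t *= mu
    return x

# min x + y  s.t.  x >= 0, y >= 0, x + y >= 1   (optimal value 1)
A = np.array([[-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, -1.0])
print(barrier_lp_solve(c=[1.0, 1.0], A=A, b=b, x0=[2.0, 2.0]))
```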


So while the interior-point method motivates research in the area of dynamic algorithms, results in dynamic algorithms also motivated further research on interior-point methods. This cycle led to repeated improvements in both areas and eventually resulted in nearly linear time solvers for dense LPs and nearly linear time algorithms for bipartite matching on moderately dense graphs.