6th TOCA-SV Day

FRIDAY, MAY 24, 2019, 9:30 AM - 5:30 PM

Register HERE

Parking Information: There are many visitor parking spots here in front of LMK2, but please consider carpooling.

Schedule

0930-1015: Breakfast

1015-1030: Welcome

1030-1115: Aviad Rubinstein (Stanford) SETH vs Approximation

1115-1200: Sergei Vassilvitskii (Google Research) Machine Learning meets Online Algorithms

1200-1245: Phil Long (Google Research) The Singular Values of Convolutional Layers

1245-1400: Lunch (provided)

1400-1445: Prasad Raghavendra (UC Berkeley) Using Semidefinite Programming for Inference -- Case of Robust Community Detection

1445-1530: Rasmus Pagh (Google Research / ITU Copenhagen) Hardness and Approximation of High-Dimensional Search Problems

1530-1600: Coffee Break

1600-1730: Student talks

  • AmirMahdi Ahmadinejad (Stanford) Nearly linear time algorithms for solving M-matrices, and finding Perron vectors
  • Joshua Brakensiek (Stanford) Bridging between 0/1 and linear programming via random walks
  • Clément Canonne (Stanford) Distributed goodness-of-fit: when you can't talk much and have little in common
  • Vaggos Chatziafratis (Stanford) On the computational power of online gradient descent
  • Shivam Garg (Stanford) Sample amplification: Increasing dataset size even when learning is impossible
  • Shweta Jain (UC Santa Cruz) An O(3^(n/3)) algorithm for clique counting
  • Marc Khoury (UC Berkeley) Adversarial training with Voronoi constraints
  • Paul Liu (Stanford) Greedy and local ratio algorithms in the MapReduce model
  • Yang Liu (Stanford) Parallel reachability in almost linear work and square root depth
  • Alireza Rezaei (U Washington) Gibbs sampling for k-determinantal point processes
  • Andrew Stolman (UC Santa Cruz) Random walks and testing for planarity
  • Kevin Tian (Stanford) A rank-1 sketch for matrix multiplicative weights
  • Hongyang Zhang (Stanford) Recovery guarantees for quadratic tensors with limited observations

Abstracts

Aviad Rubinstein (Stanford)

SETH vs Approximation

I will overview progress from the last couple of years on understanding the fine-grained complexity of approximation algorithms, with a focus on nearest-neighbor and string matching problems.

Mostly based on joint works with Amir Abboud, Josh Brakensiek, Lijie Chen, Shafi Goldwasser, Bernhard Haeupler, Kaifeng Lyu, Guy Rothblum, Saeed Seddighin, Amirbehshad Shahrasbi, Zhao Song, Xiaorui Sun, and Ryan Williams.

Sergei Vassilvitskii (Google Research)

Machine Learning meets Online Algorithms

Classic algorithm design focuses on optimizing worst case complexity, resulting in algorithms that perform well for any possible input. In many applications, however, the typical input is not just far from worst case, but has some predictable characteristics. We give a brief overview of recent results that incorporate machine learned predictions into algorithm design. These approaches improve the algorithms’ performance when predictions are correct, and nearly match the best known worst case performance bounds when the predictions are wrong.
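A standard illustration of this idea is the ski-rental problem: rent skis for $1 per day or buy them for $b, without knowing how many days the season will last. The sketch below is illustrative only (the function name and the simple "trust the prediction, but hedge with the classic break-even rule" policy are mine, not necessarily the algorithms from the talk): with a correct prediction it pays the offline optimum, and even with a wrong prediction it never pays more than roughly twice the optimum.

```python
def ski_rental_cost(days, buy_price, predicted_days):
    """Cost of a prediction-augmented ski-rental strategy.

    days           -- actual length of the season (unknown in advance)
    buy_price      -- one-time cost of buying (renting costs 1 per day)
    predicted_days -- machine-learned prediction of `days`
    """
    if predicted_days >= buy_price:
        # Prediction says the season is long: buy on day 1.
        return buy_price
    # Prediction says the season is short: rent, but hedge by buying
    # once rental payments would reach the buy price (the classic
    # 2-competitive break-even rule).
    if days < buy_price:
        return days                      # rented every day
    return (buy_price - 1) + buy_price   # rented b-1 days, then bought

# Correct long prediction: matches the offline optimum of buy_price.
assert ski_rental_cost(100, 10, 100) == 10
# Wrong short prediction: still within 2x the optimum of 10.
assert ski_rental_cost(100, 10, 5) == 19
```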

Phil Long (Google Research)

The Singular Values of Convolutional Layers

We characterize the singular values of the linear transformation associated with a standard 2D multi-channel convolutional layer, enabling their efficient computation. This characterization also leads to an algorithm for projecting a convolutional layer onto an operator-norm ball. We show that this is an effective regularizer; for example, it improves the test error of a deep residual network using batch normalization on CIFAR-10 from 6.2% to 5.3%.
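The talk concerns 2-D multi-channel layers; as a hedged illustration of the underlying idea in its simplest special case (1-D, single channel, circular padding, with function names my own), the convolution operator is then a circulant matrix, and its singular values are exactly the magnitudes of the zero-padded kernel's DFT, so they can be read off with one FFT instead of a full SVD.

```python
import numpy as np

def circulant_singular_values(kernel, n):
    """Singular values of the n x n circular-convolution operator for a
    1-D single-channel kernel: magnitudes of the padded kernel's DFT."""
    padded = np.zeros(n)
    padded[:len(kernel)] = kernel
    return np.sort(np.abs(np.fft.fft(padded)))[::-1]

# Sanity check against the explicit circulant operator matrix.
n, kernel = 8, np.array([1.0, -2.0, 1.0])
M = np.zeros((n, n))
for i in range(n):           # column i applies the kernel shifted by i
    for j, k in enumerate(kernel):
        M[(i + j) % n, i] = k
direct = np.linalg.svd(M, compute_uv=False)
assert np.allclose(np.sort(direct)[::-1], circulant_singular_values(kernel, n))
```

The paper's characterization generalizes this to 2-D multi-channel layers, where each spatial frequency contributes the singular values of a small per-frequency matrix.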

Prasad Raghavendra (UC Berkeley)

Using Semidefinite Programming for Inference -- Case of Robust Community Detection

We propose a new hierarchy of semidefinite programming relaxations for inference problems and demonstrate their application to regular stochastic block models.

A regular stochastic block model (RSBM) is a random graph model wherein the vertices are partitioned into k communities, and a random regular graph is sampled conditioned on a prescribed number of edges between the communities. For example, if we sample a graph with no edges within communities, we obtain a model for sampling k-colorable graphs. The Kesten-Stigum (KS) threshold is widely conjectured to mark a computational threshold for the problem of community detection for both RSBMs and general (irregular) SBMs. In this work, we apply our hierarchy of SDP relaxations to RSBMs, and prove the following results:

  1. For every ε > 0, there exists m ∈ ℕ such that the m-th level of the SDP hierarchy can detect communities in a dissortative RSBM that is ε-above the KS threshold.
  2. The resulting SDP based algorithm is robust to o(n)-perturbations of the graph.
  3. Below the KS threshold, no constant level of our hierarchy can solve community detection in an RSBM.
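To make the block model concrete, here is a minimal pure-Python sampler for the general (irregular) stochastic block model; `sample_sbm` is a hypothetical helper of mine, and note that the RSBM studied in the talk additionally conditions on a prescribed degree/edge-count structure, which this sketch does not.

```python
import random

def sample_sbm(sizes, p_in, p_out, seed=0):
    """Sample an (irregular) stochastic block model.

    Vertices are split into communities of the given sizes; each
    within-community pair is joined with probability p_in, each
    across-community pair with probability p_out.
    """
    rng = random.Random(seed)
    labels = [c for c, s in enumerate(sizes) for _ in range(s)]
    n = len(labels)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            p = p_in if labels[u] == labels[v] else p_out
            if rng.random() < p:
                edges.append((u, v))
    return labels, edges

# p_in = 0 gives no edges inside communities, i.e. a random k-partite
# (hence k-colorable) graph, mirroring the coloring example above.
labels, edges = sample_sbm([5, 5], 0.0, 0.5)
assert all(labels[u] != labels[v] for u, v in edges)
```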

Rasmus Pagh (Google Research / ITU Copenhagen)

Hardness and Approximation of High-Dimensional Search Problems

The need to perform search in a collection of high-dimensional vectors arises in many areas of computer science including databases, image analysis, information retrieval, and machine learning. In contrast to lower-dimensional settings, we do not know of worst-case efficient data structures for such search problems. In fact, if we make no assumptions on the input, there is no known way of doing significantly better than brute force. The talk surveys recent developments in the theoretical study of high-dimensional search problems, including:

  • Conditional hardness results linking search problems to well-known computationally hard problems such as k-SAT.
  • Upper bounds for approximate high-dimensional search using locality-sensitive maps and filters, and work towards derandomizing these algorithms.
  • Surprising upper bounds in batched settings where there are many simultaneous searches.

Towards the end of the talk I will discuss recent work proving lower bounds for k-Nearest Neighbors in the indexability model that abstracts external-memory data structures.
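One concrete instance of the locality-sensitive techniques mentioned above is Charikar's random-hyperplane family (SimHash) for cosine similarity: hash each vector to the sign pattern of its projections onto random hyperplanes, so that nearby vectors disagree on few bits. A minimal sketch, with function names my own:

```python
import numpy as np

def simhash(vectors, n_planes, seed=0):
    """Locality-sensitive hash for cosine similarity: each vector maps to
    the sign pattern of its projections onto random hyperplanes. The
    expected fraction of differing bits between two vectors is their
    angle divided by pi."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, vectors.shape[1]))
    return (vectors @ planes.T > 0).astype(int)

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
near = x + 0.1 * rng.standard_normal(64)   # small perturbation of x
far = rng.standard_normal(64)              # independent random vector
codes = simhash(np.stack([x, near, far]), n_planes=128)

def hamming(a, b):
    return int(np.sum(a != b))

# Near neighbors disagree on far fewer bits than unrelated points,
# which is what makes bucketing by hash code prune the search space.
assert hamming(codes[0], codes[1]) < hamming(codes[0], codes[2])
```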