Robert Andrews: On Matrix Multiplication and Polynomial Identity Testing
Abstract: Determining the complexity of matrix multiplication is a fundamental problem of theoretical computer science. It is popularly conjectured that matrices can be multiplied in nearly-linear time. If true, this conjecture would yield similarly-fast algorithms for a wide array of problems in linear algebra and beyond. However, if such near-linear time algorithms for matrix multiplication do not exist, is it possible to leverage the hardness of multiplying matrices to design algorithms for other problems? In this talk, I will describe how lower bounds on the complexity of matrix multiplication can be used to design a faster deterministic algorithm to test if two polynomials, encoded as algebraic circuits, are equal.
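The deterministic algorithms discussed in the talk aim to derandomize the classical randomized identity test based on the Schwartz-Zippel lemma. A minimal Python sketch of that randomized baseline, assuming the circuits are given as black-box callables over a prime field (the function name `probably_equal` and the parameters are illustrative choices, not from the talk):

```python
import random

def probably_equal(f, g, n_vars, degree_bound, trials=20, p=2**61 - 1):
    """Randomized polynomial identity test (Schwartz-Zippel).

    f, g: n_vars-variate polynomials given as black boxes over Z_p.
    If f != g, a single trial catches the difference with probability
    at least 1 - degree_bound / p.
    """
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(n_vars)]
        if f(*point) % p != g(*point) % p:
            return False  # witnessed a point where f and g differ
    return True  # equal with high probability

# (x + y)^2 and x^2 + 2xy + y^2 are the same polynomial ...
same = probably_equal(lambda x, y: (x + y) ** 2,
                      lambda x, y: x * x + 2 * x * y + y * y,
                      n_vars=2, degree_bound=2)
# ... while (x + y)^2 and x^2 + y^2 are not.
diff = probably_equal(lambda x, y: (x + y) ** 2,
                      lambda x, y: x * x + y * y,
                      n_vars=2, degree_bound=2)
```

The open problem behind the talk is to achieve the same guarantee without randomness.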
Vishwas Bhargava: Fast multivariate multipoint evaluation
Abstract: Multipoint evaluation is the computational task of evaluating a polynomial given as a list of coefficients at a given set of inputs. A straightforward algorithm for this problem is to just iteratively evaluate the polynomial at each of the inputs. The question of obtaining faster-than-naive (and ideally, close to linear time) algorithms for this problem is a natural and fundamental question in computational algebra. In addition to its own inherent interest, faster algorithms for multipoint evaluation are closely related to fast algorithms for other natural algebraic questions like polynomial factorization and modular composition.
In this talk, I will briefly survey the state of the art for this problem, and delve into recent advancements and underlying techniques.
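The "straightforward algorithm" from the abstract is easy to make concrete. A minimal Python sketch of the naive baseline in the univariate special case (the function name and the choice of modulus are illustrative assumptions):

```python
def evaluate_naive(coeffs, points, p):
    """Straightforward multipoint evaluation over Z_p.

    coeffs: coefficients c_0, ..., c_d of a univariate polynomial
    (the univariate special case of the problem in the abstract).
    Each of the m points costs O(d) operations via Horner's rule,
    so the total cost is O(m * d) -- the naive bound that fast
    multipoint evaluation algorithms improve on.
    """
    results = []
    for x in points:
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % p
        results.append(acc)
    return results

# f(x) = 1 + 2x + 3x^2 evaluated at 0, 1, 2 over Z_101
vals = evaluate_naive([1, 2, 3], [0, 1, 2], p=101)  # [1, 6, 17]
```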
Cornelius Brand: Algebraic Methods in Algorithms
Abstract: Recent years have seen a surge in algebraic methods for the design of faster algorithms for computationally intractable problems, such as matroid intersection, finding long simple cycles in graphs, and certain network design tasks. The algebraic tools employed to this end include e.g. group algebras, (tensor products of) exterior algebras, apolar algebras and Waring decompositions.
In this talk, I will survey these developments, outline connections both within algebra as well as to established combinatorial techniques (e.g. Color-Coding or representative families), and highlight some central open questions concerning the abovementioned algebraic protagonists.
Prerona Chatterjee: Monotone Classes Beyond VNP
Abstract: In this talk, we will look at the natural monotone analogues of various equivalent definitions of VPSPACE: a well-studied class (Poizat 2008, Koiran & Perifel 2009, Malod 2011, Mahajan & Rao 2013) that is believed to be larger than VNP. We will see that, unlike their non-monotone counterparts, these monotone analogues are not equivalent.
We will define monotone VPSPACE (mVPSPACE) to be the monotone analogue of Poizat’s definition since it satisfies several desirable closure properties that the other analogues may not. With this definition, we will show that the permanent is contained in mVPSPACE which allows us to show that this class is exponentially stronger than mVNP.
This talk is based on joint work with Kshitij Gajjar and Anamay Tengse (http://arxiv.org/abs/2202.13103).
Radu Curticapean: An algebraic perspective on homomorphism counts
Abstract: Graph homomorphism counts are prominent in counting complexity, e.g., in the study of partition functions, or in the context of counting occurrences of small patterns in large graphs. In the latter setting, it has been observed that many interesting pattern counts are in fact linear combinations of homomorphism counts from some fixed set of graphs. Algebraic properties of homomorphism counts can then be used to isolate any hard term from this linear combination, thus proving hardness of the overall linear combination. We survey these techniques and show possible applications in algebraic complexity.
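As a concrete reference point, homomorphism counts themselves have a simple operational definition. A brute-force Python sketch (the function name and the graph encoding as adjacency dictionaries are illustrative assumptions):

```python
from itertools import product

def hom_count(H_edges, H_vertices, G_adj):
    """Count graph homomorphisms from a pattern H into a host G by
    brute force: try every map V(H) -> V(G) and keep the
    edge-preserving ones.  Runs in O(|V(G)|^|V(H)|) time, which is
    fine for tiny patterns only."""
    G_vertices = list(G_adj)
    count = 0
    for image in product(G_vertices, repeat=len(H_vertices)):
        phi = dict(zip(H_vertices, image))
        if all(phi[v] in G_adj[phi[u]] for u, v in H_edges):
            count += 1
    return count

# Homomorphisms from a single edge into a triangle: every ordered
# pair of distinct triangle vertices, i.e. 6 of them.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
n = hom_count([(0, 1)], [0, 1], triangle)  # 6
```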
Prateek Dwivedi: Deterministic identity testing paradigms for bounded top-fanin depth 4 circuits
Abstract: Deterministically solving the Polynomial Identity Testing (PIT) problem is a fundamental computational problem, and it is of pivotal importance in Algebraic Complexity Theory. This intrinsically difficult problem has found its way into many areas of Computer Science. In this talk, we will introduce the problem from the perspective of Algebraic Complexity Theory and discuss its motivations and connections.
Despite decades of effort, there has been little progress on the problem in general. However, under strong restrictions on the circuits, very efficient algorithms are already known. One such well-motivated restriction we consider is to depth-4 circuits. Special cases of these circuits are a great source of inspiration for designing PIT algorithms; however, the inapplicability of existing ideas led us to examine the problem from a different viewpoint. In the talk, we will share the ideas that helped us design efficient algorithms, and we will exhibit our newly developed paradigm for designing PIT algorithms.
The results in this talk are a joint work with Pranjal Dutta (CMI & IIT Kanpur) and Nitin Saxena (IIT Kanpur). They appeared in CCC 2021.
Venkatesan Guruswami: Maximally Recoverable Codes from Skew Polynomials, or "How Non-Commutativity Helps Data Centers"
Abstract: Locally Repairable Codes (LRCs) allow for recovery from a small number of erasures in a local manner based on just a few other codeword symbols. A maximally recoverable (MR) LRC offers the best possible blend of locality and global erasure resilience, guaranteeing recovery from all erasure patterns which are information-theoretically correctable given the presence of local repair groups. This makes them attractive for use in distributed storage systems and indeed they have been deployed in certain parameter regimes.
Random constructions can easily show the existence of MR LRCs over very large fields, but a major algebraic challenge is to construct MR LRCs, or even show their existence, over smaller fields, as well as to understand inherent lower bounds on their field size. We will discuss a framework based on skew polynomials (a non-commutative analog of polynomial rings that dates back to Ore, 1933) that yields MR LRCs over the smallest known alphabets in many practically relevant parameter regimes, including matching a lower bound in an interesting case.
The talk will introduce maximally recoverable codes, and describe skew polynomials and non-singular matrices constructed using them which lead to good MR LRCs. Time permitting, we will sketch how algebraic results underlying list decoding folded Reed-Solomon and multiplicity codes can be captured in a unified way within this theory.
Based on joint work with Sivakanth Gopi.
Christian Ikenmeyer: Kronecker coefficients: Recent progress in algebraic combinatorics via computational complexity theory
Abstract: Proving algebraic complexity lower bounds for border complexity is equivalent to proving the existence of certain meta-polynomials. Studying the representation theoretic structure of such meta-polynomials, as proposed in the Mulmuley-Sohoni geometric complexity theory approach, leads to classical questions in algebraic combinatorics, namely problems 9 and 10 in Stanley's 2000 list of Positivity Problems and Conjectures in Algebraic Combinatorics. In this talk we focus on problem 10: finding a combinatorial interpretation of the Kronecker coefficients. A fruitful approach to this question is its formalization via the counting complexity class #P. We rule out a promising approach to problem 10 via reduced Kronecker coefficients (arXiv:2305.03003), we discuss general #P non-membership techniques that have connections to algebraic geometry (arXiv:2204.13149, FOCS 2022), we apply these techniques to the characters of the symmetric group (arXiv:2207.05423, SODA 2023), and we show that more refined arguments are needed to resolve problem 10 (arXiv:2307.02389). The argument in 2307.02389 works by constructing a quantum algorithm that proves that the Kronecker coefficient computation is in the "quantum counting" class #BQP.
No prior knowledge in representation theory is required for the talk.
This is joint work with Igor Pak, Greta Panova, and Sathyawageeswar Subramanian.
Neeraj Kayal: Applications of Algebraic Complexity to Unsupervised Learning
Abstract: We formulate an abstract computational problem called Robust Vector Space Decomposition and give an efficient algorithm for it. This abstract problem provides a single framework that captures a wide variety of learning problems, including the well-studied problems of learning mixtures of Gaussians, tensor decomposition, subspace clustering, and robust reconstruction of various classes of arithmetic formulas.
The problem is formally described as follows: Let U = U_1 ⊕ · · · ⊕ U_r and V = V_1 ⊕ · · · ⊕ V_r be vector spaces, and suppose B is a space of linear operators from U to V such that each U_i is mapped to V_i under the action of B. Then, given the spaces U, V, and B approximately, the goal is to find (up to reordering) the vector spaces U_1, . . . , U_r, V_1, . . . , V_r approximately.
We will see that the linear operators used to prove lower bounds for arithmetic formulas are also the ones that can be used to reduce the above application problems to vector space decomposition.
The provable guarantees that we obtain on the goodness of our approximation rely on the smallest non-zero singular values of certain linear operators being not too small. We conjecture that this is indeed the case for smoothed instances of our applications; if true, the conjecture would imply that these problems admit efficient algorithms in the smoothed setting.
Swastik Kopparty: The Elliptic Curve Fast Fourier Transform (ECFFT)
Abstract: This is based on the papers ECFFT I (Fast algorithms for polynomials over all fields) and ECFFT II (Scalable and Transparent proofs over all large fields), both joint work with Eli Ben-Sasson, Dan Carmon, and David Levit.
I will talk about a variant (the ECFFT) of the FFT which is based on elliptic-curve groups in place of multiplicative groups. While the classical FFT over finite fields is directly applicable only when the size of the multiplicative group of the field is special, the ECFFT turns out to be directly applicable over all finite fields (because all finite fields have *some* elliptic curve group whose size is special).
We then use the ECFFT in place of the FFT for applications in fast polynomial algorithms and interactive property testing.
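The "special size" condition mentioned above is simple to state concretely: a radix-2 FFT of length n over F_p needs an element of multiplicative order n, which exists iff n divides p - 1. A small illustrative check (the helper name and the example primes are my choices, not from the talk):

```python
def has_radix2_fft(p, n):
    """The classical FFT of length n (a power of two) over the field
    F_p needs an element of multiplicative order n, which exists iff
    n divides p - 1 -- the "special size" condition that the ECFFT
    removes by working in an elliptic-curve group instead."""
    return (p - 1) % n == 0

# The "Goldilocks" prime 2^64 - 2^32 + 1 supports huge radix-2 FFTs,
# since p - 1 = 2^32 * (2^32 - 1) ...
ok = has_radix2_fft(2**64 - 2**32 + 1, 2**32)
# ... while a generic prime like 1000003 does not even support length 4.
bad = has_radix2_fft(1000003, 4)
```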
Deepanshu Kush: Near-Optimal Lower Bounds for Set-Multilinear Formulas
Abstract: The seminal work of Raz (J. ACM 2013) as well as the recent breakthrough results by Limaye, Srinivasan, and Tavenas (FOCS 2021, STOC 2022) have demonstrated a potential avenue for obtaining lower bounds for general algebraic formulas, via strong enough lower bounds for set-multilinear formulas.
In this talk, along with discussing the impact of these works and several others in the literature, I will discuss some new results in this area, namely strong and near-optimal lower bounds for set-multilinear circuits/formulas in the constant (or low) depth setting, as well as in the unbounded depth setting.
This is based on joint work with Shubhangi Saraf.
Partha Mukhopadhyay: The noncommutative Edmonds' problem re-visited
Abstract: Let T be a matrix whose entries are linear forms over noncommutative variables. The noncommutative Edmonds' problem is to determine whether T is invertible in the free skew field. Currently, there are three different deterministic polynomial-time algorithms known to solve this problem: using operator scaling (Garg-Gurvits-Oliveira-Wigderson, 2016), algebraic methods (Ivanyos-Qiao-Subrahmanyam, 2018), and convex optimization (Hamada-Hirai, 2021).
We will discuss a relatively simpler algorithm based on the work of Ivanyos-Qiao-Subrahmanyam (2018). The simplification comes from a connection to the polynomial identity testing (PIT) of noncommutative algebraic branching programs (ABPs) (Raz-Shpilka, 2005). This is joint work with Abhranil Chatterjee.
Noga Ron-Zewi: Highly-efficient local proofs
Abstract: The celebrated PCP theorem from the 90's shows that any mathematical proof can be encoded in such a way that its correctness can be verified locally by reading only a tiny number of bits from the encoding. A fundamental question that has drawn a great amount of interest is what is the minimal overhead in encoding that is needed to allow for such highly efficient local verification. While the original PCP theorem only guarantees a polynomial overhead, a beautiful line of work has culminated in remarkably short encodings with only a poly-logarithmic overhead.
Motivated by cryptographic applications, we study a relatively new interactive variant of PCPs, called Interactive Oracle Proofs, and show that for this model the overhead in the encoding can be made arbitrarily small (approaching 1), and moreover, the prover complexity overhead can be made constant.
The improved efficiency was obtained by replacing polynomial-based codes, commonly used in such proof systems, with more efficient (tensor-based) codes. In particular, these constructions bypassed a barrier imposed by the need to encode the computation using a multiplication code.
Based on joint work with Ron Rothblum.
Chandan Saha: An equivalence test for design polynomials
Abstract: A homogeneous polynomial g is a t-design polynomial if the degree of the gcd of any pair of monomials of g is at most t-1. The power symmetric polynomial and the sum-product polynomial are instances of design polynomials for t=1. Another example is the Nisan-Wigderson design polynomial. Given black-box access to an n-variate, degree d homogeneous polynomial f(x), how fast can we check if there exist an invertible n x n matrix A and a vector b such that f(Ax+b) is a t-design polynomial? We call this problem "equivalence test for design polynomials".
In the talk, we will present a randomized algorithm that finds (A, b) such that f(Ax+b) is a t-design polynomial, if such A and b exist, provided t < d/3. The algorithm runs in time poly(n^t, d) and works over any sufficiently large field of characteristic 0 or > d. As a corollary, we obtain that equivalence test for "random" sparse polynomials can be solved in poly-time. This further implies that using random sparse polynomials to implement Patarin's authentication scheme makes the scheme vulnerable to attacks.
(Joint work with Omkar Baraskar and Agrim Dewan.)
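The t-design condition in the abstract can be checked directly from the list of monomials. A small Python sketch, encoding monomials as exponent tuples (the helper name is an illustrative assumption):

```python
from itertools import combinations

def is_t_design(monomials, t):
    """Check the t-design condition from the abstract: for every pair
    of monomials, the gcd monomial -- the componentwise minimum of the
    exponent vectors -- has degree at most t - 1.
    A monomial is an exponent tuple, e.g. x^2*y over (x, y, z) is (2, 1, 0)."""
    for m1, m2 in combinations(monomials, 2):
        gcd_deg = sum(min(a, b) for a, b in zip(m1, m2))
        if gcd_deg > t - 1:
            return False
    return True

# Power symmetric polynomial x^3 + y^3 + z^3: every pairwise gcd is 1,
# i.e. has degree 0, so it is a 1-design polynomial.
power_sym = [(3, 0, 0), (0, 3, 0), (0, 0, 3)]
ok = is_t_design(power_sym, t=1)  # True

# x^2*y + x*y^2 shares the factor x*y, so it is not a 1-design.
not_design = is_t_design([(2, 1), (1, 2)], t=1)  # False
```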
Shubhangi Saraf: Linear Independence, Alternants, and Applications
Abstract: In this talk, I will discuss a method for analyzing the linear independence of multivariate polynomials and then use it to derive some results for polynomial identity testing and arithmetic circuit reconstruction. At the core of our method is a "Small Witness for Linear Independence" lemma, which states the following. Suppose we are given k polynomials in n variables over some base field F, such that the polynomials are linearly independent over F. Then one can set all but k-1 of the variables to random field constants, such that the resulting polynomials continue to be linearly independent with high probability. We will see how to effectively combine this lemma with the use of the Alternant matrix to obtain our results. The talk is based on joint work with Vishwas Bhargava and Ilya Volkovich.
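One direction of the lemma's flavor can be illustrated concretely: if k black-box polynomials are linearly independent, then their k x k matrix of evaluations at random points has full rank with high probability (and a rank-deficient matrix always certifies dependence). A self-contained Python sketch over a prime field (helper names are illustrative; this is the standard randomized check, not the lemma itself):

```python
import random

def rank_mod_p(matrix, p):
    """Rank of a matrix over Z_p via Gaussian elimination (p prime)."""
    m = [row[:] for row in matrix]
    rank = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((r for r in range(rank, len(m)) if m[r][col] % p), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], p - 2, p)  # modular inverse via Fermat
        m[rank] = [(x * inv) % p for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col] % p:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def independent_whp(polys, n_vars, p=10**9 + 7):
    """Test linear independence of k black-box polynomials by
    evaluating them at k random points: independence implies the
    evaluation matrix is full rank with high probability, while a
    linear dependence makes its columns dependent at every point."""
    k = len(polys)
    points = [[random.randrange(p) for _ in range(n_vars)] for _ in range(k)]
    mat = [[f(*pt) % p for f in polys] for pt in points]
    return rank_mod_p(mat, p) == k

# x, y, x + y are dependent; x, y, x*y are independent.
dep = independent_whp([lambda x, y: x, lambda x, y: y,
                       lambda x, y: x + y], n_vars=2)   # False
ind = independent_whp([lambda x, y: x, lambda x, y: y,
                       lambda x, y: x * y], n_vars=2)   # True
```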
Nitin Saxena: Demystifying the border of depth-3 algebraic circuits
Abstract: Border (or approximative) complexity of polynomials plays an integral role in the GCT approach to P != NP. This raises an important open question: can a border circuit be *efficiently* debordered (i.e., converted from approximative to exact)? Or could the approximation involve exponential precision, which may not be efficiently simulable? Circuits of depth 3 or 4 are a good testing ground for this question.
Kumar (ToCT'20) proved the universal power of the border of top-fanin-2 depth-3 circuits. We recently solved some of the related open questions. In this talk, we will outline our result: the border of bounded-top-fanin depth-3 circuits is relatively easy -- it can be computed by a polynomial-size algebraic branching program (ABP). Our de-bordering paradigm has many applications, especially in identity testing and lower bounds.
Based on the works with Prateek Dwivedi & Pranjal Dutta (CCC 2021) + (FOCS 2021, invited to SICOMP) + (FOCS 2022). [https://www.cse.iitk.ac.in/users/nitin/research.html]
Madhu Sudan: Low-Degree Testing
Abstract: Low-degree testing is the task of testing if a multivariate function given as a black box is (close to) a polynomial of low degree in its input. The simplest case of this problem (for homogeneous degree-one polynomials) is the famed linearity testing problem of Blum, Luby and Rubinfeld, which already opened the door to a rich store of applications and tools. Extensions to higher degrees have similarly led to many applications (constructions of PCPs, small set expanders, algebraic XOR lemmas) and many tools. In this talk I will try to survey some of the flavors of this problem and reflect on some of the tools and applications.
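The Blum-Luby-Rubinfeld linearity test mentioned above is simple enough to sketch in full. A minimal Python version over GF(2), encoding inputs as bitmasks (the function names and trial count are illustrative choices):

```python
import random

def blr_linearity_test(f, n, trials=100):
    """Blum-Luby-Rubinfeld linearity test: pick random x, y in {0,1}^n
    and check f(x) + f(y) = f(x + y) over GF(2) (XOR on bitmasks).
    A function far from every linear function fails some trial with
    high probability, using only 3 queries to f per trial."""
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

# Parity of a fixed subset of the bits is linear over GF(2) ...
linear = blr_linearity_test(lambda x: bin(x & 0b1011).count("1") % 2, n=4)
# ... while the AND of two bits is far from linear.
nonlin = blr_linearity_test(lambda x: (x & 1) & ((x >> 1) & 1), n=4)
```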
Anamay Tengse: Annihilators of Polynomial Maps
Abstract: Suppose g_1, g_2, ..., g_n are m-variate polynomials of degree d. They naturally define a `polynomial map' from the m-dimensional space to the n-dimensional space. When m<n, there is always some nonzero n-variate `annihilator' polynomial A that vanishes on the entire image of this map: A(g_1, ..., g_n) is identically zero. As the gap between m and n grows larger, so does the space of annihilators. Thus, given a polynomial map G with m<<n, it is natural to ask what `the simplest' annihilator for G is.
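For a tiny concrete instance of this setup, take m = 1 and n = 3. A Python sketch that spot-checks annihilators on sample inputs (names are illustrative; a finite check is evidence of vanishing, not a proof):

```python
def annihilates(A, polys, samples):
    """Check that A vanishes on the image of the polynomial map
    x -> (g_1(x), ..., g_n(x)) at each of the given sample inputs."""
    return all(A(*[g(*x) for g in polys]) == 0 for x in samples)

# The map x -> (g_1, g_2, g_3) = (x, x^2, x^3) goes from m = 1
# variable to n = 3 coordinates, so annihilators must exist; two of
# them are A(y1, y2, y3) = y1^2 - y2 and A(y1, y2, y3) = y1*y2 - y3.
g = [lambda x: x, lambda x: x * x, lambda x: x ** 3]
pts = [(v,) for v in range(-5, 6)]
ok1 = annihilates(lambda y1, y2, y3: y1 * y1 - y2, g, pts)   # True
ok2 = annihilates(lambda y1, y2, y3: y1 * y2 - y3, g, pts)   # True
```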
As Hitting Set Generators (HSGs) are polynomial maps of this kind, the question is closely connected to PIT, and other hardness-randomness questions. Further, `algebraic natural proofs' can be seen as annihilators of (succinct) polynomial maps in a fairly natural way.
In this talk, we will first see why obtaining strong upper bounds (say VP, VNP) on annihilators of polynomial maps is a tricky problem. We will then see a weaker (yet not-so-trivial) upper bound on any `VP-explicit' map in terms of a well-studied class called VPSPACE, and look at its consequences on algebraic natural proofs and hardness-randomness connections.
This is a joint work with Prerona Chatterjee (Tel Aviv University).
Iddo Tzameret: The Algebraic Circuit-Based Approach to Proof Complexity: Where Are We and Where Are We Going?
Abstract: Proof complexity stands as one of the main approaches to the fundamental hardness problems in complexity, together with Boolean circuit complexity and algebraic complexity. Recent years have seen an effort to bridge the gap between algebraic and proof complexity, using a somewhat transparent reduction from algebraic circuit-size to proof-size lower bounds.
I shall discuss state-of-the-art proof complexity lower bounds using the algebraic circuit-based approach, which position this direction as a new general tool in proof complexity, alongside the few existing tools (feasible interpolation, random restrictions and width-size tradeoffs). While other lower bound tools in proof complexity suffer from formal or empirical barriers when targeting *strong* proof systems, the algebraic circuit-based approach has no such known barriers, and I will discuss some imminent open problems together with possible obstacles.
Ben Lee Volk: Extractors for Algebraic Sources
Abstract: Randomness extractors are tools for converting "low quality" randomness into "high quality" randomness. In addition to being useful in the areas of pseudorandomness and derandomization, these objects are also related to various fundamental notions in complexity theory and mathematics in general. In this talk we'll consider the randomness extraction problem from distributions with algebraic structure. We'll discuss the different types of algebraic sources and constructions, mention connections to algebraic complexity theory, and present a recent construction of extractors for polynomial images of varieties.
(Joint work with Zeyu Guo, Akhil Jalan and David Zuckerman)