New Directions in Derandomization

FOCS 2022 Workshop

In the last three years there has been fast-paced progress in the area of derandomization: upending a classical paradigm, opening new directions of study, and discovering exciting connections with interactive proof systems, cryptography, PCPs, data structures, and more.

The purpose of the workshop is to expose broader circles of researchers and students to these developments, and to strengthen the connections between researchers in the area and the broader community. 

The workshop is structured as a self-contained mini-course that focuses on the new developments. You won't need expert knowledge to follow, just basic complexity theory. Looking forward to seeing you there! 

Lijie Chen and Roei Tell

UPDATE (Feb 2023): The videos from the workshop are now available online! See links below to four videos, corresponding to each of the four sessions.

Monday Oct 31 🎃 

11:00 - 12:00

Background, and the new paradigm of non-black-box derandomization 

Lijie Chen

The classical "hardness vs randomness" line of work showed that, under circuit lower bound assumptions, derandomization with polynomial time overhead is possible. However, the following questions remain unanswered by these classical works:


In this talk, I will first survey some background on the classical hardness vs randomness paradigm. Then I will give a quick overview of how recent works made significant progress on the two questions above. Finally, I will introduce the recent paradigm of non-black-box derandomization, which leads to progress on both questions.


Based on joint works with Roei Tell

Monday Oct 31 🎃 

12:00 - 12:30

Equivalences between derandomization and other major questions in TCS, Part I

Yanyi Liu

A central open problem in complexity theory concerns the question of whether all efficient randomized algorithms can be simulated by efficient deterministic algorithms. In this talk, we show that this question is closely related to another central question in complexity theory - the hardness of approximating conditional Levin-Kolmogorov complexity. Let Kt(x|z) denote the Levin-Kolmogorov complexity of x conditioned on z. We show that for all sufficiently large constants c, the following are equivalent:
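As a reminder of the measure involved (using Levin's standard definition with respect to a fixed universal machine U; the exact parameters in the talk may differ): Kt(x|z) = min over pairs (Π, t) of { |Π| + log(t) : U(Π, z) outputs x within t steps }. In other words, Kt charges a program both for its description length and, logarithmically, for its running time.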


Joint work with Rafael Pass.

Monday Oct 31 🎃 

16:00 - 16:30

Equivalences between derandomization and other major questions in TCS, Part II

Yanyi Liu

The celebrated "hardness vs. randomness" paradigm pioneered by Blum-Micali (SIAM JoC’84), Yao (FOCS’84) and Nisan-Wigderson (JCSS’94) presents hardness assumptions under which prBPP = prP, but to what extent this framework is necessary for derandomization remains wide open. Following the recent work by Chen and Tell (FOCS’21) that considers "almost-all-input" hardness of a function f (i.e., no efficient algorithm computes f correctly on more than finitely many inputs), we consider "almost-all-input" leakage-resilient hardness of a function f, that is, hardness of computing f(x) even given, say, sqrt(|x|) bits of leakage of f(x). We show that leakage-resilient hardness characterizes derandomization of prBPP (i.e., it gives a necessary and sufficient condition for derandomization). In more detail, we show that there exists a constant c such that the following are equivalent:
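To give a rough sense of the hardness notion above (this is only a sketch; details such as the exact leakage bound and error probabilities may differ in the talk): a function f is "almost-all-input" leakage-resilient hard if, for every pair of efficient algorithms (leak, A) in which leak(x, f(x)) outputs at most sqrt(|x|) bits, we have A(x, leak(x, f(x))) = f(x) for at most finitely many inputs x. That is, even sqrt(|x|) bits of advice about f(x) do not help any efficient algorithm compute f on more than finitely many inputs.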


Joint work with Rafael Pass.

Monday Oct 31 🎃 

16:30 - 17:30

Superfast and “free lunch” derandomization of BPP 

Dean Doron

Classical proofs of BPP = P from circuit lower bounds convert randomized algorithms into deterministic ones with a large polynomial slowdown. Namely, a randomized algorithm that on inputs of length n runs in time T ≥ n is converted into a deterministic one running in time T^c, where c is a large constant. This large slowdown is inherent to the classical techniques.
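To see where the slowdown comes from, here is a schematic sketch of the classical pseudorandom-generator-based simulation (the function names and interfaces below are hypothetical, and this illustrates the standard black-box approach rather than the new techniques in the talk). The deterministic simulation enumerates all seeds of a PRG, runs the randomized algorithm with each pseudorandom string as its coins, and takes a majority vote; the running time is roughly 2^{seed length} times the cost of running the algorithm and the generator, and instantiating the generator from circuit lower bounds with seed length O(log T) yields the polynomial overhead T^c.

    # Schematic sketch (hypothetical interfaces): black-box, PRG-based derandomization.
    # With seed length s = O(log T), the loop below runs 2^s times, so the total cost is
    # roughly 2^s * (T + time to compute the PRG), i.e., polynomial in T.

    def derandomize(randomized_alg, x, prg, seed_len, num_coins):
        """Deterministically simulate randomized_alg(x, coins) by majority vote over all seeds."""
        accepting = 0
        for seed in range(2 ** seed_len):         # enumerate all 2^{seed_len} seeds
            coins = prg(seed, num_coins)          # expand the seed into num_coins pseudorandom bits
            if randomized_alg(x, coins):
                accepting += 1
        return 2 * accepting > 2 ** seed_len      # accept iff a majority of seeds lead to acceptance

The point of the sketch is simply that the overhead is governed by the seed length and the cost of computing the generator, which is exactly where the new results improve the picture.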

How small can we take the constant c to be? In the last couple of years, exciting progress has been made in achieving *super-fast* derandomization. Under seemingly stronger hardness assumptions, we can achieve a runtime of T^2 and even T*n. If we relax the worst-case guarantee to average-case derandomization, we can even get arbitrarily close to T! In this talk we will give an overview of these results and highlight some of the new techniques and pseudorandomness primitives involved in achieving them. We will also mention matching lower bounds and barriers.

The talk will be based on works by Doron, Moshkovitz, Oh, and Zuckerman, by Chen and Tell, and by Shaltiel and Viola.

Tuesday Nov 1 

11:00 - 12:00

Superfast and “free lunch” derandomization of interactive proof systems 

Roei Tell

The lifeblood of interactive proof systems is randomness, without which interaction becomes redundant. Thus, a natural long-standing question is which types of proof systems can indeed be derandomized and collapsed to a single-message NP-type protocol. Classical results tell us that, under strong circuit lower bounds, we can do so for proof systems with constantly many rounds of interaction, while incurring a polynomial time overhead (i.e., the deterministic verifier is polynomially slower than the original verifier in the proof system). 

We will see how derandomization of such proof systems can be achieved with optimal time overhead, under strong hardness assumptions. Moreover, we'll show that we can do so with essentially no time overhead at all, if one is willing to compromise on obtaining a system that is secure only against inputs and proofs that can be found efficiently. Along the way, we will introduce a new and natural model of computation, namely argument systems that are secure "on average".

As a corollary, under strong but plausible assumptions, there exists an argument system for #SAT in which the verifier runs in time 2^{εn}, for an arbitrarily small constant ε > 0, and which is secure against adversaries running in exponential time 2^{O(n)}. (A common conjecture is that there is no verifier with such a running time that is secure against all-powerful adversaries.)

The talk will be based on a joint work with Lijie Chen.

Tuesday Nov 1 

12:00 - 12:30

Non-black-box derandomization of Arthur-Merlin protocols 

Nicollas Sdroievski 

A fundamental question in computational complexity asks whether probabilistic polynomial-time algorithms can be simulated deterministically with a small time overhead (the BPP vs. P problem). A corresponding question in the realm of interactive proofs asks whether Arthur-Merlin protocols can be simulated nondeterministically with a small time overhead (the AM vs. NP problem). In both settings, a definite answer still seems like a distant goal, but a long line of work has developed a range of reasonable hardness assumptions and shown that they are sufficient for derandomization, and in some cases equivalent to derandomization. Depending on the strength of the hardness assumption, the efficiency of the derandomization scales from subexponential (low-end) down to polynomial (high-end).  

Until recently, results in the area either used non-uniform hardness conditions (such as EXP not in P/poly) to obtain simulations that work correctly on all inputs of a given length, or else uniform hardness assumptions (such as EXP not in BPP) to obtain simulations that work correctly on most inputs of a given length. Recent progress on the BPP vs. P problem has introduced a novel technique that enables the hardness-derandomization connection to work on an input-by-input basis using uniform hardness assumptions. We develop a similar technique for the AM vs. NP problem and present results along the following line: To derandomize an AM protocol on a particular input x, it suffices for a certain uniform hardness condition to hold on input x. In particular, to obtain derandomizations that work on all inputs, it suffices for the uniform hardness condition to hold on all inputs. We also show that a hardness assumption of the latter type is necessary for a high-end derandomization of AM.

Tuesday Nov 1 

16:00 - 16:30

The Range Avoidance Problem, Part I

Oliver Korten

In this talk we survey some recent work on a family of total search problems related to the weak pigeonhole principle. Chief among these problems is the "Range Avoidance problem": given a circuit mapping n bits to n+1 bits, find some (n+1)-bit string outside its range. We will see that Range Avoidance allows us to tightly characterize the complexity of constructing hard Boolean functions, and more broadly captures the complexity of derandomizing constructions that use the probabilistic method. In addition, we will see that one "cousin" of the Range Avoidance problem admits a new kind of hardness-randomness connection, and another of its "cousins" yields a new characterization of promise-BPP.
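For concreteness, here is a toy, exponential-time illustration of the problem (the names below are hypothetical, and this is not one of the algorithms from the talk): a circuit mapping n bits to n+1 bits can hit at most 2^n of the 2^{n+1} possible outputs, so a string outside its range always exists by the pigeonhole principle, and brute force finds one. The question studied in this line of work is how efficiently such a string can be found.

    # Toy, exponential-time illustration of Range Avoidance (hypothetical interface).
    # C is any function from n-bit tuples to (n+1)-bit tuples; by pigeonhole, some
    # (n+1)-bit string lies outside its range, and brute-force search finds one.

    from itertools import product

    def brute_force_avoid(C, n):
        """Return an (n+1)-bit tuple that is not in the range of C."""
        image = {C(x) for x in product((0, 1), repeat=n)}   # at most 2^n distinct outputs
        for y in product((0, 1), repeat=n + 1):              # 2^{n+1} candidate strings
            if y not in image:
                return y                                      # guaranteed to exist

    # Example: C appends a parity bit to its input.
    parity_pad = lambda x: x + (sum(x) % 2,)
    print(brute_force_avoid(parity_pad, 3))                   # e.g., (0, 0, 0, 1)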

Tuesday Nov 1 

16:30 - 17:00

The Range Avoidance Problem, Part II

Hanlin Ren

Proving circuit lower bounds is an important and difficult special case of the range avoidance problem. Thus, it is no surprise that progress on circuit lower bounds also leads to progress on the range avoidance problem. In this talk, I'll briefly discuss how recent advances in circuit lower bounds, in particular, Williams's "Algorithmic Method", lead to similar advances in the range avoidance problem. We hope these techniques inspire more algorithmic results on the range avoidance problem.

Tuesday Nov 1 

17:00 - 17:30

Open Problems Session!

Lijie & Roei

From grand open problems to potentially tractable leads, this session will present interesting directions for anyone who's curious about the new ways of studying derandomization. Join the party!