Research

I study the theoretical aspects of computer science, and in particular complexity theory.

Theoretical computer science (TCS) is a scientific discipline born in the 20th century, with the goal of laying theoretical foundations for the computer-based technological revolution. The discipline studies questions that, following this revolution, now stand at the forefront of science. For example: How efficiently can problems be solved? What constraints do limited resources impose? And can things that appear real substitute for the real thing? The methodology in TCS is to study these questions in full generality, and with mathematical rigor.

Complexity theory is a core field within TCS, which focuses on the second question above - delineating the capabilities of algorithms whose resources are limited. As phrased by Oded Goldreich, this is a universal human question, since time and other resources are always in short supply. The field asks which tasks can be performed when we limit algorithms' time, memory, communication, randomness, and so on. In other words, instead of focusing on the complexity of a specific problem, the focus is on the effect of a resource constraint on the class of problems that can be solved. A hallmark question of complexity theory is the P vs NP problem.

My research tends to branch out of questions related to the role of randomness in computation. More specifically, I look at computer science through the lens of pseudorandomness and derandomization: the first is the challenge of efficiently generating objects that (provably) look random to every algorithm from certain classes of algorithms; and the second is the challenge of efficiently simulating randomized algorithms by deterministic algorithms. This perspective naturally leads to studying a broad array of theoretical areas, including circuit complexity, error-correcting codes, interactive proofs, cryptography, and more.
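For concreteness, here is one standard textbook formalization of the first notion, included only as background and not specific to any particular work: a function G mapping short seeds to longer strings is a pseudorandom generator for a class of distinguishers if no distinguisher in the class can tell its output apart from truly random bits, up to a small error.

```latex
% A function G : {0,1}^s -> {0,1}^n is an \varepsilon-pseudorandom generator
% for a class \mathcal{D} of distinguishers if, for every D \in \mathcal{D},
\Big|\ \Pr_{x \sim \{0,1\}^{s}}\big[D(G(x)) = 1\big]
   \;-\; \Pr_{y \sim \{0,1\}^{n}}\big[D(y) = 1\big]\ \Big| \;\le\; \varepsilon .
```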

The texts below describe some long-term projects I work on.

Randomness in computation

"The marriage of randomness and computation has been one of the most fertile ideas in computer science" (Avi Wigderson, Mathematics and Computation, 2019)

It is usually surprising for people outside computer science to learn that making random choices can be useful for algorithms. A helpful illustrative example due to David Zuckerman is sautéing onions: When doing so, people do not carefully plan a deterministic process of flipping onions in the pan; instead, they just randomly toss onion pieces around, hoping that in the end most pieces will be cooked more or less the same. A more general point is that objects chosen via random processes (such as random graphs, functions, or strings) tend to have properties that are useful for algorithms.

One way to think of it is by looking at the thought experiment of monkeys and typewriters differently:

We were wrong about the monkeys - they usually type Shakespeare.
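To make the usefulness of random choices concrete, here is a minimal sketch of one classical example that is not discussed above: Freivalds' randomized verification of matrix products, which checks whether A·B = C using only matrix-vector products. The function names and parameters are illustrative.

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Randomized check of whether A @ B == C for n-by-n integer matrices.

    Each trial picks a random 0/1 vector r and compares A(Br) with Cr,
    using only O(n^2) arithmetic per trial (no full matrix multiplication).
    If A @ B != C, a single trial detects the mismatch with probability
    at least 1/2, so `trials` repetitions err with probability <= 2**(-trials).
    """
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # definitely A @ B != C
    return True  # A @ B == C, except with tiny probability

# Example: a correct product passes, an incorrect one is (very likely) caught.
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(freivalds_check(A, B, [[2, 1], [4, 3]]))  # True
print(freivalds_check(A, B, [[2, 1], [4, 4]]))  # False (with high probability)
```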

A fundamental challenge in computer science and mathematics is to efficiently and deterministically construct objects with properties that are as good as the properties of random objects. And an even more fundamental and general challenge, which is explained below, is to efficiently simulate the random choices made by broad classes of probabilistic algorithms (indeed, algorithms that choose random objects are a special case).
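A classical illustration of this gap, due to Erdős and stated here only as background: a uniformly random graph on n vertices almost surely has no clique and no independent set of size about 2·log₂(n), yet deterministically and efficiently constructing such "Ramsey graphs" with comparable parameters remains a well-known open problem.

```latex
% A uniformly random graph is almost surely "Ramsey" (Erdős, 1947): writing
% \omega(G) and \alpha(G) for the sizes of the largest clique and the largest
% independent set in G,
\Pr_{G \sim G(n,\,1/2)}\Big[\ \omega(G) < 2\log_2 n \ \text{ and } \ \alpha(G) < 2\log_2 n\ \Big] \;=\; 1 - o(1) .
```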

Can we simulate randomness for algorithms?

The famous BPP = P conjecture asserts that a broad class of probabilistic algorithms can all be derandomized - simulated deterministically - without too much overhead. Specifically, it asserts that for any probabilistic polynomial-time algorithm that solves a decision problem (i.e., answers yes/no) or a verifiable search problem (i.e., solutions can be efficiently verified), we can deterministically generate pseudorandom choices that can be used to run the algorithm, with only a polynomial runtime overhead. Related conjectures refer to simulating randomness for algorithms that use little memory (e.g., the BPL = L question); and to simulating randomness for algorithms that run in parallel (e.g., the question of CAPP for NC1).
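As a toy sketch of the mechanism behind such statements (all names below are hypothetical placeholders, not an actual construction): if we had a pseudorandom generator with logarithmic seed length that fools a given randomized algorithm, we could derandomize the algorithm by enumerating all seeds and taking a majority vote, paying only a polynomial overhead.

```python
import itertools

def derandomize_by_seed_enumeration(randomized_alg, x, prg, seed_len):
    """Toy derandomization via enumeration of a pseudorandom generator's seeds.

    Hypothetical assumptions: `randomized_alg(x, coins)` is a randomized
    decision procedure, and `prg(seed)` stretches a short seed into coins that
    fool it, i.e., the acceptance probability over prg outputs is close to the
    acceptance probability over truly random coins. Running the algorithm on
    every one of the 2**seed_len seeds and taking a majority vote is
    deterministic; if seed_len = O(log |x|), the overhead over the randomized
    running time is only polynomial.
    """
    accept = 0
    total = 0
    for seed in itertools.product((0, 1), repeat=seed_len):
        coins = prg(seed)                  # pseudorandom coins for this run
        accept += 1 if randomized_alg(x, coins) else 0
        total += 1
    return 2 * accept > total              # majority vote over all seeds
```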

These questions are central to complexity theory, and they are inherently related to many other central questions in TCS. My research has introduced new approaches for studying these questions, focusing on the following directions:

For some example contributions, see this survey and the works [Chen-Tell 21], [Chen-Tell-Williams 23], and [Doron-Tell 23].

Pseudorandomness, interactive proof systems, and cryptography

Derandomization research is inherently connected to areas in TCS and in mathematics, such as algorithms, lower bounds, circuit complexity, cryptography, interactive proof systems, error-correcting codes, learning algorithms, randomness extractors, meta-complexity, and more.

I find it particularly interesting that derandomization is closely connected to cryptography and to interactive proof systems. This seems a priori surprising, since both cryptography and interactive proof systems are focused on algorithms that (provably) must use randomness. However, the cross-influence of ideas, techniques, and results between the areas has been productive for decades, with especially useful connections discovered in recent years.

Some questions in this context that I am currently interested in include:

For some example contributions, see the works [Chen-Tell 21b], [Chen-Tell 23], and [Oliveira-Santhanam-Tell 19].

Algorithms and lower bounds

A major goal in complexity theory is proving lower bounds, or in other words proving that some computational problems are hard for all possible algorithms whose resources are reasonably constrained (e.g., limited time or memory). It is a surprising fact that we can prove such lower bounds - limitations for an entire class of algorithms - by designing a single efficient algorithm that solves one of several specific problems.
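One well-known instance of this phenomenon, stated informally and only as background (it is Williams' "algorithmic method", not a claim about the specific works cited below): a deterministic satisfiability algorithm that beats brute-force search for a circuit class, even slightly, yields a lower bound against that class.

```latex
% An informal statement of Williams' algorithmic method (circa 2010),
% for a sufficiently well-behaved circuit class \mathcal{C}:
\text{If satisfiability of } n\text{-input, } \mathrm{poly}(n)\text{-size circuits from } \mathcal{C}
\text{ can be decided in deterministic time } 2^{n} / n^{\omega(1)},
\text{ then } \mathsf{NEXP} \not\subseteq \mathcal{C} .
```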

The prime example of a problem for which algorithms imply lower bounds is derandomization. In fact, simulating randomness efficiently is a challenge that turns out to be inherently connected to lower bounds. My focus in this context has been on two directions:

Some example contributions include [Tell 17], [Tell 18], [Chen-Tell 19], [Tell 19], [Chen-Rothblum-Tell-Yogev 20], [Hatami-Hoza-Tal-Tell 21], [Hatami-Hoza-Tal-Tell 23], and [Doron-Pyne-Tell 24].

Probabilistic algorithms that err extremely rarely

Theoretical models of probabilistic algorithms usually allow them to err with some small probability, say 1/3. The precise error bound is usually perceived as insignificant, since we can efficiently reduce the error by running the algorithm several times and taking a majority vote (there are also more sophisticated methods for doing so).
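Here is a minimal sketch of the basic majority-vote amplification mentioned above; the function prob_alg is a hypothetical placeholder for any probabilistic decision procedure with error at most 1/3.

```python
from collections import Counter

def amplify(prob_alg, x, trials=101):
    """Error reduction for a probabilistic decision algorithm by repetition.

    If a single call prob_alg(x) returns the correct yes/no answer with
    probability at least 2/3 (independently each time), then by a Chernoff
    bound the majority vote over `trials` independent runs is wrong with
    probability at most exp(-c * trials) for some constant c > 0.
    """
    answers = [prob_alg(x) for _ in range(trials)]
    return Counter(answers).most_common(1)[0][0]  # majority answer
```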

However, the intuition that the precise error bound doesn't matter turned out, in recent years, to be inaccurate. When the error is tiny, the precise choice of error bound matters a lot: In fact, it marks the difference between settings in which we can unconditionally derandomize all algorithms with the given error bound, and settings in which derandomizing algorithms with this error bound is essentially equivalent to derandomizing all probabilistic algorithms, defined with any reasonable error bound (say, 1/3).

My research focuses, on the one hand, on efficient ways to reduce the error bound of probabilistic algorithms, making it extremely small; and, on the other hand, on derandomizing algorithms whose error probability is indeed extremely small. The study of probabilistic algorithms that err extremely rarely, called quantified derandomization, connects research on algorithms, on complexity theory, and on mathematical objects called randomness extractors.

An exposition appears in this monograph.
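For concreteness, the standard way to formalize quantified derandomization (following Goldreich and Wigderson, and stated here only as background) is as the following promise problem.

```latex
% Quantified derandomization, for a circuit class \mathcal{C} and an error bound B(n):
\textbf{Given: } \text{a circuit } C \in \mathcal{C} \text{ over } n \text{ input bits, promised that either }
|C^{-1}(0)| \le B(n) \ \text{ or } \ |C^{-1}(1)| \le B(n),
\textbf{ decide } \text{(deterministically) which of the two cases holds.}
% The standard derandomization task corresponds to the bound B(n) = 2^{n}/3.
```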