We introduce and investigate the computational complexity of a novel physical problem, the Pinball Wizard problem. It involves an idealized pinball moving through a maze composed of one-way gates (outswing doors), plane walls, parabolic walls, moving plane walls, and bumpers that cause acceleration or deceleration. Given the pinball's initial position and velocity, the task is to determine whether it will hit a specified target point. By simulating a two-stack pushdown automaton, we show that the problem is Turing-complete, even in two-dimensional space. In our construction, each step of the automaton corresponds to a constant number of reflections. Thus, deciding the Pinball Wizard problem is at least as hard as the Halting problem. Furthermore, our construction allows bumpers to be replaced with moving walls. In this case, even a ball moving at constant speed, a so-called ray particle, can be used, demonstrating that the Ray Particle Tracing problem is also Turing-complete.
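The Turing-completeness claim rests on the classical fact that a pushdown automaton with two stacks can simulate a Turing machine: the two stacks hold the tape contents on either side of the head, and a head move pops one stack and pushes the other. The sketch below (an illustration of that standard argument, not code from the paper) shows the tape-as-two-stacks encoding:

```python
# Standard two-stack simulation of a Turing machine tape.
# Invariant: `right[-1]` is the cell under the head; `left[-1]` is the
# cell immediately to the head's left. Empty stack positions read as blank.

class TwoStackTape:
    def __init__(self, blank="_"):
        self.blank = blank
        self.left = []   # cells left of the head, nearest cell on top
        self.right = []  # current cell and everything to its right

    def read(self):
        return self.right[-1] if self.right else self.blank

    def write(self, symbol):
        if self.right:
            self.right[-1] = symbol
        else:
            self.right.append(symbol)

    def move_right(self):
        # current cell goes onto the left stack; expose the next cell
        self.left.append(self.right.pop() if self.right else self.blank)

    def move_left(self):
        # nearest left cell becomes the current cell
        self.right.append(self.left.pop() if self.left else self.blank)

tape = TwoStackTape()
tape.write("a")
tape.move_right()
tape.write("b")
tape.move_left()
assert tape.read() == "a"
```

In the paper's construction, each transition of such a two-stack automaton is realized by a constant number of reflections of the pinball.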
Boolean circuits provide a combinatorial representation of computation, where the number of gates (the size) represents running time, and the number of layers (the depth) captures parallel running time. They form a framework for answering fundamental questions such as P vs NP: proving that some NP problem requires circuits of superpolynomial size would separate P from NP. With limited progress on this question for general circuits, early breakthroughs focused on restricted circuit classes. Håstad (STOC '86) proved that constant-depth circuits with AND, OR, and NOT gates require exponential size to compute the parity function (which determines whether the sum of inputs is even or odd). Razborov (Matematicheskie Zametki '87) and, independently, Smolensky (STOC '87) extended this to circuits augmented with parity or modular gates (for prime moduli), showing that such circuits require exponential size to compute the majority function (which determines whether the sum of inputs is at least half their number). Following these foundational results, research has advanced along two principal directions, though further progress has become increasingly challenging, with only a handful of breakthroughs (Williams '14, Murray and Williams '19). This talk presents some of Vaibhav's work that advances the frontier in both directions.
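For concreteness, the two hard functions in these lower bounds can be written out directly (a minimal illustration matching the parenthetical definitions above, not part of the talk):

```python
def parity(bits):
    """1 iff the sum of the inputs is odd."""
    return sum(bits) % 2

def majority(bits):
    """1 iff the sum of the inputs is at least half their number."""
    return int(2 * sum(bits) >= len(bits))

assert parity([1, 0, 1, 1]) == 1   # three ones: odd
assert majority([1, 0, 1, 1]) == 1 # three of four: at least half
assert majority([1, 0, 0, 0]) == 0 # one of four: less than half
```

Both functions are trivially computable, yet provably require exponential size in the restricted circuit classes above.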
This talk surveys several basic ideas from algorithmic information theory and highlights a few recent developments. It begins with Kolmogorov complexity, which measures the amount of algorithmic information contained in a finite string. From this emerges a precise notion of randomness for infinite sequences: a sequence is random if all but finitely many of its prefixes are incompressible, meaning they contain essentially full information. The discussion then turns to constructive dimension, which refines this notion by measuring the rate of randomness. Informally, this is obtained by considering the Kolmogorov complexity of longer and longer prefixes, normalizing by their length, and examining the limiting behavior. Finally, the talk examines the polynomial-time analogue of constructive dimension and a longstanding problem of robustness in this setting. Roughly speaking, the question is whether polynomial-time dimension can also be characterized in terms of rates of polynomial-time-bounded Kolmogorov complexity. Somewhat surprisingly, this question is closely connected to the existence of one-way functions.
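The limiting behavior sketched above has a standard closed form (Mayordomo's characterization of constructive dimension; notation below is the usual one, not taken from the talk). Writing $K$ for Kolmogorov complexity and $S \upharpoonright n$ for the length-$n$ prefix of an infinite sequence $S$:

$$\operatorname{cdim}(S) \;=\; \liminf_{n \to \infty} \frac{K(S \upharpoonright n)}{n}.$$

A random sequence has dimension $1$ (its prefixes are maximally incompressible), while highly compressible sequences have dimension closer to $0$; the robustness question asks whether the polynomial-time analogue admits the same kind of characterization with $K$ replaced by its polynomial-time-bounded variant.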