The study of perfect matchings (aka 1-factors) in 2-connected 3-regular graphs dates back to the Four Color Problem (via Tait's equivalence, established in 1880). Schönberger (1934) proved that, in any such graph, each edge belongs to at least one perfect matching; this fact is easily deduced using Tutte's 1-factor Theorem (1947). We say that an edge is solitary if it belongs to precisely one perfect matching. Schönberger's result naturally leads to the following problems in the context of 2-connected 3-regular graphs: (i) characterize solitary edges, and (ii) prove bounds on the number of solitary edges.
In a recent work, Goedgebeur, Mattiolo, Mazzuoccolo, Renders, and Wolf proved that any 3-connected 3-regular graph has at most six solitary edges, and they also provided a complete characterization of those graphs that have three or more solitary edges.
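As a small illustration of these definitions (my own example, not from the talk): the complete graph K4 is 3-connected and 3-regular, has exactly three perfect matchings, and every one of its six edges lies in exactly one of them, so K4 attains the upper bound of six solitary edges. A minimal brute-force sketch:

```python
from itertools import combinations

def perfect_matchings(vertices, edges):
    """Enumerate all perfect matchings of a graph by brute force."""
    k = len(vertices) // 2
    result = []
    for subset in combinations(edges, k):
        covered = [v for e in subset for v in e]
        if len(set(covered)) == len(vertices):  # every vertex covered exactly once
            result.append(subset)
    return result

# K4: the complete graph on 4 vertices, a 3-connected cubic graph.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

pms = perfect_matchings(vertices, edges)
# An edge is solitary if it lies in exactly one perfect matching.
solitary = [e for e in edges if sum(e in pm for pm in pms) == 1]

print(len(pms))       # 3 perfect matchings
print(len(solitary))  # all 6 edges are solitary
```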
In joint work with Narayana, Mattiolo, and Gohokar, we generalize their characterizations and bounds to 3-edge-connected r-graphs. An r-graph, where r ≥ 3, is any r-regular graph with the additional property that each odd cut has at least r edges; when r = 3, this is precisely the class of 2-connected 3-regular graphs. Using Tutte's 1-factor Theorem, one may easily generalize the aforementioned result of Schönberger to all r-graphs.
We make extensive use of the dependence relation introduced by CLM (Carvalho, Lucchesi, and Murty) in 1999, and this is precisely what allows us to obtain stronger results. In this talk, I'll discuss all of this, including the historical context mentioned earlier. Basic familiarity with graph theory is expected; nothing beyond that.
Groups generated by a single element are precisely the cyclic groups, and their structure is well understood. This naturally leads to the question: what can we say about groups generated by two elements (called 2-generated groups)? Although this appears to be a mild generalization, it quickly reaches the core of finite group theory. A fundamental result states that every finite simple group is 2-generated. Since simple groups are the building blocks of all finite groups, as primes are in integer factorization, this highlights that 2-generated groups form a large and significant class in finite group theory. Moreover, many classical, structurally diverse families, including the symmetric, dihedral, and dicyclic groups, are 2-generated. The structural diversity among groups sharing this property reveals the complexity of the 2-generated class. Unlike the cyclic case, a complete characterization is difficult, so understanding 2-generated groups is an important structural problem.
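As a concrete check of 2-generation (my own illustration, not from the talk): a transposition together with an n-cycle generates the full symmetric group S_n. The sketch below computes the closure of two permutations of {0, 1, 2, 3} under composition and confirms it has 4! = 24 elements:

```python
def compose(p, q):
    """Composition of permutations given as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

def generated_subgroup(gens, n):
    """Closure of the given permutations of {0,...,n-1} under composition.
    In a finite group, closure under the generators suffices (no explicit
    inverses needed, since each element has finite order)."""
    identity = tuple(range(n))
    group = {identity}
    frontier = [identity]
    while frontier:
        g = frontier.pop()
        for s in gens:
            h = compose(s, g)
            if h not in group:
                group.add(h)
                frontier.append(h)
    return group

n = 4
transposition = (1, 0, 2, 3)  # swaps 0 and 1
n_cycle = (1, 2, 3, 0)        # maps 0->1, 1->2, 2->3, 3->0
G = generated_subgroup([transposition, n_cycle], n)
print(len(G))  # 24 = 4!, so S_4 is 2-generated
```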
This naturally raises the question of whether a generation property can be detected purely from the group order. The notion of P-numbers provides a natural framework for approaching such questions. More precisely, given a group-theoretic property P, a positive integer n is called a P-number if every group of order n satisfies property P. For instance, cyclic numbers are those integers for which all groups of that order are cyclic. A well-known characterization states that a positive integer n is a cyclic number if and only if gcd(n, φ(n)) = 1, where φ(n) is Euler's totient function.
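The gcd characterization makes cyclic numbers easy to test directly. A minimal sketch (an illustration, not code from the talk):

```python
from math import gcd

def phi(n):
    """Euler's totient: count of 1 <= k <= n with gcd(k, n) == 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def is_cyclic_number(n):
    """n is a cyclic number iff every group of order n is cyclic,
    which holds exactly when gcd(n, phi(n)) == 1."""
    return gcd(n, phi(n)) == 1

print([n for n in range(1, 20) if is_cyclic_number(n)])
# [1, 2, 3, 5, 7, 11, 13, 15, 17, 19]
```

For example, 15 is a cyclic number (every group of order 15 is cyclic), while 4 is not, since the Klein four-group Z2 x Z2 has order 4 but is not cyclic.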
Similar questions have been studied for properties such as being abelian, nilpotent, or satisfying the converse of Lagrange's theorem (CLT), revealing a deep interplay between group theory and number theory. Motivated by this, we introduce and study 2-generated numbers: integers n for which every group of order n can be generated by two elements. In this talk, we present an arithmetic characterization of such integers in terms of their prime factorization. This provides a new perspective on how group-generation properties are encoded in the underlying number-theoretic data.
The Connectivity Augmentation Problem (CAP) is a classic problem in network design: how can we add the minimum number of edges to a graph to make it more resilient to failures? In the online version of this problem, requests for increased connectivity between pairs of vertices arrive one by one. For each request, the algorithm must immediately and irrevocably decide which edges (or "links") to add to ensure the network remains robust, all without knowing what future requests will look like. In this talk, we present several new results that improve our understanding of this problem. For the general online CAP, we introduce a deterministic algorithm that achieves an O(log n) competitive ratio. This result is optimal up to constant factors and improves significantly over previous randomized bounds, which were sensitive to the graph's initial connectivity. Additionally, we consider the weighted version of the problem and provide an O(log^2 n)-competitive algorithm. We also explore the Random-Order Model, in which requests arrive in a uniform random order rather than in worst-case order. We show a lower bound of Omega(log n) for this setting, proving that random arrival order does not actually make the problem easier, even for the simplest case of trees.
This talk is based on joint work with Mohit Garg that appeared at SODA '26.
We consider the 2-Disjoint Shortest Paths (2-DSP) problem: given a directed weighted graph and two terminal pairs $(s_1,t_1)$ and $(s_2,t_2)$, decide whether there exist vertex-disjoint shortest paths between each pair.
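To make the problem statement concrete, here is a minimal brute-force sketch of 2-DSP on a small unweighted digraph (my own illustration; the graph and helper names are invented, and this exponential enumeration is nothing like the efficient algorithm presented in the talk):

```python
from itertools import product

def all_paths(graph, s, t, path=None):
    """All simple directed paths from s to t (exponential; illustration only)."""
    if path is None:
        path = [s]
    if s == t:
        yield list(path)
        return
    for v in graph.get(s, []):
        if v not in path:
            path.append(v)
            yield from all_paths(graph, v, t, path)
            path.pop()

def two_dsp(graph, s1, t1, s2, t2):
    """Brute-force 2-DSP on an unweighted digraph: do there exist
    vertex-disjoint shortest paths for the two terminal pairs?"""
    def shortest_paths(s, t):
        paths = list(all_paths(graph, s, t))
        if not paths:
            return []
        d = min(len(p) for p in paths)
        return [p for p in paths if len(p) == d]
    return any(set(p).isdisjoint(q)
               for p, q in product(shortest_paths(s1, t1),
                                   shortest_paths(s2, t2)))

# Both pairs have a shortest path through the shared vertex 'c',
# but vertex-disjoint shortest paths also exist (via 'a' and 'b').
g = {'s1': ['a', 'c'], 's2': ['b', 'c'],
     'a': ['t1'], 'b': ['t2'], 'c': ['t1', 't2']}
print(two_dsp(g, 's1', 't1', 's2', 't2'))  # True
```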
Building on recent advances in disjoint shortest paths for DAGs and undirected graphs~(Akmal et al. 2024), we present an $O(mn \log n)$-time algorithm for this problem in weighted directed graphs that do not contain negative or zero weight cycles. This is a significant improvement over the previously known $O(m^5n)$ time bound (Bérczi et al., 2017). Our approach exploits the algebraic structure of polynomials that enumerate shortest paths between terminal pairs. A key insight is that these polynomials admit a recursive decomposition, enabling efficient evaluation via dynamic programming over fields of characteristic two.
In addition, we extend our techniques to a more general setting: given two terminal pairs $(s_1, t_1)$ and $(s_2, t_2)$ in a directed graph, find the minimum possible number of vertex intersections between any shortest path from $s_1$ to $t_1$ and any shortest path from $s_2$ to $t_2$. We call this the \emph{Minimum 2-Disjoint Shortest Paths (Min-2-DSP)} problem. We provide the first efficient algorithms for this problem, including an $O(m^2 n^3)$-time algorithm for directed graphs with positive edge weights and an $O(m+n)$-time algorithm for DAGs and undirected graphs.
This is joint work with Keerti Choudhary and Amit Kumar, presented at ITCS 2026.
SETH (Strong Exponential Time Hypothesis) has been used to show fine-grained lower bounds (e.g., that the diameter of a graph cannot be determined in O(n^{2-\epsilon}) time). We will see a non-deterministic version of it (NSETH) and some of its consequences. En route, we will see a better-than-brute-force algorithm for Max2SAT and a better-than-brute-force non-deterministic algorithm for the complement of the 3-SUM problem.
We introduce and investigate the computational complexity of a novel physical problem, the Pinball Wizard problem. It involves an idealized pinball moving through a maze composed of one-way gates (outswing doors), plane walls, parabolic walls, moving plane walls, and bumpers that cause acceleration or deceleration. Given the pinball's initial position and velocity, the task is to decide whether it will ever hit a specified target point. By simulating a two-stack pushdown automaton, we show that the problem is Turing-complete – even in two-dimensional space. In our construction, each step of the automaton corresponds to a constant number of reflections. Thus, deciding the Pinball Wizard problem is at least as hard as the Halting problem. Furthermore, our construction allows bumpers to be replaced with moving walls. In this case, even a ball moving at constant speed – a so-called ray particle – can be used, demonstrating that the Ray Particle Tracing problem is also Turing-complete.
Boolean circuits provide a combinatorial representation of computation, where the number of gates (the size) represents running time, and the number of layers (the depth) captures parallel running time. They form a framework for answering fundamental questions such as P vs NP: proving that some NP problem requires circuits of superpolynomial size would separate P from NP. With limited progress on this question for general circuits, early breakthroughs focused on restricted circuit classes. Håstad (STOC '86) proved that constant-depth circuits with AND, OR, and NOT gates require exponential size to compute the parity function (which determines whether the sum of the inputs is even or odd). Razborov (Matematicheskie Zametki '87) and, independently, Smolensky (STOC '87) extended this to circuits augmented with parity or modular gates (for prime moduli), showing that such circuits require exponential size to compute the majority function (which determines whether the sum of the inputs is at least half their number). Following these foundational results, research has advanced along two principal directions, though further progress has become increasingly challenging, with only a handful of breakthroughs (Williams '14, Murray and Williams '19). This talk presents some of Vaibhav's work that advances the frontier in both directions.
This talk surveys several basic ideas from algorithmic information theory and highlights a few recent developments. It begins with Kolmogorov complexity, which measures the amount of algorithmic information contained in a finite string. From this emerges a precise notion of randomness for infinite sequences: a sequence is random if all but finitely many of its prefixes are incompressible, meaning they contain essentially full information. The discussion then turns to constructive dimension, which refines this notion by measuring the rate of randomness. Informally, this is obtained by considering the Kolmogorov complexity of longer and longer prefixes, normalizing by their length, and examining the limiting behavior. Finally, the talk examines the polynomial-time analogue of constructive dimension and a longstanding problem of robustness in this setting. Roughly speaking, the question is whether polynomial-time dimension can also be characterized in terms of rates of polynomial-time bounded Kolmogorov complexity. Somewhat surprisingly, this question is closely connected to the existence of one-way functions.
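The informal description of constructive dimension above can be written as a formula (a standard characterization due to Mayordomo, stated here for orientation; $S \restriction n$ denotes the first $n$ bits of the sequence $S$ and $K$ is prefix-free Kolmogorov complexity):

```latex
\operatorname{cdim}(S) \;=\; \liminf_{n \to \infty} \frac{K(S \restriction n)}{n},
\qquad
\operatorname{cDim}(S) \;=\; \limsup_{n \to \infty} \frac{K(S \restriction n)}{n}.
```

A random sequence has dimension 1 (its prefixes are maximally incompressible), while a sequence whose prefixes compress to roughly half their length has dimension 1/2.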