Day 1
Registration 8:00 - 9:15
9:00 - 10:00
BCR relaxation for Steiner tree and Steiner forest
Jarosław (Jarek) Byrka (University of Wrocław)
The Steiner tree problem is one of the most prominent problems in network design. Given an edge-weighted undirected graph and a subset of the vertices, called terminals, the task is to compute a minimum-weight tree containing all terminals (and possibly further vertices). The best-known approximation algorithms for Steiner tree involve enumeration of a (polynomial but) very large number of candidate components and are therefore slow in practice.
A promising ingredient for the design of fast and accurate approximation algorithms for Steiner tree is the bidirected cut relaxation (BCR): bidirect all edges, choose an arbitrary terminal as a root, and enforce that each cut containing some terminal but not the root has one unit of fractional edges leaving it. BCR is known to be integral in the spanning tree case [Edmonds'67], i.e., when all the vertices are terminals. For general instances, however, it was not even known whether the integrality gap of BCR is better than the integrality gap of the natural undirected relaxation, which is exactly 2. We resolve this question by proving an upper bound of 1.9988 on the integrality gap of BCR.
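For concreteness, BCR can be stated as the following LP (a sketch of the standard formulation, in our notation, with $R$ the terminal set, $r \in R$ the chosen root, $A$ the arcs obtained by bidirecting the edges, and $\delta^+(S)$ the arcs leaving $S$):
\[
\min \sum_{a \in A} c_a x_a \quad \text{s.t.} \quad \sum_{a \in \delta^+(S)} x_a \ge 1 \;\; \forall S \subseteq V \setminus \{r\} \text{ with } S \cap R \neq \emptyset, \qquad x \ge 0.
\]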
In the second part of the talk I will discuss an LP relaxation for Steiner Forest that generalizes BCR for Steiner Tree. We prove that this relaxation has several promising properties. Among them, it is possible to round any half-integral LP solution to a Steiner Forest instance while increasing the cost by at most a factor 16/9. To prove this result we introduce a novel recursive densest-subgraph contraction algorithm.
Coffee Break 10:00 - 10:30
10:30 - 12:00
A Framework for k-Median and k-Means Approximation
Fabrizio Grandoni (IDSIA, USI-SUPSI)
We present a recent framework that leads to the current best approximation algorithms for k-median and k-means.
The connected min-sum radii problem
Hyung-Chan An (Yonsei University)
In this talk, we consider the connected minimum sum of radii problem. In this problem, we are given as input a metric defined on a set of facilities and clients, along with some cost parameters. The objective is to open a subset of facilities, assign every client to an open facility, and connect the open facilities using a Steiner tree so that the weighted (by the cost parameters) sum of the maximum assignment distance of each open facility and the Steiner tree cost is minimized. This problem brings the min-sum radii objective, which is widely considered in the clustering literature, to the connected facility location problem, a well-studied network design/clustering problem. The problem arises, for example, in communication network design on a shared medium and in energy optimization of mobile wireless chargers.
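As a rough sketch (our notation; the precise cost parameters are specified in the talk), the objective can be written as
\[
\min_{F' \subseteq F,\ \sigma: C \to F',\ T} \quad \alpha \sum_{i \in F'} \max_{j \in C:\ \sigma(j) = i} d(i, j) \;+\; \beta \cdot d(T),
\]
where $F'$ is the set of open facilities, $\sigma$ assigns clients to open facilities, $T$ is a Steiner tree connecting $F'$, $d(T)$ is its total length, and $\alpha, \beta$ stand in for the given cost parameters.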
Joint work with Mong-Jen Kao.
Submodular k-Partitioning via Principal Partition Sequence
Karthik Chandrasekaran (UIUC)
In the submodular k-partitioning problem, the input is a submodular function along with an integer k. The goal is to partition the ground set into k non-empty parts in order to minimize the sum of the function value of the parts. Several partitioning/clustering problems over graphs, hypergraphs, and matroids can be cast as special cases of submodular k-partition, and consequently, the approximability of the problem for three families of submodular functions is of particular interest – symmetric, monotone, and posimodular submodular functions. In this talk, I will outline the principal partition sequence-based approach to approximating submodular k-partitioning for these three families. A generalization of the problem---termed {s,t}-separating submodular k-partition---requires the k-partition to also separate two specified elements s and t. I will sketch the theory of {s,t}-separating principal partition sequence, and its application to approximating {s,t}-separating submodular k-partition.
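In symbols (our notation, with ground set $V$ and submodular $f: 2^V \to \mathbb{R}_{\ge 0}$), the objective is
\[
\min \Big\{ \sum_{i=1}^{k} f(V_i) \;:\; V_1, \ldots, V_k \text{ partition } V,\; V_i \neq \emptyset \text{ for all } i \Big\}.
\]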
Lunch (on your own) 12:00 - 13:30
13:30 - 14:30
A Polylogarithmic Approximation for Buy-at-Bulk with Protection
Rhea Jain (UIUC)
The nonuniform multicommodity buy-at-bulk problem is a classical and practical problem in network design, where the goal is to construct a low-cost network that supports multicommodity flow between node pairs. The cost of purchasing capacity on an edge is given by a sub-additive function, thus capturing settings that exhibit economies of scale. In the fault-tolerant setting, the goal is to protect against node or edge failures by sending demand flow along disjoint paths. Despite substantial work addressing various special cases, no nontrivial approximation algorithm was known for fault-tolerant nonuniform buy-at-bulk, even for protecting against a single edge failure (k=2)!
In this talk, I will present recent work that breaks this long-standing barrier and yields the first polylogarithmic approximation algorithm for fault-tolerant nonuniform buy-at-bulk protecting against a single node failure.
A Better-Than-5/4 Approximation for Two-Edge Connectivity
Alex Lindermayr (Simons Institute)
The $2$-Edge-Connected Spanning Subgraph problem (2ECSS) is among the most basic survivable network design problems: given an undirected and unweighted graph, the task is to find a spanning subgraph with the minimum number of edges that is 2-edge-connected (i.e., it remains connected after the removal of any single edge). 2ECSS is an NP-hard problem that has been extensively studied in the context of approximation algorithms. The best-known approximation ratio for 2ECSS prior to our work was $1.3+\varepsilon$, for any constant $\varepsilon>0$ [Garg, Grandoni, Jabal-Ameli'23; Kobayashi, Noguchi'23]. We first present a $5/4$-approximation algorithm, and also discuss how we can improve it slightly to $5/4 - \eta$ for some constant $\eta \geq 10^{-6}$.
A Better-Than-2 Approximation for the Directed Tree Augmentation Problem [arxiv]
Olha Silina (CMU)
We introduce and study a directed analogue of the weighted Tree Augmentation Problem (WTAP). In the weighted Directed Tree Augmentation Problem (WDTAP), we are given an oriented tree T and a set of directed links L with positive costs. The goal is to select a minimum-cost set of links that enters each fundamental dicut of T (a cut with one leaving and no entering tree arc). WDTAP captures the problem of covering a cross-free set family with directed links and can be approximated to within a factor of 2 using standard techniques. We provide an improved $(1.75+\varepsilon)$-approximation algorithm for WDTAP in the case where the links have bounded costs, a setting that has received significant attention for WTAP. Joint work with Meike Neuwohner and Michael Zlatin.
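In other words (our notation), WDTAP is the covering problem
\[
\min \sum_{\ell \in L} c_\ell x_\ell \quad \text{s.t.} \quad \sum_{\ell \in L:\ \ell \text{ enters } C} x_\ell \ge 1 \;\; \text{for every fundamental dicut } C \text{ of } T, \qquad x \in \{0,1\}^L.
\]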
Online Resource Allocation with Concave, Diminishing-Returns Objectives
Kalen Patton (Georgia Tech)
We examine a broad class of fractional online resource allocation problems, where we only assume our objective is monotone, concave, and DR-submodular. This framework captures fractional bipartite matching (for which (1-1/e)-competitive hardness is known) as well as many settings for which an optimal (1-1/e)-competitive algorithm exists, including AdWords (MSVV'05), matching with concave rewards (DJ'12), whole page optimization (DHKMY'13), and online submodular assignment (HJPSZ'24). Surprisingly, we show that our small set of assumptions is already sufficient to obtain the optimal 1-1/e competitive ratio, thereby unifying and generalizing these prior results. Our analysis uses new primal-dual tools which may be of independent interest.
Optimal Online Bipartite Matching in Degree-2 Graphs
Arghya Chakraborty (TIFR)
Online bipartite matching is a classical problem in online algorithms, and it is known that deterministic fractional and randomized integral online matching achieve the same competitive ratio of 1-1/e. In this work, we study classes of graphs where the online degree is restricted to 2. As expected, one can achieve competitive ratios better than 1-1/e in both the deterministic fractional and the randomized integral case, but, surprisingly, the two ratios are not the same. It was already known that for fractional matching a 0.75-competitive algorithm is optimal. We show that the folklore Half-Half algorithm achieves a competitive ratio of ~0.717772 and, more surprisingly, that this is optimal by giving a matching lower bound. This yields a separation between the two problems: deterministic fractional and randomized integral.
Weighted k-server problem admits an exponentially-competitive algorithm [arxiv]
Adithya Bijoy (NUS)
In this talk, I will explain recent progress on the weighted k-server problem.
Coffee + Gelato Break 14:30 - 15:30
15:30 - 16:30
Optimal Stopping with a Predicted Prior [arxiv]
Zhiyi Huang (HKU)
There are two major models of value uncertainty in the optimal stopping literature: the secretary model, which assumes no prior knowledge, and the prophet inequality model, which assumes full information about value distributions. In practice, decision makers often rely on machine-learned priors that may be erroneous. Motivated by this gap, we formulate the model of optimal stopping with a predicted prior to design algorithms that are both consistent, exploiting the prediction when accurate, and robust, retaining worst-case guarantees when it is not.
Existing secretary and prophet inequality algorithms are either pessimistic in consistency or not robust to misprediction. A randomized combination only interpolates their guarantees linearly. We show that a family of bi-criteria algorithms achieves improved consistency-robustness trade-offs, both for maximizing the expected accepted value and for maximizing the probability of accepting the maximum value. We further prove that for the latter objective, no algorithm can simultaneously match the best prophet inequality algorithm in consistency, and the best secretary algorithm in robustness.
The forbidden pattern method for graph sparsification
Greg Bodwin (University of Michigan)
In theoretical computer science, one often wants to either (1) sparsify a graph while approximating its important structural properties (spanners, preservers, cut/flow/spectral sparsifiers, etc), or (2) edit a few edges of a graph while improving its important structural properties (hopsets, shortcut sets, low-diameter decompositions, etc). We will survey the "forbidden pattern method," a recently popular approach to these problems, which works by reducing to extremal Turán-type problems about forbidden patterns in some combinatorial structure.
16:30 - 17:30
Edmonds++: Efficiently Coloring More Matroid Intersections [arxiv]
Michael Zlatin (Pomona College)
Edmonds' classical augmenting-path algorithm for matroid coloring shows how to partition the elements of a single matroid into the fewest independent sets. For matroid intersection coloring, however, nothing better than an O(log n)-approximation is known in general. We give the first constant-factor approximation for coloring the intersection of a general matroid with one or more partition matroids. Leveraging the fact that many standard combinatorial matroids (graphic, laminar, transversal, gammoid) can be reduced to partition matroids at a loss of a factor of two in the chromatic number, we obtain constant approximations for all of these cases as well.
Unsplittable Multicommodity Flow in Undirected Graphs
Nikhil Kumar (TIFR)
We study multicommodity flows in undirected graphs, where we are given a supply graph G with edge capacities and a demand graph H that specifies source-sink pairs and their demands. An instance is feasible if all demands can be routed without violating the edge capacities. In the unsplittable version, each commodity must be routed along a single path. We investigate instances where the existence of a feasible (splittable) flow guarantees an unsplittable flow with only small capacity violations. We focus on instances where feasibility admits a natural characterization, namely the cut condition: for every cut, the total demand crossing the cut is at most the total capacity of the edges crossing it. We show that, for instances where the cut condition is sufficient for feasibility (cut-sufficient instances), there exists a polynomial-time-computable unsplittable flow that violates edge capacities only by a small amount. These are the first results of this kind for unsplittable multicommodity flows since the seminal work of Seymour, Schrijver, and Winkler on rings.
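In symbols (our notation, with capacities $c$ on the supply graph $G$ and demands $d$ on the demand graph $H$), the cut condition reads
\[
\sum_{e \in \delta_G(S)} c_e \;\ge\; \sum_{e \in \delta_H(S)} d_e \qquad \text{for every } S \subseteq V.
\]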
On the Capacitated Facility Location Problem and Multicommodity Flow Network Relaxation
Mong-Jen Kao (National Yang-Ming Chiao-Tung University)
The study of classic covering and location problems with limited-capacity supplies has emerged over the past two decades and drawn wide attention, with a sequence of algorithmic results. In this talk I will introduce the capacitated facility location problem and related algorithmic results.
Approximating the Held-Karp Bound for Metric TSP in Nearly Linear Work and Polylogarithmic Depth [arxiv]
Sorrachai Yingchareonthawornchai (ETH Zurich)
We present a nearly linear work parallel algorithm for approximating the Held-Karp bound for the Metric TSP problem. Given an edge-weighted undirected graph G=(V,E) on m edges and ϵ>0, it returns a (1+ϵ)-approximation to the Held-Karp bound with high probability, in Õ(m/ϵ^4) work and Õ(1/ϵ^4) depth. While a nearly linear time sequential algorithm was known for almost a decade (Chekuri and Quanrud'17), it was not known how to simultaneously achieve nearly linear work alongside polylogarithmic depth. Using a reduction by Chalermsook et al.'22, we also give a parallel algorithm for computing a (1+ϵ)-approximate fractional solution to the k-edge-connected spanning subgraph (kECSS) problem, with similar complexity.
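For reference, the Held-Karp bound for a metric $(V, d)$ is the optimum of the subtour-elimination LP (standard formulation, stated here in our notation):
\[
\min \sum_{\{u,v\}} d(u,v)\, x_{uv} \quad \text{s.t.} \quad x(\delta(v)) = 2 \;\; \forall v \in V, \qquad x(\delta(S)) \ge 2 \;\; \forall\, \emptyset \neq S \subsetneq V, \qquad x \ge 0.
\]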
To obtain these results, we introduce a notion of core-sequences for the parallel Multiplicative Weights Update (MWU) framework (Luby-Nisan'93, Young'01). For the Metric TSP and kECSS problems, core-sequences enable us to exploit the structure of approximate minimum cuts to reduce the cost per iteration and/or the number of iterations. The acceleration technique via core-sequences is generic and of independent interest. In particular, it improves the best-known iteration complexity of MWU algorithms for packing/covering LPs from polylog(nnz(A)) to polylogarithmic in the product of cardinalities of the core-sequence sets, where A is the constraint matrix of the LP. For certain implicitly defined LPs such as the kECSS LP, this yields an exponential improvement in depth.
18:30 - 21:00
Day 2
9:00 - 10:00
Breaking a Long-Standing Barrier: 2–ε Approximation for Steiner Forest
MohammadTaghi Hajiaghayi (University of Maryland)
The Steiner Forest problem, also known as the Generalized Steiner Tree problem, is a fundamental optimization problem on edge-weighted graphs where, given a set of vertex pairs, the goal is to select a minimum-cost subgraph such that each pair is connected. This problem generalizes the Steiner Tree problem, first introduced in 1811, for which the best known approximation factor is 1.39 by [Byrka, Grandoni, Rothvoß, and Sanità, 2010].
The celebrated work of [Agrawal, Klein, and Ravi, 1989], along with refinements by [Goemans and Williamson, 1992], established a 2-approximation for Steiner Forest over 35 years ago. Despite the long-standing importance of this problem, breaking the approximation factor of 2 has remained a major challenge. In this talk, I present a novel deterministic algorithm that breaks the approximation barrier of 2 for this fundamental problem. As a key component of our approach, we also introduce a novel dual-based local search algorithm for the Steiner Tree problem with an approximation guarantee of 1.943, which is of independent interest.
Break 10:00 - 10:30
10:30 - 12:00
Sequential Testing with Subadditive Costs
Viswanath Nagarajan (University of Michigan)
In the classic sequential testing problem, we are given a system with several components, each of which fails independently with some probability. The goal is to identify whether or not some component has failed. When the test costs are additive, it is well known that a greedy algorithm finds an optimal solution. We consider a much more general setting with subadditive cost functions and provide a (4ρ+γ)-approximation algorithm, assuming a γ-approximate value oracle (that computes the cost of any subset) and a ρ-approximate ratio oracle (that finds a subset with minimum ratio of cost to failure probability). While the natural greedy algorithm has a poor approximation ratio in the subadditive case, we show that a suitable truncation achieves the above guarantee.
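For the additive-cost baseline (our notation, with component $i$ having test cost $c_i$ and independent failure probability $p_i$), testing in an order $\pi$ and stopping at the first observed failure incurs expected cost
\[
\mathbb{E}[\mathrm{cost}(\pi)] \;=\; \sum_{t=1}^{n} c_{\pi(t)} \prod_{s<t} \big(1 - p_{\pi(s)}\big),
\]
and the classical greedy rule minimizes this by testing components in non-decreasing order of $c_i / p_i$.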
Games of combinatorial optimization
Yuval Rabani (Hebrew University of Jerusalem)
Six decades ago, Lehman and Edmonds developed a remarkable theory which, among other consequences for the later development of matroid theory, exhibited a class of two-player alternating-turn combinatorial games in which winning moves can be computed in polynomial time. The original example of such a game is the Shannon switching game. This existing theory resolves only the binary win-lose question. But, as with most matroid questions, once we endow the ground set of the matroid with a weight function, the decision question generalizes to an optimization problem. In the game-theoretic setting, this leads to a zero-sum game with a numerical payoff rather than merely a win-lose outcome. We shall discuss problems of this flavor and present some results and open questions.
Almost Tight Additive Guarantees for k-Edge Connectivity
Chaitanya Swamy (University of Waterloo)
We consider the k-edge connected spanning subgraph (kECSS) problem, where we are given an undirected graph with nonnegative edge costs, and we seek a minimum-cost subgraph that is k-edge connected, i.e., there are k edge-disjoint paths between every pair of nodes.
For even k, we present a polytime algorithm that computes a (k-2)-edge connected subgraph of cost at most the optimal value of the natural LP-relaxation for kECSS; for odd k, this leads to a (k-3)-edge connected subgraph of cost at most the LP optimum. Since kECSS is APX-hard for all $k \geq 2$, the tightest connectivity guarantee one can hope for without exceeding the optimal value is (k-1)-edge connectivity, so our results are nearly tight. They also significantly improve upon the recent work of Hershkowitz et al., both in terms of solution quality and the simplicity of the algorithm and its analysis. Our result is obtained using iterative rounding, with additional insights involving uncrossing tight sets for kECSS. A slight variant of our approach yields a (k-1)-edge connected subgraph of cost at most 1.5 times the LP optimum; with unit edge costs, the cost guarantee improves to (1+4/3k) times the LP optimum, which improves upon the state-of-the-art approximation for unit edge costs, but with a unit loss in edge connectivity. The kECSS result also yields results for the k-edge connected spanning multigraph (kECSM) problem, where multiple copies of an edge can be selected. Our techniques extend to the degree-bounded versions of kECSS and kECSM, wherein we also impose lower and upper bounds on the node degrees.
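For reference, the natural LP-relaxation for kECSS mentioned above is the standard cut-covering LP (our notation):
\[
\min \sum_{e \in E} c_e x_e \quad \text{s.t.} \quad x(\delta(S)) \ge k \;\; \forall\, \emptyset \neq S \subsetneq V, \qquad 0 \le x_e \le 1 \;\; \forall e \in E.
\]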
Joint work with Nikhil Kumar.
Lunch (on your own) 12:00 - 14:00
14:00 - 14:30
Break 14:30 - 15:00
15:00 - 15:30
On the Approximability of Directed Steiner Tree
Bundit Laekhanukit
The Directed Steiner Tree (DST) problem is a notoriously hard problem with a major complexity gap in approximation: quasi-polynomial-time algorithms achieve a polylogarithmic $O(\log^2 k/\log\log k)$ approximation, yet no sub-polynomial approximation is known in polynomial time. This talk addresses the boundary of this gap. We review crucial evidence, both published and folklore, that supports or refutes the possibility of a polynomial-time polylogarithmic approximation for DST.
15:30 - 16:40
Fair Transit Stop Placement: a Clustering Perspective and Beyond [arxiv]
Yuhang Guo (UNSW)
We study the Transit Stop Placement problem (TrSP) in general metric spaces, where each agent travels from a source to a destination and a planner must place shuttle transit stops to minimize agents' travel costs. Each agent may either walk directly or utilize the shuttle service via designated stops. We focus on incorporating fairness into this setting by examining two key notions: Justified Representation (JR) and the core. Our first contribution establishes a formal connection between fair TrSP and fair clustering. Specifically, we show that any $\rho$-Proportional Fair (PF) clustering algorithm yields a $(2, \rho)$-core solution for the corresponding TrSP instance, and conversely, any algorithm achieving $\beta$-JR in TrSP implies a $2\beta$-PF solution in the associated clustering problem. Next, we prove that JR may not exist in general metric spaces, implying that the core can also be empty. We establish a lower bound of approximately $1.366$ on the approximation of JR. We analyze the Greedy Capture for TrSP (GCT) algorithm and show it satisfies $4.236$-JR and $(2, 2.414)$-core. To improve JR approximation, we propose the Expanding Cost Algorithm (ECA), which achieves a tight $2.414$-JR guarantee but fails to approximate the core for any bounded parameters. To balance both fairness notions, we introduce the $\lambda$-Hybrid algorithm, which integrates GCT and ECA via a tunable parameter $\lambda$. This approach yields a trade-off between JR and core approximation, offering a flexible framework for fair transit stop placement in general metric spaces.
Nearly Tight Regret Bounds for Profit Maximization in Bilateral Trade [arxiv]
Simone di Gregorio (Sapienza University of Rome)
Bilateral trade models the task of intermediating between two strategic agents, a seller and a buyer, willing to trade a good for which they hold private valuations. We study this problem from the perspective of a broker, in a regret minimization framework. At each time step, a new seller and buyer arrive, and the broker has to propose a mechanism that is incentive-compatible and individually rational, with the goal of maximizing profit.
A new algorithm analysis framework: By the book analysis [arxiv]
Eleon Bach (TU Munich)
Narrowing the gap between theory and practice is a longstanding goal of the algorithm analysis community. To further our understanding of how algorithms work in practice, we propose a new algorithm analysis framework that we call by the book analysis. In contrast to earlier frameworks, by the book analysis models not only an algorithm's input data but also the algorithm itself. Results from by the book analysis are meant to correspond well with established knowledge of an algorithm's practical behavior, as they are grounded in observations from implementations, input-modeling best practices, and measurements on practical benchmark instances. Our case study is the simplex method, which similarly served as the showcase for the state-of-the-art framework of smoothed analysis (Spielman and Teng, STOC'01). We prove that, under input-scaling assumptions, feasibility tolerances, and other design principles used by simplex method implementations, the simplex method indeed attains a polynomial running time. This is joint work with Alexander Black, Sophie Huiberts and Sean Kafer.
Minimum Cost Nowhere-Zero Flows and Cut-Balanced Orientations
Siyue Liu (CMU)
Flows and colorings are disparate concepts in graph algorithms---the former is tractable while the latter is intractable. Tutte introduced the concept of nowhere-zero flows to unify these two concepts. Jaeger showed that nowhere-zero flows are equivalent to cut-balanced orientations. Motivated by connections between nowhere-zero flows, cut-balanced orientations, Nash-Williams' well-balanced orientations, and postman problems, we study optimization versions of nowhere-zero flows and cut-balanced orientations. Given a bidirected graph with asymmetric costs on two orientations of each edge, we study the min cost nowhere-zero k-flow problem and min cost k-cut-balanced orientation problem. We show that both problems are NP-hard to approximate within any finite factor. Given the strong inapproximability result, we design bicriteria approximations for both problems. For the case of symmetric costs (where the costs of both orientations are the same for every edge), we show that the nowhere-zero k-flow problem remains NP-hard and admits a 3-approximation. Joint work with Karthekeyan Chandrasekaran (UIUC) and R. Ravi (CMU).
Maximum Matching in $\Theta(\log \log n)$ Passes in Dynamic Streams [arxiv]
Janani Sundaresan (University of Waterloo)
In the dynamic streaming model, an $n$-vertex input graph is defined through a sequence of edge insertions and deletions in a stream. The algorithms are allowed to process this stream in multiple passes while using $O(n\,\mathrm{polylog}(n))$ space. This model was introduced in the seminal work of Ahn, Guha, and McGregor (AGM) in 2012 and has been studied extensively since.
In this talk, we focus on approximating maximum matching in the dynamic streaming model. An $O(1)$-approximation algorithm in $O(\log n)$ passes was already introduced by [AGM12], but improving the number of passes has remained elusive. We give a randomized sketching-based algorithm that achieves an $O(1)$-approximation in only $O(\log \log n)$ passes and $O(n\,\mathrm{polylog}(n))$ space, which exponentially improves the state of the art. Using standard techniques, the approximation ratio can be improved to $(1+\varepsilon)$ for any constant $\varepsilon > 0$ with asymptotically the same space and number of passes.
Furthermore, we prove the first multi-pass lower bound for this problem, showing that $\Omega(\log \log n)$ passes are also necessary for any algorithm that finds an $O(1)$-approximation to maximum matching in $O(n\,\mathrm{polylog}(n))$ space.
Our upper and lower bounds collectively settle the pass complexity of this fundamental problem in the dynamic streaming model. This is joint work with Sepehr Assadi, Soheil Behnezhad, Christian Konrad and Kheeran K. Naidu.
Fast Nearest Neighbor Search for $\ell_p$ Metrics
Nir Petruschka (Weizmann Institute)
The Nearest Neighbor Search (NNS) problem asks to design a data structure that preprocesses an $n$-point dataset $X$ lying in a metric space $\mathcal{M}$, so that given a query point $q \in \mathcal{M}$, one can quickly return a point of $X$ minimizing the distance to $q$. The efficiency of such a data structure is evaluated primarily by the amount of space it uses and the time required to answer a query.
We focus on the fast query-time regime, which is crucial for modern large-scale applications, where datasets are massive and queries must be processed online, and is often modeled by query time $\text{poly}(d \log n)$.
Our main result is such a randomized data structure for NNS in $\ell_p$ spaces, $p>2$, that achieves $p^{O(1) + \log\log p}$ approximation with fast query time and $\text{poly}(dn)$ space. Our data structure improves, or is incomparable to, the state-of-the-art for the fast query-time regime from [Bartal and Gottlieb, TCS 2019] and [Krauthgamer, Petruschka and Sapir, FOCS 2025].
Fully Dynamic k-Median with Near-Optimal Update Time and Recourse [arxiv, video]
Ermiya Farokhnejad (University of Warwick)
In metric $k$-clustering, we are given as input a set of $n$ points in a general metric space, and we have to pick $k$ centers and cluster the input points around these chosen centers, so as to minimize an appropriate objective function. In recent years, significant effort has been devoted to the study of metric $k$-clustering problems in a dynamic setting, where the input keeps changing via updates (point insertions/deletions), and we have to maintain a good clustering throughout these updates. The performance of such a dynamic algorithm is measured in terms of three parameters: (i) Approximation ratio, which signifies the quality of the maintained solution, (ii) Recourse, which signifies how stable the maintained solution is, and (iii) Update time, which signifies the efficiency of the algorithm.