Abstract
The Steiner Tree Problem (STP) is a well-known NP-hard problem in graph theory that aims to find the minimum-weight tree spanning a specified subset of vertices, referred to as terminals. This thesis presents a comprehensive analysis of a sequential implementation of the Kou, Markowsky, and Berman (KMB) algorithm, a prominent polynomial-time approximation method for the STP. We provide a detailed exploration of the KMB algorithm’s three-phase process. First, the shortest paths between all pairs of terminal nodes are computed, serving as the foundation for the subsequent steps. Next, a minimum spanning tree (MST) is constructed over these shortest-path distances, connecting all terminals with minimal total edge weight. Finally, the edges of the MST are replaced by the corresponding shortest paths in the original graph, yielding a Steiner tree that approximates the optimal solution. Our sequential implementation is rigorously evaluated on a variety of benchmark datasets, including both small- and large-scale instances, demonstrating its effectiveness and efficiency in producing high-quality solutions within reasonable time constraints. We also address the computational complexity of the KMB algorithm, particularly on large graphs, and identify key bottlenecks that may hinder performance. Through this analysis, we deepen the understanding of the KMB algorithm’s behavior in a sequential context and highlight its practical applicability to real-world scenarios such as network design and VLSI layout. Ultimately, this thesis contributes to the development of efficient solutions for the STP by providing a robust sequential approach that balances solution quality with computational efficiency, paving the way for further advancements in this area of graph theory.
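As a concrete illustration of the three phases, the following Python sketch approximates a Steiner tree along the same lines; it assumes the networkx library and a connected graph whose edges carry a "weight" attribute, and the function name kmb_steiner_tree is my own choice rather than anything prescribed by the thesis.

import networkx as nx
from itertools import combinations

def kmb_steiner_tree(G, terminals):
    # Phase 1: shortest paths between every pair of terminals
    # (the metric closure restricted to the terminal set).
    dist, path = {}, {}
    for t in terminals:
        d, p = nx.single_source_dijkstra(G, t, weight="weight")
        dist[t], path[t] = d, p
    closure = nx.Graph()
    for u, v in combinations(terminals, 2):
        closure.add_edge(u, v, weight=dist[u][v])

    # Phase 2: minimum spanning tree of the terminal closure.
    mst = nx.minimum_spanning_tree(closure, weight="weight")

    # Phase 3: expand each MST edge into its shortest path in the original graph.
    steiner = nx.Graph()
    for u, v in mst.edges():
        p = path[u][v]
        for a, b in zip(p, p[1:]):
            steiner.add_edge(a, b, weight=G[a][b]["weight"])
    return steiner

The returned subgraph connects all terminals; implementations often add a final pruning step (not shown) that removes non-terminal leaves, and the classical analysis bounds the result at less than twice the optimal weight.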
Greedy Algorithms and Matroids
I will introduce the concept of matroids, a powerful combinatorial structure that generalizes the notion of independence beyond linear algebra. Starting with intuitive examples from graph theory and linear independence, I will build up to the formal definition of matroids. A key focus will be on understanding the interplay between matroids and greedy algorithms. I will present and prove a central result: the greedy algorithm yields an optimal solution for all weight functions if and only if the underlying structure is a matroid. To illustrate this in practice, I will conclude with an application to Kruskal’s algorithm for finding minimum spanning trees, showing how it operates as a greedy algorithm over a graphic matroid.
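To make that connection concrete, here is a minimal Python sketch (my own illustration, not part of the talk) of the generic greedy scheme instantiated on the graphic matroid: scan edges in order of weight and keep an edge whenever the current selection stays independent, which for graphs means acyclic and is checked with a union-find structure.

class DisjointSet:
    # Union-find over the vertex set; two endpoints with the same root
    # means the candidate edge would close a cycle.
    def __init__(self, vertices):
        self.parent = {v: v for v in vertices}

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return False  # edge would close a cycle: not independent in the graphic matroid
        self.parent[ru] = rv
        return True

def kruskal_mst(vertices, edges):
    # Greedy over the graphic matroid; edges are (weight, u, v) tuples.
    ds = DisjointSet(vertices)
    tree = []
    for w, u, v in sorted(edges):
        if ds.union(u, v):
            tree.append((u, v, w))
    return tree

Replacing the acyclicity test with an arbitrary independence oracle turns this into the general matroid greedy algorithm whose optimality characterization the talk establishes.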
Online Convex Optimization
In this talk, we will explore the framework of online decision making and its powerful extension to convex optimization. An offline algorithm sees its entire input sequence up front and plans the best actions in hindsight—like plotting a complete road trip on a map before you leave. An online algorithm, by contrast, must operate “one step at a time”: at each round it receives a new input, makes an irrevocable decision without knowing the future, and then learns the outcome before moving on.
To ground this idea, we’ll examine two or three canonical examples—most notably the “experts” problem, where we allocate weight across different advisors whose daily performance we only discover after each round. Building on this intuition, we introduce the online convex optimization (OCO) setting, in which decisions live in a convex set and feedback arrives via convex loss functions, and we’ll see how the experts problem embeds naturally here.
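As a small, hedged illustration of the experts setting, the Python sketch below uses the multiplicative-weights (Hedge) update rule; this particular rule is my own choice for illustration and is not necessarily the algorithm covered in the talk.

import math

def hedge(loss_rounds, n_experts, eta=0.5):
    # One weight per advisor; each round's losses (in [0, 1]) are revealed
    # only after the allocation for that round has been committed.
    weights = [1.0] * n_experts
    total_loss = 0.0
    for losses in loss_rounds:
        z = sum(weights)
        probs = [w / z for w in weights]  # irrevocable allocation for this round
        total_loss += sum(p * l for p, l in zip(probs, losses))
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total_loss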
The core of our discussion will be the Online Gradient Descent (OGD) algorithm. We’ll first define a clear performance metric that quantifies how well any online algorithm performs. Then we’ll present OGD’s simple update rule and show that, under this metric, its cumulative loss grows at the same rate as the fundamental lower bound for the OCO model—demonstrating that OGD is essentially as good as any online strategy can be.
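For reference, the performance metric in question is usually called regret, and OGD’s update is a projected gradient step. In one standard notation (mine, not necessarily the talk’s), with convex decision set K, loss functions f_t, gradient bound G, and diameter D:

\[
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in K} \sum_{t=1}^{T} f_t(x),
\qquad
x_{t+1} \;=\; \Pi_K\!\left( x_t - \eta_t \nabla f_t(x_t) \right).
\]

With step sizes \eta_t proportional to 1/\sqrt{t}, OGD achieves \mathrm{Regret}_T = O(GD\sqrt{T}), which matches the \Omega(\sqrt{T}) lower bound for OCO up to constant factors.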
Time permitting, we’ll conclude with a brief survey of extensions and variations—such as
adaptive step‐sizes and mirror descent—that build on the OGD template and further enrich
the online convex toolbox.
Abstract
One of the canonical worst-case NP-hard problems is finding a maximum independent set (MIS) in a graph. One way to cope with NP-hardness is to devise approximation algorithms, where the goal is to guarantee a solution within a small factor of the optimal one. But MIS is notoriously hard in the sense that it does not admit even a modestly good approximation algorithm. Coloring and the independent set problem go hand in hand: one can think of coloring as “packing” a graph with as few independent sets as possible. Unsurprisingly, coloring is just as hard. In fact, even when the graph is promised to be 3-colorable, the best known coloring algorithm requires O(n^0.19747) colors, and it is NP-hard to color a 3-colorable graph with even 4 colors! In this talk, we will see a powerful technique called semidefinite programming that gives a polynomial-time algorithm for coloring a 3-colorable graph with Õ(n^1/4) colors.
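For concreteness, one standard semidefinite relaxation behind results of this kind is the vector-coloring program of Karger, Motwani, and Sudan; the exact formulation and rounding presented in the talk may differ, but the following sketch captures the idea:

\[
\text{find unit vectors } v_1, \dots, v_n \in \mathbb{R}^n \quad \text{such that} \quad \langle v_i, v_j \rangle \le -\tfrac{1}{2} \ \text{ for every edge } (i, j).
\]

Every 3-colorable graph admits such vectors (map the three color classes to three unit vectors at pairwise 120-degree angles), the relaxation is solvable in polynomial time, and rounding the resulting vectors, for instance with random hyperplanes, is what leads to colorings with roughly Õ(n^1/4) colors.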