CSC 466 Project
TCP congestion control is crucial for maintaining Internet stability and performance. The core problem it addresses is regulating each sender’s rate to avoid congestive collapse while efficiently utilizing network capacity and sharing it fairly among users.
Over the years, many congestion control algorithms (CCAs) have been developed to address this problem under evolving network conditions. The classic Reno algorithm established the AIMD (Additive-Increase, Multiplicative-Decrease) approach, which proved effective in the early Internet. However, as link speeds and propagation delays grew, Reno struggled to fully utilize capacity on fast long-distance links. Loss-based CCAs like Reno keep increasing their sending rate until packet loss indicates a full buffer, which means they inherently create queues and latency. Newer loss-based CCAs such as CUBIC introduced more aggressive window-growth strategies to perform better on high-bandwidth or high-RTT paths. Most recently, model-based CCAs like Google’s BBR and its successor BBRv2 take a very different approach, aiming to deliver high throughput and low latency by controlling congestion based on measured bandwidth and delay rather than packet loss. Each of these algorithms has strengths and limitations, especially when network conditions vary.
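For reference, a minimal Python sketch of the Reno-style AIMD rule (illustrative only; real TCP stacks also implement slow start, fast recovery, and actual loss detection):

```python
def aimd_update(cwnd: float, loss_detected: bool,
                add_step: float = 1.0, decrease_factor: float = 0.5) -> float:
    """Reno-style AIMD: grow the congestion window by one segment per RTT,
    and halve it when packet loss is detected."""
    if loss_detected:
        return max(1.0, cwnd * decrease_factor)  # multiplicative decrease
    return cwnd + add_step                       # additive increase (per RTT)
```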
In this project, we examine CUBIC, BBRv2, and BBRv3, comparing their performance in terms of throughput, latency, fairness, and compatibility. We review existing studies to highlight how each behaves under diverse conditions, identify gaps or weaknesses in today’s solutions, and discuss how BBRv2’s parameters might be tuned for better fairness without undue performance loss. We will also compare our tuned configuration with BBRv3 to see whether it achieves similar or even better results.
Weeks 1–2 (Feb 7 – Feb 21): Setup and Paper Review
Paper & Theory: During these weeks we will review research papers, articles, and talks on TCP congestion control, focusing on CUBIC, BBRv2, and BBRv3. We will summarize their strengths and weaknesses and identify the key parameters cited for tuning.
Environment Setup: We will also set up the test environment on a local machine, using Linux VMs as the basic testbed and tc (netem) to emulate network conditions such as delay, random loss, and limited bandwidth. We will additionally obtain access to a cloud VM provider so that our solution can be tested over the public Internet.
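A minimal sketch of how the emulated conditions could be scripted, assuming a placeholder interface name (eth0) and netem as the root qdisc:

```python
import subprocess

def set_netem(iface: str = "eth0", delay_ms: int = 50,
              loss_pct: float = 1.0, rate_mbit: int = 100) -> None:
    """Apply emulated delay, random loss, and a rate limit on one interface.
    Uses `tc qdisc replace` so repeated calls simply update the settings."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", iface, "root", "netem",
         "delay", f"{delay_ms}ms",
         "loss", f"{loss_pct}%",
         "rate", f"{rate_mbit}mbit"],
        check=True,
    )

def clear_netem(iface: str = "eth0") -> None:
    """Remove the emulated conditions and restore the default qdisc."""
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)
```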
Preliminary Tests: Conduct a few simple tests (e.g., CUBIC vs. BBRv2 under light packet loss) to reproduce known behavior and confirm that the testbed works as expected.
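A minimal sketch of such a run, assuming an iperf3 server at a placeholder address and a kernel that exposes the algorithms under test via iperf3's -C option (the BBRv2/BBRv3 algorithm names depend on the kernel build):

```python
import json
import subprocess

SERVER = "192.0.2.10"  # placeholder test-server address

def run_iperf(cca: str, duration_s: int = 30) -> float:
    """Run a single iperf3 flow with the given congestion control
    algorithm and return the receiver-side goodput in Mbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-t", str(duration_s), "-C", cca, "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    bps = json.loads(out)["end"]["sum_received"]["bits_per_second"]
    return bps / 1e6

for cca in ("cubic", "bbr"):  # algorithm names as registered by the kernel
    print(cca, f"{run_iperf(cca):.1f} Mbit/s")
```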
Weeks 3–4 (Feb 21 – Mar 7): Experiments and Data Collection
Execute Baseline Experiments: Conduct the planned comparative experiments for CUBIC, BBRv2, and BBRv3 under various conditions with default settings as the baseline (a scripted sweep is sketched after this list). This includes:
- Single-flow throughput/latency tests for each algorithm
- Two-flow competition tests
- Different RTT combinations
- Random loss scenarios
- Different bottleneck buffer sizes
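A minimal sketch of how these scenarios could be swept automatically, reusing the placeholder server and interface names from above; the set of available algorithm names depends on the kernel build:

```python
import itertools
import subprocess

SERVER = "192.0.2.10"   # placeholder test-server address
IFACE = "eth0"          # placeholder interface

ALGOS   = ["cubic", "bbr"]    # plus BBRv2/BBRv3 names where the kernel provides them
DELAYS  = [10, 50, 100]       # one-way delay in ms (RTT combinations)
LOSSES  = [0.0, 0.1, 1.0]     # random loss in %
BUFFERS = [100, 1000]         # bottleneck queue limit in packets

def configure(delay_ms: int, loss_pct: float, limit_pkts: int) -> None:
    """Set the emulated delay, random loss, and queue limit with netem."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
         "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%",
         "limit", str(limit_pkts)],
        check=True,
    )

for algo, delay, loss, buf in itertools.product(ALGOS, DELAYS, LOSSES, BUFFERS):
    configure(delay, loss, buf)
    log = f"{algo}_d{delay}_l{loss}_b{buf}.json"
    with open(log, "w") as f:
        subprocess.run(["iperf3", "-c", SERVER, "-t", "60",
                        "-C", algo, "-J"], stdout=f, check=True)
```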
BBRv2 Tuning Experiments: Based on the initial results (which are expected to confirm the fairness problems reported for default BBRv2), start testing the tuning strategies: adjust one parameter at a time and measure its impact. For example, run a 1 BBRv2 vs. 1 CUBIC test with loss_thresh = 2%, then 1%, and so on.
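A rough sketch of such a sweep, assuming the out-of-tree BBRv2 kernel module registers as bbr2 and exposes loss_thresh as a module parameter under /sys/module (the exact path, units, and availability are assumptions and depend on the build):

```python
import subprocess
from pathlib import Path

SERVER = "192.0.2.10"  # placeholder test-server address
PARAM = Path("/sys/module/tcp_bbr2/parameters/loss_thresh")  # assumed location

# loss_thresh is expressed internally as a fraction of BBR_UNIT (256),
# so 2% is roughly 5 and 1% roughly 3 -- verify against the running build.
for pct, raw in [(2, 5), (1, 3)]:
    PARAM.write_text(str(raw))
    log = f"bbr2_vs_cubic_lossthresh_{pct}pct.json"
    # One BBRv2 flow; a competing CUBIC flow (-C cubic) is launched at the
    # same time from a second client so the two share the bottleneck.
    with open(log, "w") as f:
        subprocess.run(["iperf3", "-c", SERVER, "-t", "120",
                        "-C", "bbr2", "-J"], stdout=f, check=True)
```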
Public Internet Test: In parallel, run a couple of long iperf tests over the real Internet to check whether the observed behavior aligns with the testbed results.
Weeks 5–6 (Mar 7 – Mar 21): Analysis, Refinement, and Presentation Prep
Data Analysis: Perform an in-depth analysis of all experiment data. Compute all metrics for each scenario and generate comparison tables and graphs (a sketch of the fairness computation is given below).
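A minimal sketch of the fairness computation, assuming the iperf3 JSON logs produced by the experiment scripts above; Jain's fairness index for throughputs x_1..x_n is (Σx)² / (n · Σx²):

```python
import json
from pathlib import Path

def goodput_mbps(log_path: Path) -> float:
    """Receiver-side goodput of one iperf3 run, in Mbit/s."""
    data = json.loads(log_path.read_text())
    return data["end"]["sum_received"]["bits_per_second"] / 1e6

def jain_fairness(throughputs: list[float]) -> float:
    """Jain's fairness index: 1.0 = perfectly fair, 1/n = one flow takes all."""
    n = len(throughputs)
    total = sum(throughputs)
    return (total * total) / (n * sum(x * x for x in throughputs))

# Example: fairness of a two-flow competition scenario (placeholder file names)
flows = [goodput_mbps(Path(p)) for p in ("bbr2_flow1.json", "cubic_flow2.json")]
print("throughputs:", flows, "fairness:", round(jain_fairness(flows), 3))
```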
Draw Conclusions: Synthesize the findings: which algorithm is best under which conditions, and where does each fail? Determine whether our parameter tuning achieved its goal.
Real-world Implementation Insights: Reflect on practical aspects of implementing these tunings.
Finalize Presentation: Start creating the final presentation. This will include background (key problem, algorithm overviews), our methodology, and our findings with supporting graphs.