Shinik Park, Sanghyun Han, Junseon Kim, Jongyun Lee, Sangtae Ha, and Kyunghan Lee* [Paper]
Abstract: Network fluctuations can cause unpredictable degradation of the user’s quality of experience (QoE) in real-time video streaming. The intrinsic property of real-time video streaming, which generates delay-sensitive and chunk-based video frames, makes the situation even more complicated. Although previous approaches have tried to alleviate this problem by controlling the video bitrate based on the current network capacity estimate, they do not take into account the explicit queueing delay experienced by each video frame when determining the bitrate of upcoming video frames. To tackle this problem, we propose a new real-time video streaming system, Exstream, that can adapt to dynamic network conditions with the help of a video bitrate control method and a bandwidth estimation method designed for real-time video streaming environments. Exstream explicitly estimates the queueing delay experienced by a video frame based on the transmission time budget that each frame can maximally utilize, which depends on the frame generation interval, and adjusts the bitrate of newly generated video frames to suppress the queueing delay to a level close to zero. Our comprehensive experiments demonstrate that Exstream achieves lower frame delay than four existing systems, Salsify, WebRTC, Skype, and Hangouts, without frequent video frame skips.
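The core idea, estimating queueing delay against a per-frame transmission time budget and steering the bitrate toward zero queueing delay, can be illustrated with a minimal sketch. This is not the paper's algorithm; all function names, the 30 fps budget, and the control gain are hypothetical.

```python
# Hypothetical sketch of budget-based queueing-delay estimation and bitrate
# control in the spirit of Exstream. Names and constants are illustrative.

FRAME_INTERVAL = 1.0 / 30.0  # transmission time budget per frame at 30 fps (s)

def estimate_queueing_delay(send_start, ack_time):
    """Queueing delay = time the frame spent beyond its transmission budget."""
    transmission_time = ack_time - send_start
    return max(0.0, transmission_time - FRAME_INTERVAL)

def next_bitrate(current_bitrate, queueing_delay, gain=0.5):
    """Shrink the bitrate in proportion to the observed queueing delay,
    driving the delay toward zero; probe gently upward when no queue builds."""
    if queueing_delay > 0.0:
        return current_bitrate / (1.0 + gain * queueing_delay / FRAME_INTERVAL)
    return current_bitrate * 1.05  # cautious upward probe

# Example: a frame that took 50 ms against the ~33 ms budget.
delay = estimate_queueing_delay(send_start=0.0, ack_time=0.050)
rate = next_bitrate(2_000_000, delay)
```

The sketch shows why the budget matters: any transmission time beyond one frame interval means the next frame will queue behind the current one, so the encoder must slow down before the backlog compounds.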
Hyoyoung Lim, Jinsung Lee, Jongyun Lee, Sandesh Dhawaskar Sathyanarayana, Junseon Kim, Anh Nguyen, Kwang Taik Kim, Youngbin Im, Mung Chiang, Dirk Grunwald, Kyunghan Lee, Sangtae Ha* [Paper]
Abstract: In this paper, we conduct a measurement study on operational 5G networks deployed across different frequency bands (mmWave and sub-6GHz) and server locations (mobile edge and Internet cloud). Specifically, we assess 5G performance in both uplink and downlink across multiple operators’ networks. We then carry out extensive comparisons of transport-layer protocols using ten different algorithms in full-fledged 5G networks, including an edge computing environment. Finally, we evaluate representative mobile applications over the 5G network with and without edge servers. Our comprehensive measurements provide several insights that affect the experience of 5G users: (i) With a 5G edge server, existing TCP congestion control algorithms can achieve throughput up to 1.8 Gbps with only a single flow. (ii) The maximum TCP receive buffer size, which is set by off-the-shelf 5G phones, can limit the throughput performance of 5G networks, which is not observed in 4G LTE-A networks. (iii) Despite significant latency gains in download-centric applications, the 5G edge service provides limited benefits to CPU-intensive tasks or those that use significant uplink bandwidth. To our knowledge, this is the first measurement-driven understanding of 5G edge computing “in the wild,” which can provide an answer to how edge computing would perform in real 5G networks.
Junseon Kim, Byonghyo Shim, and Kyunghan Lee* [Paper]
Abstract: A blueprint for ultra-low latency in 5G cellular networks is designed to enable ultra-reliable low-latency communication services that require fast delivery of small data units (e.g., packets or frames). However, futuristic applications that are envisioned to be time-critical demand much more than what this blueprint can handle because their data units are typically very large. As the data size increases, the latency is greatly affected by how much data the network can transmit per unit time (i.e., bandwidth). This impact of bandwidth on latency creates the need to guarantee the bandwidth required to keep the latency within the desired time. In 5G, network slicing is introduced to guarantee performance, but how to handle the inherent variation of data unit size and radio channel quality over time remains an open question. In this article, for a detailed understanding, we first discuss end-to-end latency at the application level from the perspective of first-byte delay and transmission delay for the remaining bytes. We then investigate how recent techniques relate to end-to-end latency reduction, and present the challenging issues that arise from their inherent limitations. To handle these issues, we propose a new design for next-generation (6G) cellular networks to provide performance guarantees, and discuss open issues in our network design.
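The latency decomposition discussed in the article (first-byte delay plus transmission delay for the remaining bytes) also explains why large data units need a bandwidth guarantee. A small worked sketch, with hypothetical numbers:

```python
# Illustrative decomposition of application-level end-to-end latency into
# first-byte delay plus transmission delay for the remaining bytes.
# The function names and numbers are hypothetical, not from the article.

def e2e_latency(first_byte_delay_s, data_bytes, bandwidth_bps):
    """Total latency = first-byte delay + time to transmit the data unit."""
    return first_byte_delay_s + 8 * data_bytes / bandwidth_bps

def required_bandwidth(deadline_s, first_byte_delay_s, data_bytes):
    """Bandwidth that must be guaranteed for a data unit to meet its deadline."""
    budget = deadline_s - first_byte_delay_s
    assert budget > 0, "deadline tighter than the first-byte delay alone"
    return 8 * data_bytes / budget

# A 1 MB data unit with a 10 ms first-byte delay and a 30 ms deadline
# leaves a 20 ms transmission budget, i.e., 8e6 bits / 0.02 s = 400 Mbps.
bw = required_bandwidth(0.030, 0.010, 1_000_000)
```

For small packets the first term dominates and the 5G low-latency blueprint suffices; for megabyte-scale data units the second term dominates, which is why the article argues for bandwidth guarantees rather than latency guarantees alone.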
Abstract: Unexpectedly large packet delays are often observed in cellular networks due to heavy network queuing caused by excessive traffic entering the network. To deal with the large-queue problem, many congestion control algorithms try to find out how much traffic the network can accommodate, either by measuring network performance or by obtaining explicit information directly from the network. However, because existing algorithms either must observe queue growth before reacting or require modifications to the overall network architecture, they have difficulty keeping queues within a strict bound. In this paper, we propose ECLAT, a novel congestion control algorithm based on simple feedback, which can provide a bounded queuing delay using only the one-bit signaling already available in the traditional network architecture. To do so, a base station or a router running ECLAT 1) calculates how many packets each flow should transmit and 2) determines when congestion feedback needs to be forwarded to adjust the flow’s packet transmission to the desired rate. Our extensive experiments on our testbed demonstrate that ECLAT achieves strict queuing delay bounds, even in dynamic cellular network environments.
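The two steps the bottleneck performs, computing each flow's share and deciding when to set the one-bit feedback, can be sketched roughly as follows. This is a simplified illustration in the spirit of ECLAT, not the paper's algorithm; the equal-share policy and the headroom factor are assumptions.

```python
# Minimal sketch (not the paper's algorithm) of bounded-delay control with
# one-bit feedback: the bottleneck computes each flow's share of capacity
# and sets a congestion bit whenever a flow's rate exceeds that share.

def per_flow_rate(capacity_bps, n_flows):
    """Target transmission rate for each flow under equal sharing
    (the actual allocation policy in the paper may differ)."""
    return capacity_bps / n_flows

def should_signal(flow_rate_bps, target_bps, headroom=1.05):
    """Return True when the one-bit congestion feedback should be forwarded,
    i.e., when the flow exceeds its target beyond a small headroom."""
    return flow_rate_bps > target_bps * headroom

target = per_flow_rate(100_000_000, 4)    # 25 Mbps per flow
mark = should_signal(30_000_000, target)  # flow overshoots its share
```

The point of signaling against a precomputed target, rather than against observed queue growth, is that the queue never needs to build up before the sender is told to slow down, which is what makes a strict delay bound possible.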
Shinik Park, Jinsung Lee, Junseon Kim, Jihoon Lee, Sangtae Ha, Kyunghan Lee* [Paper]
Abstract: Since the diagnosis of severe bufferbloat in mobile cellular networks, a number of low-latency congestion control algorithms have been proposed. However, due to the need for continuous bandwidth probing over dynamic cellular channels, existing mechanisms are designed to cyclically overload the network. As a result, their latency inevitably deviates from the smallest possible level (i.e., the minimum RTT). To tackle this problem, we propose a new low-latency congestion control, ExLL, which can adapt to dynamic cellular channels without overloading the network. To do so, we develop two novel techniques that run on the cellular receiver: 1) cellular bandwidth inference from the downlink packet reception pattern and 2) minimum RTT calibration from inference of the uplink scheduling interval. Furthermore, we incorporate the control framework of FAST into ExLL’s cellular-specific inference techniques. Hence, ExLL can precisely control its congestion window so as not to overload the network unnecessarily. Our implementation of ExLL on Android smartphones demonstrates that ExLL keeps latency much closer to the minimum RTT than other low-latency congestion control algorithms in both static and dynamic channels of LTE networks.
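The two receiver-side ideas, inferring bandwidth from the reception pattern and using a FAST-style window update, can be sketched as below. This is a hypothetical illustration, not ExLL's implementation; the burst-based inference and the simplified FAST update (without FAST's additive alpha term) are assumptions.

```python
# Hypothetical sketch of receiver-side bandwidth inference and FAST-style
# window control in the spirit of ExLL; names and constants are illustrative.

def infer_bandwidth(bytes_in_burst, burst_interval_s):
    """Infer downlink bandwidth from the packet reception pattern:
    bytes delivered in one scheduling burst over the burst duration."""
    return 8 * bytes_in_burst / burst_interval_s  # bits per second

def fast_window(current_cwnd, min_rtt_s, last_rtt_s, gamma=0.5):
    """Simplified FAST-style update: pull the congestion window toward the
    value implied by the min RTT / observed RTT ratio, so the window shrinks
    as soon as the RTT rises above its floor, without probing by overload."""
    target = current_cwnd * min_rtt_s / last_rtt_s
    return (1 - gamma) * current_cwnd + gamma * target

# 125 KB received in a 1 ms scheduling burst implies ~1 Gbps of capacity.
bw = infer_bandwidth(bytes_in_burst=125_000, burst_interval_s=0.001)
cwnd = fast_window(current_cwnd=100, min_rtt_s=0.020, last_rtt_s=0.025)
```

The design point the sketch captures is that both inputs are observable at the receiver without injecting extra traffic, which is why this style of control can avoid the cyclic overload that probing-based algorithms rely on.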