SPLICE: Secure Predictive Low-Latency Information Centric Edge for Next Generation Wireless Networks

NSF-Intel ICN-WEN

List of Personnel

1. Panganamala R. Kumar; Texas Engineering Experiment Station; PI

2. Srinivas Shakkottai; Texas Engineering Experiment Station; Co-PI

3. I-Hong Hou; Texas Engineering Experiment Station; Co-PI

4. Ness Shroff; The Ohio State University; Co-PI

5. Atilla Eryilmaz; The Ohio State University; Co-PI

6. Patrick Crowley; Washington University in Saint Louis; Co-PI

7. Y. Charlie Hu; Purdue University; Co-PI

8. Elisa Bertino; Purdue University; Co-PI

9. Romit Roy Choudhury; University of Illinois at Urbana-Champaign; Co-PI

Goals

The proposed research will be conducted across three major inter-related thrusts:

I) Applications for an Information Centric Wireless Edge: This thrust will define the performance criteria and determine the architecture of the two main focus applications of our project, VR/AR and UAV control, and help orient our research toward satisfying their information and latency needs. Key to an effective solution is exploiting multiple dimensions of commonality between instances of these applications through fast content caching and retrieval, and through multicast services provided by an NDN-WEN.

II) Information Centric Networking for the Wireless Edge: This thrust will explore how to move NDN into the wireless domain, forming a layer connecting the Communication Plane to the Application Plane. Our major goals are to design the NDN-WEN architecture, to determine how information exchange via caching would be accomplished, and to design secure data transfer methods. A key novelty is the use of proactive secure caching, which learns the value of applications and ensures they are cached ahead of demand.

III) Wireless Communication for Information Centric Networks: We aim to develop the interface between the backbone and the wireless edge, supporting low-overhead, low-delay, and heterogeneous service requirements. Key to this goal is solving the problem of how to employ the multicasting capability of wireless communication while simultaneously achieving low-delay guarantees and low-complexity feedback mechanisms, via a mix of coding, caching, and scheduling techniques.

Accomplishments

  1. An increasing number of applications that will be supported by next-generation wireless networks require packets to arrive before a certain deadline for the system to achieve the desired performance. While many time-sensitive scheduling protocols have been proposed, few have been experimentally evaluated to establish realistic performance. Furthermore, some of these protocols involve high-complexity algorithms that must run on a per-packet basis. Experimental evaluation of these protocols requires a flexible platform that is readily capable of implementing and experimenting with them. We present PULS, a processor-supported ultra-low-latency scheduling implementation for testing downlink scheduling protocols with ultra-low-latency requirements. In our decoupling architecture, delay-sensitive scheduling protocols are programmed on a host machine, while low-latency mechanisms are deployed in hardware. This enables flexible scheduling policies in software and high hardware function re-usability, while meeting the timing requirements of a MAC. We performed extensive tests on the platform to verify the latencies experienced in per-packet scheduling, and present results showing that packets can be scheduled and transmitted in under 1 ms in PULS.
  2. Typical analysis of content caching algorithms, which uses the metric of steady-state hit probability under a stationary request process, does not account for performance loss under a variable request arrival process. In this work, we instead conceptualize caching algorithms as complexity-limited online distribution learning algorithms, and use this vantage point to study their adaptability from two perspectives: (a) the accuracy of learning a fixed popularity distribution; and (b) the speed of learning items' popularity. To attain this goal, we compute the distance between the stationary distributions of several popular algorithms and that of a genie-aided algorithm that has knowledge of the true popularity ranking, which we use as a measure of learning accuracy. We then characterize the mixing time of each algorithm, i.e., the time needed to attain the stationary distribution, which we use as a measure of learning efficiency. We merge the two measures to obtain a "learning error" representing both how quickly and how accurately an algorithm learns the optimal caching distribution, and use it to determine the trade-off between these two objectives for many popular caching algorithms.
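The genie-aided comparison above can be illustrated with a minimal simulation, written by us for exposition and not drawn from the paper: an LRU cache, which must learn item popularity online, is run against a genie that statically caches the true top-k items, under a Zipf request process. All parameter values here are illustrative assumptions.

```python
import random
from itertools import accumulate
from collections import OrderedDict

def simulate(n=1000, k=100, alpha=0.8, requests=20000, seed=1):
    """Compare LRU against a genie that statically caches the true
    top-k items, under a Zipf(alpha) request process over n items."""
    rng = random.Random(seed)
    weights = [1.0 / (i + 1) ** alpha for i in range(n)]  # Zipf popularity law
    cum = list(accumulate(weights))      # cumulative weights for fast sampling
    genie = set(range(k))                # genie knows the true popularity ranking
    lru = OrderedDict()                  # LRU must learn popularity online
    lru_hits = genie_hits = 0
    for _ in range(requests):
        item = rng.choices(range(n), cum_weights=cum)[0]
        if item in genie:
            genie_hits += 1
        if item in lru:
            lru_hits += 1
            lru.move_to_end(item)        # refresh recency on a hit
        else:
            if len(lru) >= k:
                lru.popitem(last=False)  # evict the least-recently-used item
            lru[item] = True
    return lru_hits / requests, genie_hits / requests

lru_rate, genie_rate = simulate()
print(f"LRU hit rate:   {lru_rate:.3f}")
print(f"genie hit rate: {genie_rate:.3f}")
```

The gap between the two hit rates is a crude stand-in for the learning error discussed above: LRU's stationary behavior only approximates the genie's distribution, and the gap widens when the popularity distribution drifts.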

Significant Results

  1. Using PULS, we implemented four different scheduling policies and provided detailed performance comparisons under various traffic loads and real-time requirements. We show that in certain scenarios, the optimal policy can maintain a loss ratio of less than 1% for packets with deadlines, while other protocols experience loss ratios of up to 65%.
  2. Informed by the results of our analysis, we propose a novel hybrid algorithm, Adaptive-LRU (A-LRU), that learns changes in popularity both faster and more accurately. We show numerically that it also outperforms all other candidate algorithms when confronted with either a dynamically changing synthetic request process or real-world traces.
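The loss ratios reported for deadline-constrained scheduling can be made concrete with a small sketch, ours and not the PULS implementation or any of the four policies evaluated: an earliest-deadline-first scheduler over discrete slots that drops packets whose deadlines have expired. Packet tuples and slot semantics are illustrative assumptions.

```python
import heapq

def edf_schedule(packets, slots):
    """Earliest-deadline-first over discrete slots, one transmission per slot.

    packets: list of (arrival_slot, deadline_slot, pkt_id); a packet counts
    as delivered only if sent in a slot t with t <= deadline_slot.
    Returns (delivered_ids, dropped_ids).
    """
    pending = sorted(packets)            # process arrivals in slot order
    heap, delivered, dropped = [], [], []
    i = 0
    for t in range(slots):
        # admit every packet that has arrived by slot t
        while i < len(pending) and pending[i][0] <= t:
            _, deadline, pid = pending[i]
            heapq.heappush(heap, (deadline, pid))
            i += 1
        # drop packets whose deadlines have already expired
        while heap and heap[0][0] < t:
            dropped.append(heapq.heappop(heap)[1])
        # send the packet with the earliest remaining deadline
        if heap:
            delivered.append(heapq.heappop(heap)[1])
    # whatever is still queued after the horizon has missed its deadline
    dropped.extend(pid for _, pid in heap)
    dropped.extend(pid for _, _, pid in pending[i:])
    return delivered, dropped

# two packets with slot-0 deadlines contend for slot 0, so "b" must miss
delivered, dropped = edf_schedule([(0, 0, "a"), (0, 0, "b"), (0, 2, "c")], slots=3)
print(delivered, dropped)
```

The ratio of dropped to offered packets is the loss ratio in question; the experimental contribution of PULS is making per-packet decisions like these within the sub-millisecond timing budget of a real MAC.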

Publications

  1. "PULS: Processor-Supported Ultra-Low Latency Scheduling", Simon Yau, Ping-Chun Hsieh, Rajarshi Bhattacharyya, Kartic Bhargav K. R., Srinivas Shakkottai, I-Hong Hou, and P. R. Kumar, in ACM MobiHoc 2018.
  2. "Accurate Learning or Fast Mixing? Dynamic Adaptability of Caching Algorithms", Jian Li, Srinivas Shakkottai, John C. S. Lui, and Vijay Subramanian, to appear in IEEE Journal on Selected Areas in Communications, 2018.