Research

Computation/Data Offloading in Cloud/Edge Computing System

  • Smart devices usually have their own local processor, but its processing power is limited, and depending on the computing task, local execution can incur high energy consumption and latency. Fortunately, smart devices can borrow computing power from an edge server, such as a home server, or from a commercial cloud computing service, and thereby save processing latency or energy. However, computation offloading must be chosen carefully, depending on the state of the mobile network and the characteristics of the task to be processed. In this research, I studied under which conditions computation offloading is preferable to local computing. To make this decision, I considered many factors: the state of the LTE and WiFi networks, the characteristics of the computing tasks generated by different kinds of mobile applications, the resource utilization of the mobile device and the cloud server, and the financial cost of using the LTE network and the cloud computing service. I then proposed an algorithm by which the smart device controls its offloading policy, the network interface to use, and the processing speed of the local device. Finally, I implemented a prototype of my algorithm on an Android smartphone assisted by the MS Azure cloud computing service, and showed that it yields a remarkable gain in the smartphone's processing latency and energy consumption.
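
  The core trade-off above can be sketched as a simple cost comparison. This is a minimal illustration, not the actual algorithm from the research: all parameters (CPU frequencies, powers, uplink rates, prices) are hypothetical, and the policy shown (offload only when both latency and energy improve) is one plausible rule among many.

  ```python
  # Sketch of an offloading decision: compare estimated latency and energy
  # for local execution vs. offloading over a given network interface.
  # All numeric parameters are illustrative assumptions, not measured values.
  from dataclasses import dataclass

  @dataclass
  class Task:
      cycles: float        # CPU cycles required by the task
      input_bits: float    # data to upload when offloading

  @dataclass
  class Network:
      uplink_bps: float    # current uplink rate
      tx_power_w: float    # radio transmit power
      price_per_bit: float # monetary cost (e.g. LTE data plan); 0 for WiFi

  def local_cost(task, cpu_hz, cpu_power_w):
      latency = task.cycles / cpu_hz
      energy = cpu_power_w * latency
      return latency, energy

  def offload_cost(task, net, server_hz):
      tx_latency = task.input_bits / net.uplink_bps
      latency = tx_latency + task.cycles / server_hz
      energy = net.tx_power_w * tx_latency   # device only pays for transmission
      money = net.price_per_bit * task.input_bits
      return latency, energy, money

  def should_offload(task, cpu_hz, cpu_power_w, net, server_hz):
      l_lat, l_en = local_cost(task, cpu_hz, cpu_power_w)
      o_lat, o_en, _ = offload_cost(task, net, server_hz)
      # Simple policy: offload only when it saves both latency and energy.
      return o_lat < l_lat and o_en < l_en
  ```

  Under this model, a compute-heavy task with a small input favors offloading over a fast WiFi link, while a task with a large input over a slow LTE uplink is better run locally.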


Content Caching in Cloud/Edge Computing System

  • In most cases, popular content from YouTube or Netflix is cached by a CDN such as Akamai or AWS CloudFront, because the origin servers are far from the clients. With the help of content caching, smart devices can enjoy the content service from the CDN with low latency. From the perspective of content service providers such as YouTube and Netflix, caching also lets them serve many more customers without scaling out their own infrastructure. In addition, such content can be cached by edge servers even closer to the clients, for example a home server or a base station equipped with storage. In this research, I proposed a method to utilize both cloud caches and edge caches efficiently and simultaneously. This is quite a complex problem, because cloud caches and edge caches have different pros and cons, and they must be considered jointly. For example, if content A is already cached by most edge servers, then on the cloud side it is better to avoid caching the same content, because requests for content A will be hit at most edge servers anyway. Finally, I showed that my hybrid caching algorithm outperforms existing algorithms based on LRU or LFU in terms of content download latency.
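
  The joint cloud/edge idea from the "content A" example can be sketched as follows. This is not the actual hybrid algorithm from the research, only a toy illustration: each edge server runs a plain LRU cache, and the cloud cache skips admitting content that most edges already hold. The 50% "widely cached" threshold and the capacities are assumptions.

  ```python
  # Toy sketch of joint cloud/edge caching: edge servers run LRU caches,
  # while the cloud cache avoids duplicating content already held by most
  # edges (such requests will hit at the edge anyway). Threshold and
  # capacities are illustrative assumptions.
  from collections import OrderedDict

  class LRUCache:
      def __init__(self, capacity):
          self.capacity = capacity
          self.items = OrderedDict()   # keys in least- to most-recently-used order

      def get(self, key):
          if key in self.items:
              self.items.move_to_end(key)   # mark as most recently used
              return True
          return False

      def put(self, key):
          if key in self.items:
              self.items.move_to_end(key)
          else:
              if len(self.items) >= self.capacity:
                  self.items.popitem(last=False)   # evict least recently used
              self.items[key] = True

  class CloudCache(LRUCache):
      """LRU cache that skips content already held by most edge caches."""
      def __init__(self, capacity, edges, threshold=0.5):
          super().__init__(capacity)
          self.edges = edges
          self.threshold = threshold

      def put(self, key):
          edge_copies = sum(key in e.items for e in self.edges)
          if edge_copies / len(self.edges) >= self.threshold:
              return   # widely cached at the edge; don't duplicate in the cloud
          super().put(key)
  ```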


High Speed Cloud Object Storage System

  • Most services that store large amounts of data depend on a cloud object storage platform on the backend side. For example, Netflix does not operate its own infrastructure; it uses AWS S3, a commercial cloud object storage service, as its content store. Another example is Facebook, one of the most popular SNS services. Facebook runs its own infrastructure and has implemented a private object storage system, called f4, to store its photos and videos. Put simply, cloud object storage is a distributed file system spread across thousands of machines, and it delivers high performance to many customers at once: high availability, durability, and throughput with low latency. In terms of functionality, it supports security, lifecycle management, event triggers, and multiple storage classes with different service levels and prices. In this research, I am designing a system architecture for object storage on cloud infrastructure that enhances throughput, availability, and durability.
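
  The availability/durability idea can be illustrated with a toy in-memory object store: the object key is hashed to choose storage nodes, and each object is replicated on several nodes so that a node failure loses neither data nor availability. The node count, replication factor, and placement scheme here are illustrative assumptions, not the architecture under design.

  ```python
  # Toy sketch of a replicated object store: objects are spread across
  # storage nodes by hashing the key, and each object is written to
  # `replicas` consecutive nodes so any live replica can serve a read.
  # Node count and replication factor are illustrative assumptions.
  import hashlib

  class ObjectStore:
      def __init__(self, num_nodes=8, replicas=3):
          self.nodes = [dict() for _ in range(num_nodes)]   # each dict = one node
          self.replicas = replicas

      def _placement(self, key):
          # Deterministic placement: hash the key to a start node, then
          # take the next `replicas` nodes around the ring.
          h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
          start = h % len(self.nodes)
          return [(start + i) % len(self.nodes) for i in range(self.replicas)]

      def put(self, key, value):
          for n in self._placement(key):
              self.nodes[n][key] = value

      def get(self, key):
          for n in self._placement(key):
              if key in self.nodes[n]:   # any surviving replica serves the read
                  return self.nodes[n][key]
          return None
  ```

  With three replicas, wiping out any single node leaves the object readable, which is the basic mechanism behind the availability and durability guarantees mentioned above.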


Processing/Network/Storage Resource Orchestration System

  • The previous topics covered several services: the traditional network service, the computation offloading service, and the content caching service. There is one more promising class of services, called network function virtualization. In the past, such services were implemented on dedicated hardware, but nowadays they can be softwarized using virtualization techniques; in other words, they can be implemented on commodity machines as VMs or containers. My research goal is to unify these different kinds of services within one system framework. In detail, I model the heterogeneous infrastructure as one big resource pool composed of many devices and machines containing network, computing, and storage resources. Each kind of service can then be modeled as a subset of this resource pool. On top of this model, I want to optimize multi-resource orchestration: deciding which part of the resource pool should be clustered to serve an incoming service request. To solve this problem, I would like to apply reinforcement learning to improve the quality of experience. Finally, I am planning to implement a prototype of my system framework on Kubernetes, a container orchestration system. Kubernetes already manages multiple machines as a resource pool, so I plan to customize the Kubernetes scheduler to incorporate my framework.
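
  The resource-pool model can be sketched with a simple placement function. This is only an illustration of the modeling, not the planned system: a greedy best-fit rule stands in for the reinforcement-learning policy, and the node names, capacities, and the weights used to compare leftover resources are all assumptions.

  ```python
  # Sketch of the resource-pool model: heterogeneous machines expose CPU,
  # network, and storage capacity, and a service request is placed on a
  # node that has enough of every resource. A greedy best-fit rule stands
  # in for the RL-based policy; all capacities and weights are illustrative.
  from dataclasses import dataclass

  @dataclass
  class Node:
      name: str
      cpu: float    # available CPU cores
      net: float    # available bandwidth (Mbps)
      disk: float   # available storage (GB)

  def place(request, pool):
      """Pick the feasible node with the least leftover capacity (best fit)."""
      feasible = [n for n in pool
                  if n.cpu >= request["cpu"]
                  and n.net >= request["net"]
                  and n.disk >= request["disk"]]
      if not feasible:
          return None   # no single node can host this request
      # Weighted leftover capacity; the /100 scaling is an arbitrary choice
      # to make cores, Mbps, and GB roughly comparable.
      best = min(feasible, key=lambda n: (n.cpu - request["cpu"])
                 + (n.net - request["net"]) / 100
                 + (n.disk - request["disk"]) / 100)
      best.cpu -= request["cpu"]
      best.net -= request["net"]
      best.disk -= request["disk"]
      return best.name
  ```

  Under best fit, a small request lands on a small edge node and a large request falls through to the cloud node, which mirrors the intended clustering of resource-pool subsets per service.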