High-performance computing centers such as XSEDE, as well as industrial-scale data centers around the world, are connected by WANs of hundreds of Gbps and need proper optimization to utilize that bandwidth. Many ML-based big data analytics workloads running on these distributed systems should be able to intelligently schedule and allocate end-system and network resources to achieve high utilization and meet application requirements. I am interested in exploring challenges in software-defined WAN systems and in extending this work to Internet scale in the domain of edge, mobile, and IoT networks. I believe that scalable data mining and ML-based solutions can efficiently address such challenges. However, these multi-domain systems raise privacy concerns, and sharing private log information can be a major barrier. A natural solution is to explore federated learning approaches that process historical data locally at end-system edges and share only the processed knowledge, mitigating both privacy and scalability issues. It is a new area of machine learning with great possibilities that I am keen to explore.
We have shown many promising results in improving the performance of a wide range of systems, including cloud systems, high-performance computing, mobile devices, and virtualized network functions. We are mainly interested in deep reinforcement learning to control the real-time performance of these systems.
Network Function Virtualization (NFV) platforms consume significant energy, introducing high operational costs in edge and data centers. In this work, we present GreenNFV, a novel framework that optimizes resource usage for network function chains using deep reinforcement learning. GreenNFV tunes resource parameters such as the CPU sharing ratio, CPU frequency scaling, last-level cache (LLC) allocation, DMA buffer size, and packet batch size. It learns a resource scheduling model from benchmark experiments and takes Service Level Agreements (SLAs) into account to adapt resource usage to different throughput and energy consumption requirements. Our evaluation shows that GreenNFV models achieve high transfer throughput and low energy consumption while satisfying various SLA constraints.
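To illustrate the idea, the following is a minimal sketch of an SLA-aware, learning-based knob-tuning loop in the spirit of GreenNFV. The simulated benchmark, reward weights, and knob ranges are illustrative assumptions, not the actual GreenNFV implementation, which uses deep reinforcement learning over a richer parameter space.

```python
# Sketch of SLA-aware resource tuning via simple tabular Q-learning.
# The environment model and reward weights below are assumptions for
# illustration only, not GreenNFV's actual training setup.
import random
from itertools import product

# Hypothetical discretized knobs: CPU frequency (GHz) and packet batch size.
CPU_FREQS = [1.2, 1.8, 2.4, 3.0]
BATCH_SIZES = [32, 64, 128, 256]
ACTIONS = list(product(CPU_FREQS, BATCH_SIZES))

SLA_MIN_THROUGHPUT = 8.0  # Gbps, assumed SLA target


def simulate_nfv_chain(freq, batch):
    """Toy stand-in for a benchmark run: returns (throughput_gbps, energy_watts)."""
    throughput = 3.0 * freq + 0.02 * batch + random.gauss(0, 0.3)
    energy = 20.0 + 15.0 * freq + 0.05 * batch + random.gauss(0, 1.0)
    return throughput, energy


def reward(throughput, energy):
    """Reward high throughput and low energy; penalize SLA violations heavily."""
    r = throughput - 0.1 * energy
    if throughput < SLA_MIN_THROUGHPUT:
        r -= 50.0
    return r


# Stateless Q-learning over the discrete knob combinations.
q = {a: 0.0 for a in ACTIONS}
alpha, epsilon = 0.1, 0.2

for episode in range(2000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)   # explore a random configuration
    else:
        action = max(q, key=q.get)        # exploit the best-known knobs
    tput, watts = simulate_nfv_chain(*action)
    q[action] += alpha * (reward(tput, watts) - q[action])

best = max(q, key=q.get)
print(f"Selected knobs: CPU {best[0]} GHz, batch {best[1]} packets")
```

The same loop structure carries over when the tabular agent is replaced by a deep RL policy and the simulator by real benchmark measurements.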
Edge computing is an emerging networking paradigm that processes data locally rather than overwhelming a central server or cloud system with streams of raw data, sending only relevant, useful data through the network. Even though it reduces bandwidth requirements significantly, I believe there is still a need to balance local processing, relevant data production, and cloud processing. I am also interested in areas such as mobility management of edge services as mobile devices roam across different network partitions, IoT device management, and service management.
Even though the technologies behind Augmented Reality (AR), Virtual Reality (VR), and the metaverse are still in their infancy, they are expected to introduce a massive network load as they mature. The data generated from 3D streaming and many other types of real-time interaction, including human-to-human, human-to-machine, and machine-to-machine, can have complex combinations of requirements (e.g., motion-to-photon latency, interaction latency, perception latency, high reliability, stable data rate, and connection density). Moreover, these applications need seamless synchronization between the physical world and the virtual world. To make the metaverse more ubiquitous, it should be accessible through edge devices via the radio access network. It is therefore necessary to design next-generation edge networks that support mobile users connecting to the metaverse. I am curious to explore these interesting challenges.
Currently, an autonomous vehicle can generate 4 TB of data daily, and the On-Board Unit (OBU) performs all the processing of the data generated by the sensors. The OBU consumes a lot of energy, which leads to shorter battery and OBU lifetimes. A natural solution is to offload data processing to the cloud or the nearest edge system. However, such a solution comes with its own challenges. First, data processing is highly time-critical because the OBU makes real-time decisions. Second, network connectivity between the vehicle and the cloud is challenging due to motion and high velocity. Multiple connectivity options (e.g., Wi-Fi, cellular) might help offload data faster; however, data traveling through multiple channels may experience different transfer rates. We are highly interested in exploring challenges in connected vehicles.
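As a simple illustration of the multi-channel transfer-rate issue, the sketch below splits an offload across links in proportion to their estimated rates so that all links finish at roughly the same time. The link names, rate estimates, and data volume are assumptions for demonstration, not measurements from a real connected-vehicle deployment.

```python
# Sketch of rate-proportional data offloading across multiple links
# (e.g., Wi-Fi and cellular) with different transfer rates.
def split_offload(total_mb, link_rates_mbps):
    """Split `total_mb` of data across links in proportion to their estimated
    rates, so every link finishes its share at approximately the same time."""
    total_rate = sum(link_rates_mbps.values())
    shares = {link: total_mb * rate / total_rate
              for link, rate in link_rates_mbps.items()}
    # Estimated completion time (seconds): identical for every link by design.
    finish_s = total_mb * 8 / total_rate
    return shares, finish_s


if __name__ == "__main__":
    # Assumed instantaneous rate estimates for two links (Mbps).
    rates = {"wifi": 400.0, "cellular": 100.0}
    shares, eta = split_offload(total_mb=2000, link_rates_mbps=rates)
    print(shares, f"~{eta:.1f}s")
```

In practice the rate estimates fluctuate with vehicle motion, which is exactly what makes the scheduling problem interesting.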
Due to the advent of billions of mobile devices equipped with high processing capacity (e.g., AI processors), the paradigm has shifted towards federated learning. Centralized machine learning tasks that consume high energy and compute resources can be distributed to mobile devices with edge network connectivity. Mobile devices can use local resources and also offload heavy analytics to edge servers. This can reduce the latency imposed by massive data transfers to central cloud servers. I am highly enthusiastic about exploring the challenges related to network and resource optimization for distributed and federated learning.
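The following is a minimal federated averaging sketch showing the core idea: clients train locally on private data and share only model parameters with the server. The toy linear model, client data generator, and hyperparameters are illustrative assumptions, not a production federated learning system.

```python
# Minimal federated averaging (FedAvg) sketch: raw data never leaves a client;
# only model weights are aggregated. All data and models here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
TRUE_W = np.array([2.0, -1.0])  # ground-truth weights for the toy task


def make_client_data(n=100):
    """Generate one client's private dataset for a linear regression task."""
    x = rng.normal(size=(n, 2))
    y = x @ TRUE_W + rng.normal(scale=0.1, size=n)
    return x, y


def local_sgd(w, x, y, lr=0.05, epochs=5):
    """A few epochs of local gradient descent on one client's private data."""
    for _ in range(epochs):
        grad = 2 * x.T @ (x @ w - y) / len(y)
        w = w - lr * grad
    return w


clients = [make_client_data() for _ in range(10)]
global_w = np.zeros(2)

for rnd in range(20):
    # Each client starts from the current global model and trains locally.
    local_weights = [local_sgd(global_w.copy(), x, y) for x, y in clients]
    # The server aggregates parameters only; no raw data leaves any device.
    global_w = np.mean(local_weights, axis=0)

print("learned:", np.round(global_w, 2), "target:", TRUE_W)
```

The network and resource optimization questions I want to study arise around this loop: which clients to select each round, how to schedule uploads over constrained edge links, and how to trade local computation against communication.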