Projects

Practical Coded Computation Mechanisms for Distributed Computing (NSF CAREER AWARD # 1942878)

A massive amount of data is generated by the emerging Internet of Things (IoT), including self-driving cars, drones, robots, smartphones, wireless sensors, smart meters, and health monitoring devices. In many time-sensitive IoT applications, these data must be processed in real time, which is extremely challenging, if not impossible, with the existing centralized cloud. For example, a self-driving car generates around 10GB of data per mile. Transmitting such massive data from end devices (such as self-driving cars) to the centralized cloud and expecting timely processing is not realistic given the limited bandwidth between end users and the cloud. A distributed computing system, where computationally intensive tasks are processed securely and in a distributed manner at the end devices, with possible help from edge servers (close to the end devices) and the cloud, is a better fit for this problem. In this context, this award investigates practical distributed computing mechanisms that securely harvest heterogeneous resources, including computing power, storage, battery, and networking resources, scattered across end devices, edge servers, and the cloud.

This project represents a unique attempt to explore the opportunities, as well as the limitations, of the new theory of coded computation, which studies how erasure and error-correcting codes can improve distributed computation through data redundancy, from a practical perspective in future distributed computing systems. In a time of active discussion about the future of distributed computing systems and edge computing, this project sets out to understand how coded computation fits into this picture. The focus of the project is on (i) characterizing the cost-benefit trade-offs of coded computation for practical edge computing systems, and developing networking algorithms and protocols to make the coded computation framework adaptive to the heterogeneous and dynamic nature of edge computing systems and resources; (ii) exploring coded computation for distributed learning at the edge to reduce communication cost and provide resilience, privacy, and security; and (iii) developing delay-sensitive coded computation by exploiting the trade-offs among latency, amount of redundancy, privacy, and security.
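To make the core idea concrete, the following is a minimal sketch of coded computation using a classic (3, 2) MDS code for matrix-vector multiplication. The matrix A is split into two halves; a third "parity" worker computes on their sum, so the results of any two of the three workers suffice to recover the full product, tolerating one straggler or failure. All names below are illustrative and not taken from the project itself.

```python
def matvec(M, x):
    """Plain matrix-vector product over Python lists."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def coded_tasks(A, x):
    """Encode: split A row-wise into A1, A2 and add one parity task A1 + A2."""
    half = len(A) // 2
    A1, A2 = A[:half], A[half:]
    A_parity = [add(r1, r2) for r1, r2 in zip(A1, A2)]
    return {0: (A1, x), 1: (A2, x), 2: (A_parity, x)}

def decode(results):
    """Recover A x from the outputs of ANY two of the three workers."""
    if 0 in results and 1 in results:
        return results[0] + results[1]
    if 0 in results and 2 in results:      # worker 1 straggled: A2 x = parity - A1 x
        return results[0] + sub(results[2], results[0])
    if 1 in results and 2 in results:      # worker 0 straggled: A1 x = parity - A2 x
        return sub(results[2], results[1]) + results[1]
    raise ValueError("need results from at least two workers")

A = [[1, 2], [3, 4], [5, 6], [7, 8]]
x = [1, 1]
tasks = coded_tasks(A, x)
# Suppose worker 1 straggles: decode from workers 0 and 2 only.
partial = {i: matvec(Mi, xi) for i, (Mi, xi) in tasks.items() if i != 1}
print(decode(partial))   # recovers A x = [3, 7, 11, 15]
```

The redundancy cost here is 50% extra computation; the trade-offs the project studies concern exactly how much such redundancy is worth paying for under realistic edge latency and failure models.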

Walk For Resiliency & Privacy: A Random Walk Framework for Learning at the Edge (NSF RINGS # 2148182)

Learning in Next Generation (NextG) wireless systems is expected to bring about a technological and societal revolution even bigger than the one data brought to early voice-centered systems. Learning will have to be performed on data predominantly originating at edge and user devices in order to support applications such as the Internet of Things (IoT), federated learning, mobile healthcare, self-driving cars, and others. A growing body of research has focused on engaging the edge in the learning process, which can be advantageous in terms of better utilization of network resources, delay reduction, resiliency against cloud unavailability and catastrophic failures, and increased security and privacy. Most proposed solutions, however, suffer from having a critical centralized component, typically in the cloud, that organizes and aggregates the nodes' computations. This rigid centralized infrastructure can inhibit the full potential of resiliency and privacy in NextG systems. By relaxing the centralized infrastructure, the proposed research aims to advance Random Walk learning algorithms as the basis of a unified framework for the joint design of distributed learning and networking, with resiliency and privacy being the overarching goals.

In Random Walk learning, the model can be thought of as a “baton” that is updated and passed from one node (cloud, edge node, end device, etc.) in the network to a smartly chosen neighbor. The baton can then be passed to the cloud on a prescribed schedule and/or adaptively as part of the random walk, thus allowing a fluid architecture in which full centralization and full decentralization constitute two corner points. The proposed work will focus on major challenges and opportunities specific to the applicability of random walk learning in NextG, namely: (i) adaptability to the heterogeneity of the data and the heterogeneity and dynamic nature of the network; (ii) resiliency and graceful degradation in the face of failures via coding-theoretic redundancy methods; (iii) model distribution across nodes and random walking snakes; and (iv) privacy of the locally owned data.
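The baton-passing idea above can be sketched in a few lines. This is an illustrative toy, not the project's algorithm: a scalar model walks over a network graph, each visited node takes one SGD step on a sample of its local data, and the baton moves to a uniformly random neighbor, so the model drifts toward the global data mean without any central aggregator.

```python
import random

def random_walk_learning(graph, local_data, steps=5000, lr=0.05, seed=0):
    """Toy random-walk SGD: minimize the squared loss to the global data mean."""
    rng = random.Random(seed)
    node = rng.choice(list(graph))       # start the walk at an arbitrary node
    model = 0.0                          # the "baton"
    for _ in range(steps):
        sample = rng.choice(local_data[node])
        model -= lr * (model - sample)   # local SGD step on squared loss
        node = rng.choice(graph[node])   # pass the baton to a random neighbor
    return model

# Ring of 4 nodes with heterogeneous local data (global mean is 2.5).
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
local_data = {0: [1.0], 1: [2.0], 2: [3.0], 3: [4.0]}
model = random_walk_learning(graph, local_data)
```

The choice of *which* neighbor receives the baton (here uniform) is exactly where the project's "smart" neighbor selection, resiliency coding, and privacy mechanisms would enter.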

AI-EDGE: AI Institute for Future Edge Networks and Distributed Intelligence (NSF # CNS-2112471)

Networking and AI are two of the most transformative information technologies, helping to better people’s lives and contributing to national economic competitiveness, national security, and national defense. The Institute will exploit the synergies between networking and AI to design the next generation of edge networks (6G and beyond) that are highly efficient, reliable, robust, and secure. A new distributed intelligence plane will be developed to ensure that these networks are self-healing, adaptive, and self-optimized. The future of AI is distributed, because AI will increasingly be implemented across a diverse set of edge devices. These intelligent and adaptive networks will in turn unleash the power of collaboration to solve long-standing distributed AI challenges, making AI more efficient, interactive, and privacy-preserving. The Institute will develop the key underlying technologies for distributed and networked intelligence to enable a host of future transformative applications such as intelligent transportation, remote healthcare, distributed robotics, and smart aerospace. It is a national priority to educate students, professionals, and practitioners in AI and networks, and to substantially grow and diversify the workforce. The Institute will develop novel, efficient, and modular ways of creating and delivering educational content and curricula at scale, and will spearhead a program that helps build a large, diverse workforce in AI and networks spanning K-12 to university students and faculty.

The focus of the AI Institute will be on edge networks, which will constitute the majority of the growth of future networks. This edge includes all devices connected through the radio, as well as data centers and cloud computing systems that are not at the core of the Internet. A critical component of the Institute is shortening the time scale of interactions between foundational and use-case research across multiple disciplines. This will result in a virtuous cycle that dramatically accelerates the path from research to implementation and technology transfer. The research tasks will be enhanced and fleshed out by exploring three wireless edge use cases in depth: (i) Ubiquitous Sensing and Networking; (ii) Human-Machine Mobility; and (iii) Programmable and Virtualized 6G Networks. These use cases are important in their own right and connect the key research thrusts and their validation to specific experimental platforms. The Institute will work with its industry and DoD partners to facilitate translation and adoption. This work has been supported by the National Science Foundation (NSF) under Grant #CNS-2112471.

Secure Distributed Coded Computations for IoT: An Information Theoretic and Network Approach (NSF SATC # 1801708)

The Internet of Things (IoT) is emerging as a new Internet paradigm connecting an exponentially increasing number of smart IoT devices and sensors. IoT applications include smart cities, transportation systems, mobile healthcare, and the smart grid, to name a few. Unlocking the full power of IoT requires analyzing and processing large amounts of data collected by the IoT devices through computationally intensive algorithms that are typically run in the cloud. This leaves the IoT network, and the applications it supports, at the complete mercy of an adversary (enemy nations, hackers, etc.) or a natural disaster (hurricane, earthquake, etc.) that can jeopardize the IoT, or completely disconnect it from its “brain” (the cloud), with potentially catastrophic consequences. This research studies Secure Coded Computations aimed at addressing the security challenges of IoT dependence on the cloud by allowing data to be processed locally by IoT devices that collaborate to perform the computation.

Our approach is based on the new theory of coded computations, which studies the design of erasure and error-correcting codes to improve the performance of distributed algorithms through “smart” data redundancy. This project represents the first attempt to create a unified framework to study schemes for secure coded computations that, in addition to providing reliable and secure computations on coded data, cater to the specific challenges of IoT. The focus of the proposed research is on (i) devising secure codes and algorithms with low computational complexity that are amenable to implementation on IoT devices, which are typically characterized by low computation power, bandwidth, battery, and storage; our approach is based on information-theoretic security, which offers an attractive alternative to high-complexity homomorphic encryption methods; (ii) developing networking algorithms and protocols to make our secure coded computation framework adaptive to the heterogeneous and dynamic nature of IoT devices; and (iii) validating the feasibility of the proposed secure codes and algorithms by building a mobile healthcare monitoring framework using IoT devices.
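A standard building block behind information-theoretic security of this kind is Shamir secret sharing over a finite field. The sketch below is illustrative only (it is not the project's protocol): a private input x is split into degree-t polynomial shares, each worker computes a public linear function on its share alone (learning nothing about x, since any t shares are uniformly random), and interpolating any t + 1 worker outputs at zero recovers the result.

```python
import random

P = 2**31 - 1   # a Mersenne prime; all arithmetic is over GF(P)

def share(secret, n_workers, t, rng):
    """Degree-t Shamir shares of `secret`, one per worker 1..n_workers."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
    def poly(z):
        return sum(c * pow(z, k, P) for k, c in enumerate(coeffs)) % P
    return {i: poly(i) for i in range(1, n_workers + 1)}

def interpolate_at_zero(points):
    """Lagrange interpolation at z = 0 over GF(P); inverses via Fermat."""
    total = 0
    for i, yi in points.items():
        num, den = 1, 1
        for j in points:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

rng = random.Random(0)
x, c = 123456, 7                  # private input, public coefficient
shares = share(x, n_workers=3, t=1, rng=rng)
outputs = {i: (c * s) % P for i, s in shares.items()}   # worker-side computation
# Any t + 1 = 2 worker outputs suffice to decode c * x:
result = interpolate_at_zero({1: outputs[1], 3: outputs[3]})
```

No single worker can infer x from its share, yet the decoded result equals c·x mod P; the project's challenge is achieving this kind of guarantee at a complexity IoT devices can afford.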

Machine Learning Algorithms and Optimizations for Resource-Constrained Tactical Edge

Machine learning has become a prevalent tool and is of critical importance for future combat systems. In fact, Deep Neural Networks (DNNs) are being adopted by the Army as part of its Artificial Intelligence (AI) initiative in mission applications to acquire multi-domain dominance in contested and congested environments. For example, tactical autonomous ground vehicles may need to compute global maps and perform navigation and obstacle detection and avoidance via DNN algorithms. Such applications must compute in mission time with high accuracy, which is challenging due to the complex nature of DNNs and the dynamic nature of edge computing systems. In this context, our project develops machine learning algorithms and optimizations for the resource-constrained tactical edge to improve the speed and accuracy of AI-based edge solutions. This work is supported in part by the Army Research Lab (ARL) under Grant W911NF-21-2-0272.
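One standard family of optimizations for running DNNs on resource-constrained hardware, shown here purely as an illustration and not as the project's specific method, is post-training quantization: storing weights as 8-bit integers plus a scale factor, cutting memory traffic by roughly 4x at the cost of a small, bounded rounding error.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: w ≈ q * scale with q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [qi * scale for qi in q]

w = [0.82, -1.27, 0.05, 0.5]      # toy weight vector (illustrative values)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Per-weight rounding error is bounded by scale / 2.
```

In mission-time settings, the project's broader question is how far such accuracy-for-speed trades can be pushed while keeping DNN outputs reliable on dynamic edge platforms.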