iEdge: Intelligent Edge Computing for Data-Rich Applications

Co-located with IFIP Performance 2023

Northwestern University, Evanston, IL

November 17, 2023

PROGRAM



Title: Secure and Private Cache-aided Distributed Function Retrieval 

 


Title: Federated Multi-Objective Learning

 

Abstract: In the first part of this talk, I will give a quick introduction to the NSF AI-EDGE institute led by The Ohio State University. In the second part, I will dive into a new federated learning paradigm initiated by my research group, called federated multi-objective learning. In recent years, multi-objective optimization (MOO) has emerged as a foundational problem underpinning many multi-agent multi-task learning applications. However, existing algorithms in the MOO literature remain limited to centralized learning settings, which do not meet the distributed-operation and data-privacy needs of such multi-agent multi-task learning applications. This motivates us to propose a new federated multi-objective learning (FMOL) framework, in which multiple clients distributively and collaboratively solve an MOO problem while keeping their training data private. Notably, our FMOL framework allows a different set of objective functions across different clients to support a wide range of applications, advancing and generalizing the MOO formulation to the federated learning paradigm for the first time.

For this FMOL framework, we propose two new federated multi-objective optimization (FMOO) algorithms called federated multi-gradient descent averaging (FMGDA) and federated stochastic multi-gradient descent averaging (FSMGDA). Both algorithms allow local updates to significantly reduce communication costs, while achieving the same convergence rates as their algorithmic counterparts in single-objective federated learning. Our extensive experiments also corroborate the efficacy of the proposed FMOO algorithms.
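To make the local-update-then-average structure concrete, here is a minimal two-objective sketch in the spirit of FMGDA. It is an illustration only, not the paper's algorithm: the quadratic client objectives, the function names, and the closed-form min-norm (MGDA-style) common-descent direction for two gradients are all assumptions for this toy example.

```python
import numpy as np

def mgda_direction(g1, g2):
    """Min-norm convex combination lam*g1 + (1-lam)*g2 of two objective
    gradients (closed form for the two-objective case)."""
    diff = g1 - g2
    denom = float(diff @ diff)
    lam = 0.5 if denom == 0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return lam * g1 + (1.0 - lam) * g2

def fmgda(clients, x0, rounds=50, local_steps=5, lr=0.1):
    """Toy FMGDA-style loop: each client takes several local steps along its
    own common-descent direction, then the server averages the local models
    (FedAvg-style). `clients` is a list of [grad_f1, grad_f2] pairs."""
    x = np.array(x0, dtype=float)
    for _ in range(rounds):
        local_models = []
        for grads in clients:
            xi = x.copy()
            for _ in range(local_steps):
                d = mgda_direction(grads[0](xi), grads[1](xi))
                xi -= lr * d          # local update; no communication here
            local_models.append(xi)
        x = np.mean(local_models, axis=0)  # one communication round
    return x
```

The local steps are where the communication savings come from: clients only exchange models once per round rather than once per gradient step.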

 

Bio: Jia (Kevin) Liu is an Assistant Professor in the Dept. of Electrical and Computer Engineering at The Ohio State University (OSU) and an Amazon Visiting Academic (AVA). From Aug. 2017 to Aug. 2020, he was an Assistant Professor in the Dept. of Computer Science at Iowa State University (ISU). He is currently the Managing Director of the NSF AI Institute for Future Edge Networks and Distributed Intelligence (AI-EDGE) at OSU. He is also a faculty investigator of the NSF TRIPODS D4 (Dependable Data-Driven Discovery) Institute at ISU, the NSF ARA Wireless Living Lab PAWR Platform between ISU and OSU, and the Institute of Cybersecurity and Digital Trust (ICDT) at OSU. He received his Ph.D. degree from the Dept. of Electrical and Computer Engineering at Virginia Tech in 2010. His research areas include theoretical machine learning, stochastic network optimization and control, and performance analysis for data analytics infrastructure and cyber-physical systems. Dr. Liu is a senior member of IEEE and a member of ACM. He has received numerous awards at top venues, including the IEEE INFOCOM'19 Best Paper Award, IEEE INFOCOM'16 Best Paper Award, IEEE INFOCOM'13 Best Paper Runner-up Award, IEEE INFOCOM'11 Best Paper Runner-up Award, and IEEE ICC'08 Best Paper Award. He has also had multiple papers selected for long/spotlight presentations at top machine learning conferences, including ICML, NeurIPS, and ICLR. He received an NSF CAREER Award and a Google Faculty Research Award, both in 2020, as well as the LAS Award for Early Achievement in Research at Iowa State University in 2020 and the Bell Labs President Gold Award. Dr. Liu is an Associate Editor for IEEE Transactions on Cognitive Communications and Networking. He has served as a TPC member for numerous top conferences, including ICML, NeurIPS, ICLR, ACM SIGMETRICS, IEEE INFOCOM, and ACM MobiHoc. His research is supported by NSF, AFOSR, AFRL, ONR, Google, and Cisco.



Title: Distributed Optimization with Imperfect Information Sharing


Abstract: We investigate distributed optimization problems in which a group of agents collaborates to solve a separable optimization problem: minimizing the sum of many local functions. The agents are interconnected through a dynamic network, and each agent can only communicate with its immediate neighbors at any given moment. Existing approaches to these problems predominantly rely on single time-scale algorithms, in which each agent executes gradient descent with a diminishing or constant step-size while tracking the average estimate of the agents in the network. However, exchanging the precise information needed to compute these average estimates can impose a significant communication burden. A practical assumption, therefore, is that agents only receive approximations of their neighbors' information. To tackle this challenge, we introduce and analyze a two time-scale decentralized algorithm that encompasses a wide range of lossy information-sharing methods, including noisy, quantized, and compressed data sharing over time-varying networks. In our approach, one time-scale mitigates the impact of imperfect incoming information from neighboring agents, while the other processes the gradients of the local cost functions.
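The two time-scale idea can be sketched as follows. This is a simplified illustration under stated assumptions, not the speakers' algorithm: agents share dithered-quantized states (standing in for any lossy channel), a slowly decaying step-size handles consensus with the imperfect neighbor information, and a faster-decaying step-size handles the local gradients. All function names and constants are illustrative.

```python
import numpy as np

_rng = np.random.default_rng(0)

def quantize(x, step=0.05):
    """Dithered (unbiased) quantization: models lossy sharing where agents
    only see coarse approximations of their neighbors' states."""
    return np.floor(x / step + _rng.random(x.shape)) * step

def two_timescale(grads, W, x0, T=2000):
    """grads: list of local gradient functions, one per agent.
    W: doubly stochastic mixing matrix encoding the neighbor graph.
    beta_t (consensus scale) decays slower than alpha_t (gradient scale)."""
    x = np.array(x0, dtype=float)
    for t in range(1, T + 1):
        alpha, beta = 1.0 / t, 1.0 / np.sqrt(t)
        mix = W @ quantize(x)                 # only approximate neighbor states
        g = np.array([gi(xi) for gi, xi in zip(grads, x)])
        x = x - beta * (x - mix) - alpha * g  # consensus step + gradient step
    return x
```

Because the consensus step-size dominates the gradient step-size, agreement among agents is enforced faster than the optimization drift, which is what lets the scheme absorb the quantization error.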


Joint work with Hadi Reisizadeh and Behrouz Touri


Bio: Soheil Mohajer received the B.Sc. degree in electrical engineering from the Sharif University of Technology, Tehran, Iran, in 2004, and the M.Sc. and Ph.D. degrees in communication systems from the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, in 2005 and 2010, respectively. He was a Post-Doctoral Researcher with Princeton University from 2010 to 2011, and with the University of California, Berkeley, from 2011 to 2013. He is currently a McKnight Land-Grant Associate Professor with the Department of Electrical and Computer Engineering, University of Minnesota, Twin Cities. He received the NSF CAREER Award in 2018. His research interests include information theory, distributed storage systems, distributed optimization, and statistical machine learning.



Title: Towards Video Analytics at Scale


Abstract: Today, videos captured and analyzed by computer vision models grow faster in volume than videos watched by human viewers. Unfortunately, traditional video delivery and processing systems have been geared toward human perception, whereas computer vision models (deep neural nets, or DNNs) "perceive" video data differently: they value high inference accuracy and low inference delay. This discrepancy in performance objectives has far-reaching consequences. In this talk, I will introduce new abstractions that my collaborators and I have developed for video analytics systems to adapt to the dynamic characteristics of input video streams and the idiosyncrasies of individual DNN models. The key insight is that while eliciting real-time feedback from human users may be hard, DNN models can provide rich and instantaneous feedback that can guide efficient system adaptation. For instance, feeding an image to a DNN object detector returns not only detected objects but also free feedback, including the confidence scores of those detections and intermediate features.
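As a toy illustration of confidence scores driving adaptation (not the speaker's system; the function name, resolution ladder, and thresholds are all assumptions), a controller might step the input resolution down when the detector is confident and step it up when confidence drops:

```python
def choose_resolution(confidences, current,
                      levels=(360, 540, 720, 1080), lo=0.5, hi=0.8):
    """Confidence-driven knob tuning: low detector confidence suggests the
    input was degraded too far; high confidence suggests headroom to save
    bandwidth and compute. `confidences` are the detector's per-object scores."""
    i = levels.index(current)
    avg = sum(confidences) / len(confidences) if confidences else 0.0
    if avg < lo and i + 1 < len(levels):
        return levels[i + 1]   # step up to recover accuracy
    if avg > hi and i > 0:
        return levels[i - 1]   # step down to cut cost
    return current
```

The point of the sketch is that the feedback is free: the scores come out of the inference the system was running anyway, so no extra probing of the DNN is required.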


Bio: Junchen Jiang is an Assistant Professor of Computer Science at the University of Chicago. He received his PhD degree from CMU in 2017 and his bachelor's degree from Tsinghua in 2011. His research interests are networked systems and their intersections with machine learning. He is a recipient of a Google Faculty Research Award, NSF CAREER Award, and CMU Computer Science Doctoral Dissertation Award.



Title: Model-Distributed Random Walk Learning at Edge Networks 


Title: DIGEST: Fast and Communication-Efficient Decentralized Learning with Local Updates



Title: Towards an Intelligent Edge Network Infrastructure


Abstract: Edge computing is a key enabler for real-time IoT analytics, as it significantly reduces analytics latency by moving computation close to the IoT devices. However, with emerging IoT applications that can generate large volumes of data per unit time, analyzing all that data at the edge and generating appropriate responses in real time remains a challenging problem. We have two representative real-world IoT applications running on our testbed: real-time analytics from HD cameras, and sensors for controlling operations on a manufacturing pipeline. Both applications have three key characteristics: they generate large amounts of streaming data; they are closed-loop systems, i.e., analysis of the generated data leads to actions; and the desired latency of the loop is very small, on the order of a few milliseconds. Analyzing large volumes of data in real time requires substantial compute resources at the edge, ranging from high-speed CPUs and GPUs to custom processing units such as FPGAs and TPUs. This not only increases the cost of deployment, but also comes with high power (and cooling) overheads. Further, load-balancing the computation across the heterogeneous set of compute resources at the edge with low latency is also challenging, especially given that resource availability is expected to change dynamically.


In this talk, I will describe our ongoing efforts to augment the existing edge computing pipeline by putting intelligence into the network to overcome the aforementioned challenges in real-time IoT analytics. I will argue that by carefully offloading certain key computations from the edge servers to the network routers, we can deliver a cost- and power-efficient edge computing infrastructure that performs real-time IoT analytics over large volumes of data at ultra-low latencies.


Bio: Vishal Shrivastav is an Assistant Professor in the School of Electrical and Computer Engineering at Purdue University. His research interests are broadly in computer networks and systems, and more specifically, in datacenter networks and programmable networks. Vishal holds a Ph.D. and an M.S. from Cornell University and a B.Tech. from Indian Institute of Technology, Kharagpur. His work has been recognized with a National Science Foundation CAREER Award, a Google Research Scholar Award, and a Cisco Research Award.


Title: Incentives in Federated Learning and Unlearning 



Title: WiSwarm at the Edge: Wireless Networking for Collaborative Teams of UAVs


Abstract: Emerging applications, such as autonomous vehicles and smart factories, increasingly rely on sharing time-sensitive information for monitoring and control. In such application domains, it is essential to keep information fresh, as outdated information loses its value and can lead to system failures and safety risks. The Age-of-Information (AoI) captures the freshness of the information from the perspective of the destination. In this talk, we consider a wireless network with an edge-node receiving time-sensitive information from a number of sensing-nodes through unreliable channels. We formulate a discrete-time decision problem to find a transmission scheduling policy that optimizes the AoI in the network. First, we derive a lower bound on the achievable AoI performance. Then, we develop three low-complexity scheduling policies with performance guarantees: a randomized policy, a Max-Weight policy and a Whittle’s Index policy. Leveraging our theoretical results, we propose WiSwarm: an AoI-based application layer middleware that enables the customization of WiFi networks to the needs of time-sensitive applications. To demonstrate the benefits of WiSwarm in real operating scenarios, we implement a mobility tracking application using a swarm of UAVs communicating with a central controller (i.e., the edge-node) via WiFi.
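The scheduling setup above can be illustrated with a short simulation. This is a simplified stand-in, not the policies analyzed in the talk: the "greedy" policy below schedules the source with the largest expected one-step age reduction p_i * h_i (an index policy in the same spirit as Max-Weight and Whittle's Index), and all names and parameters are assumptions.

```python
import numpy as np

def simulate(policy, p, T=5000, seed=0):
    """Discrete-time AoI simulation: one sensing-node scheduled per slot;
    a transmission from node i succeeds with probability p[i]."""
    rng = np.random.default_rng(seed)
    h = np.ones(len(p))              # current age at the edge-node, per source
    total = 0.0
    for _ in range(T):
        i = policy(h, p, rng)
        success = rng.random() < p[i]
        h += 1                       # every source ages by one slot
        if success:
            h[i] = 1                 # fresh sample delivered from source i
        total += h.sum()
    return total / (T * len(p))      # time-average AoI per source

def greedy(h, p, rng):
    # Index policy: largest expected age drop p_i * h_i goes first.
    return int(np.argmax(p * h))

def uniform(h, p, rng):
    # Baseline: schedule a source uniformly at random.
    return int(rng.integers(len(p)))
```

Running both policies on the same channel reliabilities shows the age-aware index policy achieving a lower time-average AoI than blind random scheduling, which is the qualitative gap the lower bound and the three policies in the talk quantify.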


Bio: Igor Kadota is an Assistant Professor of Electrical and Computer Engineering at Northwestern University. Previously, he was a Postdoctoral Research Scientist at Columbia University. He received his Ph.D. from MIT LIDS and his B.Sc. from the Aeronautics Institute of Technology (ITA) in Brazil. His research is on the modeling, analysis, optimization, and implementation of next-generation communication networks, with an emphasis on advanced wireless systems and time-sensitive applications. Igor has received several research, mentoring, teaching, and service awards, including the 2018 Best Paper Award at IEEE INFOCOM, the 2019 Best Paper Finalist at ACM MobiHoc, and the 2020 MIT School of Engineering Graduate Student Extraordinary Teaching and Mentoring Award, and he was selected as a 2022 LATinE Trailblazer in Engineering Fellow by Purdue's College of Engineering. For additional information, please see: http://www.igorkadota.com