Deniz Gündüz - Imperial College London
Title: Machine Learning in the Air: Distributed Training and Inference at the Wireless Edge
Modern machine learning (ML) techniques have achieved great success in a wide variety of applications, and continue to break new ground. Bringing this success to mobile devices has tremendous potential for new services and businesses, but it also poses significant technical and research challenges, due not only to the computational limitations of mobile devices, but also to the bandwidth- and power-limited noisy links that connect these devices to the network edge, and to the privacy and security risks inherent in distributed processing. In this talk, I will show how communication- and information-theoretic ideas can be employed to deploy ML algorithms efficiently at the network edge, promoting a novel joint communication and learning framework.
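One concrete instance of such a joint communication and learning design is analog over-the-air aggregation: when devices transmit their gradients simultaneously over a shared multiple-access channel, the channel itself computes their sum, so communication and aggregation become a single step. The sketch below is a toy numpy illustration of that idea under assumed parameters (device count, noise level); it is not drawn from the talk itself.

```python
import numpy as np

# Toy sketch of analog over-the-air gradient aggregation (illustrative values):
# each device transmits its local gradient at the same time over a shared
# Gaussian multiple-access channel, so the edge server receives the SUM of
# all gradients plus channel noise -- the channel performs the aggregation.

rng = np.random.default_rng(0)

num_devices, dim = 10, 5
local_grads = [rng.normal(size=dim) for _ in range(num_devices)]

# Superposition over the channel: signals add, plus receiver noise.
noise_std = 0.01
received = sum(local_grads) + rng.normal(scale=noise_std, size=dim)

# The server recovers a noisy estimate of the average gradient by scaling.
avg_estimate = received / num_devices
true_avg = np.mean(local_grads, axis=0)

print(np.allclose(avg_estimate, true_avg, atol=0.01))  # prints True
```

The design choice here is that bandwidth cost no longer grows with the number of devices: all devices occupy the same channel resource, at the price of a noisy (rather than exact) aggregate.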
H. Vincent Poor - Princeton University
Title: Learning at the Wireless Edge
Wireless networks can be used as platforms for machine learning, taking advantage of the fact that data is often collected at the edges of the network, and also mitigating the latency and privacy concerns that backhauling data to the cloud would entail. This talk will present an overview of some results on distributed learning at the edges of wireless networks, in which machine learning algorithms interact with the physical limitations of the wireless medium. A particular focus of the talk will be on federated learning in which end-user devices interact with edge devices such as access points to implement joint learning algorithms, and for which spectrum scheduling is a key issue. Other aspects of distributed learning will also be discussed, as time permits.
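The federated learning pattern described above, in which end-user devices compute local updates and an edge device aggregates them without ever collecting raw data, can be sketched as a minimal federated-averaging loop. The code below is an illustrative toy on synthetic linear-regression data (all dimensions, rates, and round counts are assumptions), not an implementation of any specific system from the talk.

```python
import numpy as np

# Minimal federated-averaging sketch (illustrative): each device runs local
# gradient descent on its own private least-squares data; the edge server
# averages the resulting model WEIGHTS, never the raw data.

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])  # ground-truth model the devices try to learn

def local_update(w, n=50, lr=0.1, steps=20):
    # One device: a few gradient steps on its own local dataset.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(5):                                        # communication rounds
    local_models = [local_update(w_global) for _ in range(4)]
    w_global = np.mean(local_models, axis=0)              # server-side averaging

print(np.round(w_global, 1))  # close to the true weights [2.0, -1.0]
```

In a wireless deployment, each averaging round costs spectrum, which is why the scheduling of device transmissions mentioned in the abstract becomes a central design issue.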
Walid Saad - Virginia Tech
Title: Two Novel Perspectives on Distributed Machine Learning
Due to major communication, privacy, and scalability challenges stemming from the emergence of large-scale Internet of Things services, machine learning is witnessing a major departure from traditional centralized cloud architectures toward a distributed machine learning (ML) paradigm in which data is dispersed and processed across multiple edge devices. A prime example of this emerging distributed ML paradigm is Google's renowned federated learning framework. Despite the tremendous recent interest in distributed ML, prior work in the area remains largely focused on the development of distributed ML algorithms for inference and classification tasks. In contrast, in this talk, we focus on two novel distributed ML perspectives. We first investigate how, when deployed over real-world wireless networks, the performance of distributed ML (particularly federated learning) will be affected by inherent network properties such as bandwidth limitations and delay. We then make the case for the necessity of a novel, joint learning and communication design perspective when deploying federated learning over practical wireless networks such as cellular systems. Next, we turn our attention to the design of new distributed ML algorithms that can be used for generative tasks. In this context, we introduce the novel framework of brainstorming generative adversarial networks (BGANs), which constitutes one of the first implementations of distributed, multi-agent GAN models. We show how BGAN allows multiple agents to gain information from one another without sharing their real datasets, but rather by "brainstorming" their generated data samples. We then demonstrate the higher accuracy and scalability of BGAN compared to the state of the art. We conclude this talk with an overview of the future outlook of the exciting area of distributed ML.
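The core "brainstorming" idea, exchanging generated samples instead of real datasets, can be illustrated with a deliberately simplified toy that uses Gaussian generators in place of actual GANs. This is a sketch of the data-sharing pattern only, under assumed toy parameters; it is not the BGAN architecture from the talk.

```python
import numpy as np

# Toy illustration of brainstorming with generated samples (a simplification,
# NOT the BGAN architecture): each agent fits a trivial generator (a Gaussian)
# to its local data, shares only GENERATED samples with its peers, and then
# refines its estimate from its own data plus the peers' synthetic samples.

rng = np.random.default_rng(2)
true_mean = 5.0

# Each agent observes a small, noisy slice of the underlying distribution.
local_data = [rng.normal(loc=true_mean, scale=1.0, size=20) for _ in range(3)]

# Step 1: each agent fits a simple generator locally.
generators = [(np.mean(d), np.std(d)) for d in local_data]

# Step 2: agents exchange generated samples -- never their real datasets.
shared = [rng.normal(loc=m, scale=s, size=20) for (m, s) in generators]

# Step 3: each agent refines its model from its own data + peers' samples.
refined = [np.mean(np.concatenate([local_data[i]] +
                                  [shared[j] for j in range(3) if j != i]))
           for i in range(3)]

print(np.round(refined, 1))  # all three estimates close to the true mean 5.0
```

Even in this toy, each agent ends up with roughly three times as many effective samples as it observed locally, without any raw data leaving a device, which is the privacy motivation the abstract highlights.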
Presenters: Deniz Gündüz, Walid Saad, Petar Popovski, Hamed Farhadi
Moderator: Carlo Fischione
ST1: Privacy and security in distributed learning
Presenters: Jaya Prakash Champati, Raul Rondon, Richard Mugisha, Sina Molavipour
Reference List:
ST2: Error compensation and variance reduction
Presenters: Filippo Vannella, Henrik Hellström, Ming Zeng, Morteza Esmaeili Tavana
Reference List:
ST3: Model compression in MLoN
Presenters: Lihao Guo, Wenjie Yin, Yiping Xie, Zehang Weng
Reference List:
ST4: Generalization error
Presenters: Martin Hellkvist
Reference List:
ST5: Handling non-IID datasets
Presenters: Ali Bemani, Oscar Bautista Gonzalez
Reference List: