Research & Teaching
Research
Communication-efficient Federated Learning
In federated learning, a large number of devices, like smartphones or IoT devices, collaboratively train a machine learning model while keeping the training data localized. However, this process involves frequent communication of model updates between the devices and a central server, which can be bandwidth-intensive and slow, especially with large models or limited network capacity. To enhance efficiency, communication-efficient federated learning implements strategies such as compressing model updates, reducing the frequency of updates, or selectively transmitting only significant updates. These strategies help in minimizing the amount of data that needs to be transmitted, thereby reducing the communication load on the network. This not only speeds up the learning process but also makes it more feasible in environments with limited bandwidth or where devices have restricted data plans. The end goal is to maintain or even improve the accuracy and effectiveness of the federated learning model while significantly cutting down on communication costs and overhead.
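One of the strategies mentioned above, selectively transmitting only significant updates, can be illustrated with top-k sparsification. The sketch below is a generic illustration under simplified assumptions, not the specific method of the publications listed here: each device keeps only the k largest-magnitude entries of its model update, so only k (index, value) pairs need to be sent instead of the full vector.

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a model update;
    zero out the rest before transmission (illustrative sketch)."""
    idx = np.argsort(np.abs(update))[-k:]   # indices of the k largest-magnitude entries
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    return sparse

# Hypothetical 6-dimensional update compressed to its 2 largest entries.
update = np.array([0.1, -2.0, 0.3, 1.5, -0.05, 0.2])
compressed = top_k_sparsify(update, k=2)
# Only -2.0 and 1.5 survive; the device sends 2 (index, value) pairs
# instead of 6 floats, cutting the per-round communication load.
```

In practice, such schemes are usually paired with error feedback (accumulating the discarded residual locally) so that compression does not hurt convergence.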
Selected related publications:
Anis Elgabli, Chaouki Ben Issaid, Amrit Singh Bedi, Ketan Rajawat, Mehdi Bennis, and Vaneet Aggarwal. "FedNew: A communication-efficient and privacy-preserving Newton-type method for federated learning." In International Conference on Machine Learning (ICML), pp. 5861-5877. PMLR, 2022. Link
Chaouki Ben Issaid, Anis Elgabli, Jihong Park, Mehdi Bennis, and Mérouane Debbah. "Communication efficient decentralized learning over bipartite graphs." IEEE Transactions on Wireless Communications 21, no. 6, pp. 4150-4167, 2021. Link
Over the Air Federated Learning
Over-the-air (OTA) Federated Learning is an innovative approach that blends the concepts of federated learning with wireless communication technologies. Traditional federated learning typically relies on conventional data transmission methods, which can be bandwidth-intensive and slow, especially when dealing with large-scale networks. OTA Federated Learning addresses this challenge by leveraging the wireless communication medium itself as a computational tool. In this setup, the simultaneous wireless transmission of data from multiple devices is exploited to perform a form of "air computation." This method cleverly utilizes the superposition property of wireless channels, where signals transmitted simultaneously by different devices add up in the air. By designing the transmitted signals appropriately, the desired aggregation operation required in federated learning (like averaging the model updates) can be directly carried out by the nature of the wireless channel during transmission. This not only significantly reduces the bandwidth requirement but also accelerates the learning process, as the aggregation happens naturally during signal transmission.
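The superposition principle described above can be sketched numerically. The toy simulation below assumes an ideal noiseless channel with perfect power control (real systems must handle fading and noise): signals transmitted simultaneously add up, so the server obtains the sum of the local updates in a single channel use and recovers the average by a scalar division, without decoding each device individually.

```python
import numpy as np

rng = np.random.default_rng(0)

# Local model updates from 4 hypothetical devices (same model dimension).
updates = [rng.standard_normal(5) for _ in range(4)]

# Ideal superposition: simultaneously transmitted signals add up in the air,
# so the server receives the coordinate-wise sum in one channel use.
received = np.sum(updates, axis=0)

# The server recovers the average update directly from the superposed signal.
ota_average = received / len(updates)

# Same result as conventional per-device upload followed by averaging,
# but without spending one transmission slot per device.
conventional = np.mean(updates, axis=0)
assert np.allclose(ota_average, conventional)
```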
Selected related publications:
Anis Elgabli, Jihong Park, Chaouki Ben Issaid, and Mehdi Bennis. "Harnessing wireless channels for scalable and privacy-preserving federated learning." IEEE Transactions on Communications 69, no. 8, pp. 5194-5208, 2021. Link
Mounssif Krouka, Anis Elgabli, Chaouki Ben Issaid, and Mehdi Bennis. "Communication-efficient federated learning: A second order Newton-type method with analog over-the-air aggregation." IEEE Transactions on Green Communications and Networking 6, no. 3, pp. 1862-1874, 2022. Link
Distributionally Robust Federated Learning
Federated learning, by its nature, involves training a model across multiple devices, with or without coordination by a central parameter server, while each device keeps its local data samples and never exchanges them. This approach safeguards privacy but introduces challenges due to data heterogeneity, i.e., differing data distributions across the network. Distributionally robust federated learning enhances the robustness of the learning process against such heterogeneity: it optimizes the model not just for average-case scenarios but also for worst-case data distributions, ensuring that performance remains consistent and reliable across the different devices.
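The shift from average-case to worst-case optimization can be made concrete with a small numeric sketch. The soft-max reweighting and temperature below are illustrative choices (one common KL-regularized surrogate for the worst-case distribution), not the exact formulation of the publications listed here: devices with higher loss receive larger weight, so the robust objective is dominated by the worst-performing device.

```python
import numpy as np

# Hypothetical per-device losses of one model under heterogeneous data.
losses = np.array([0.2, 0.3, 1.5])    # the third device struggles

# Average-case objective treats all devices equally.
avg = losses.mean()

# Distributionally robust surrogate: soft-max weights emphasize the
# worst-performing device (smaller tau -> closer to the pure worst case).
tau = 0.5
weights = np.exp(losses / tau)
weights /= weights.sum()              # weights lie on the probability simplex
dr = weights @ losses                 # adversarially reweighted objective

# The robust objective exceeds the plain average, pushing training
# to improve performance where it is currently worst.
assert dr > avg
```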
Selected related publications:
Chaouki Ben Issaid, Anis Elgabli, and Mehdi Bennis. "DR-DSGD: A Distributionally Robust Decentralized Learning Algorithm over Graphs." Transactions on Machine Learning Research, 2022. Link
Chaouki Ben Issaid, Sumudu Samarakoon, Mehdi Bennis, and H. Vincent Poor. "Federated distributionally robust optimization for phase configuration of RISs." In 2021 IEEE Global Communications Conference (GLOBECOM), pp. 1-6. IEEE, 2021. Link
User-friendly Web Application for FL
Federated Learning (FL) is an approach to machine learning that avoids centralizing training data in a single entity, whether for reasons of privacy, confidentiality, or data volume, or to avoid a single point of failure. However, solving FL problems raises issues beyond those of centralized machine learning, including setting up the communication infrastructure between workers, coordinating the learning process, aggregating results from the different workers, and handling data heterogeneity. We aim to develop a user-friendly Web-based application that tackles these issues and helps the community run FL experiments more smoothly.
Projects
ADROIT6G - Distributed Artificial Intelligence-Driven Open & Programmable Architecture for 6G Networks
Role: PM (University of Oulu)
Duration: 01/2023-12/2025
Objective: To provide revolutionary research foundations for low TRL technology advancements in preparation for the upcoming 6G network architectures.
ANDROMEDA - Advanced and novel hydrology models based on enhanced data collection, analysis, and prediction
Role: PM (University of Oulu)
Duration: 05/2023-05/2024
Objective: To apply ICT and ML techniques to improve the accuracy of hydrological models, optimize water management, and predict extreme events.
Supervision
PhD students
2023-Present: Abdulmomen Ghalkha, University of Oulu, Finland.
2021-Present: Mounssif Krouka, University of Oulu, Finland.
Master's students
2022-2023: Abdulmomen Ghalkha, University of Oulu, Finland (Topic: Federated Learning with Inexact Newton Direction over Graphs).
Research Assistants/Interns
07/2023-Present: Ahmed Elbakary, University of Oulu, Finland (Topic: Communication-efficient Second-Order Federated Learning).
06/2023-Present: Mohamed Badi, University of Oulu, Finland (Topic: Balancing Energy Efficiency and Distributional Robustness in Over-the-Air Federated Learning).
11/2022-11/2023: Chamith Mawela Mudiyanselage, University of Oulu, Finland (Topic: User-Friendly Web Application for Federated Learning).
Teaching
Strategic Learning in Wireless Networks: Theory and Applications
Role: Co-instructor
Semester: Fall 2022
The course focuses on strategic learning in wireless networks, its underlying principles, and its applications to wireless communication. It provides a detailed description of how distributed agents learn their own utilities over time and update their transmission strategies under uncertainty and dynamics.
Probability and Random Processes
Role: Teaching Assistant
Semester: Fall 2017, Spring 2018, Fall 2018
Topics include moment generating and characteristic functions, generating functions and Laplace transform, inequalities, bounds, convergence and limit theorems, random processes, linear time-invariant filtering of stationary random processes, Gaussian processes, Poisson and renewal processes, discrete-time Markov chains, continuous-time Markov chains, random walks, Wiener processes (Brownian motion), minimum mean squared error estimation, and smoothing and prediction of random processes.
Engineering Mathematics
Role: Teaching Assistant
Semester: Fall 2016
Topics include line integral in the complex plane, two integration methods, Cauchy's integral theorem, separable ODEs, exact ODEs, non-exact ODEs with integrating factors, first-order linear ODEs, and second-order linear ODEs with constant coefficients.
Ordinary Differential Equations and Vector Analysis
Role: Teaching Assistant
Semester: Fall 2015
Topics include single-variable calculus, 3D functions, partial derivatives and the gradient, line integrals, double integrals and Green's theorem, surface and volume integrals (divergence and Stokes' theorems), complex numbers and functions of a complex variable, and first- and second-order ordinary differential equations.
Numerical Optimization
Role: Teaching Assistant
Semester: Spring 2015
Topics include optimization formulation, linear programming, simplex, interior point and conjugate gradient methods, gradient descent and Newton descent methods, quadratic programming, least squares minimization, and constrained/unconstrained optimization.