Research & Teaching

Research

Communication-efficient Federated Learning

In federated learning, a large number of devices, such as smartphones or IoT devices, collaboratively train a machine learning model while keeping the training data localized. This process involves frequent communication of model updates between the devices and a central server, which can be bandwidth-intensive and slow, especially with large models or limited network capacity. Communication-efficient federated learning addresses this with strategies such as compressing model updates, reducing the frequency of updates, or selectively transmitting only significant updates. These strategies minimize the amount of data that must be transmitted, reducing the communication load on the network. This not only speeds up the learning process but also makes it feasible in environments with limited bandwidth or restricted device data plans. The goal is to maintain, or even improve, the accuracy and effectiveness of the federated model while significantly cutting communication costs and overhead.
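
As a concrete illustration of the compression idea, here is a minimal sketch of top-k sparsification, where a device transmits only the k largest-magnitude entries of its model update; the function names and the 1% budget are illustrative and do not refer to any particular system.

    import numpy as np

    def topk_sparsify(update, k):
        """Keep only the k largest-magnitude entries of a model update."""
        flat = update.ravel()
        idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the top-k entries
        return idx, flat[idx]                         # transmit (indices, values) only

    def topk_reconstruct(idx, values, shape):
        """Server-side: rebuild a dense update from the sparse message."""
        flat = np.zeros(int(np.prod(shape)))
        flat[idx] = values
        return flat.reshape(shape)

    # Example: transmit only 1% of a 10,000-parameter update.
    rng = np.random.default_rng(0)
    update = rng.normal(size=(100, 100))
    idx, values = topk_sparsify(update, k=100)
    recovered = topk_reconstruct(idx, values, update.shape)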

Selected related publications:

Over-the-Air Federated Learning

Over-the-air (OTA) federated learning blends federated learning with wireless communication technologies. Traditional federated learning relies on conventional data transmission, which can be bandwidth-intensive and slow, especially in large-scale networks. OTA federated learning addresses this challenge by using the wireless communication medium itself as a computational tool: the simultaneous wireless transmission of data from multiple devices is exploited to perform "over-the-air computation." This method relies on the superposition property of wireless channels, where signals transmitted simultaneously by different devices add up in the air. By designing the transmitted signals appropriately, the aggregation operation required in federated learning (such as averaging the model updates) is carried out directly by the wireless channel during transmission. This significantly reduces the bandwidth requirement and accelerates the learning process, since aggregation happens naturally as part of signal transmission.
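
The following toy simulation illustrates the superposition principle under simplifying assumptions (real-valued channels, perfect channel state information at the devices, channel-inversion pre-scaling with no power constraint); all quantities are invented for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    K, d = 10, 1000                        # number of devices, model dimension
    updates = rng.normal(size=(K, d))      # local model updates x_k

    h = rng.rayleigh(scale=1.0, size=K)    # real-valued fading gains h_k
    tx = updates / h[:, None]              # channel inversion: device k sends x_k / h_k
                                           # (transmit-power limits ignored in this toy)
    noise = rng.normal(scale=0.05, size=d) # receiver noise n
    y = (h[:, None] * tx).sum(axis=0) + noise  # superposition: y = sum_k h_k (x_k / h_k) + n

    ota_avg = y / K                        # server's estimate of the average update
    true_avg = updates.mean(axis=0)
    print(np.abs(ota_avg - true_avg).max())    # small: error is only the noise scaled by 1/K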

Selected related publications:

Distributionally Robust Federated Learning

Federated learning, by its nature, involves training models across multiple devices, with or without coordination by a parameter server; the devices hold their local data samples without exchanging them. This approach safeguards privacy but introduces challenges due to data heterogeneity, i.e., different data distributions across the network. Distributionally robust federated learning enhances the robustness of the learning process against such distribution heterogeneity: it optimizes the model not just for the average case but also for worst-case data distributions, ensuring that performance remains consistent and reliable across the different devices.
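
As a minimal sketch of the underlying min-max objective, the example below alternates gradient descent on the model with exponentiated-gradient ascent on the mixture weights over per-device losses; the synthetic data, step sizes, and variable names are illustrative only.

    import numpy as np

    rng = np.random.default_rng(2)
    K, n, d = 3, 50, 5                     # devices, samples per device, features
    # Heterogeneous devices: each draws data from its own linear model.
    true_w = [rng.normal(size=d) for _ in range(K)]
    X = [rng.normal(size=(n, d)) for _ in range(K)]
    y = [X[k] @ true_w[k] for k in range(K)]

    w = np.zeros(d)                        # shared model
    lam = np.ones(K) / K                   # mixture weights over device distributions

    for _ in range(500):
        losses = np.array([np.mean((X[k] @ w - y[k]) ** 2) for k in range(K)])
        grads = [2 * X[k].T @ (X[k] @ w - y[k]) / n for k in range(K)]
        w -= 0.05 * sum(lam[k] * grads[k] for k in range(K))  # descent on the model
        lam *= np.exp(0.1 * losses)                           # ascent on the mixture
        lam /= lam.sum()

    print(losses.round(3), lam.round(3))   # lam concentrates on the hardest device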

Selected related publications:

User-friendly Web Application for FL

Federated Learning (FL) is an approach to conducting machine learning without centralizing training data in a single entity, for reasons of privacy, confidentiality, or data volume, but also to avoid the single-point-of-failure problem. However, solving FL problems raises issues beyond those of centralized machine learning, including setting up communication infrastructure between workers, coordinating the learning process, aggregating results from the different workers, and handling data heterogeneity. We aim to develop a user-friendly Web-based application that tackles these issues and helps the community run FL experiments more smoothly.
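
For instance, the aggregation step can be as simple as a weighted average of worker models (FedAvg-style); the sketch below is illustrative and is not the application's actual implementation.

    import numpy as np

    def aggregate(worker_models, sample_counts):
        """Average worker model weights, weighted by local dataset size."""
        total = sum(sample_counts)
        return sum(n / total * w for w, n in zip(worker_models, sample_counts))

    # Three workers with different amounts of local data.
    workers = [np.array([1.0, 2.0]), np.array([2.0, 0.0]), np.array([0.0, 1.0])]
    counts = [100, 50, 50]
    global_model = aggregate(workers, counts)  # -> array([1.  , 1.25])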

Projects

ADROIT6G - Distributed Artificial Intelligence-Driven Open & Programmable Architecture for 6G Networks 

Role: PM (University of Oulu)

Duration: 01/2023-12/2025

Objective: To provide revolutionary research foundations for low-TRL (technology readiness level) technology advancements in preparation for the upcoming 6G network architectures.

ANDROMEDA - Advanced and novel hydrology models based on enhanced data collection, analysis, and prediction 

Role: PM (University of Oulu)

Duration: 05/2023-05/2024

Objective: To apply ICT and ML techniques to improve the accuracy of hydrological models, optimize water management, and predict extreme events.

Supervision

PhD students

Master's students

Research Assistants/Interns

Teaching

Strategic Learning in Wireless Networks: Theory and Applications

Role: Co-instructor

Semester: Fall 2022


The course focuses on strategic learning in wireless networks, its underlying principles, and its applications to wireless communication. It provides a detailed description of how distributed agents learn their own utilities over time and update their transmission strategies under uncertainty and dynamics.
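
As a toy example of the kind of strategy update studied in the course, the sketch below has a transmitter choose among channels via multiplicative-weights (Hedge) updates; the channel statistics and step size are invented, and full utility feedback on all channels is assumed (bandit variants such as EXP3 use only the chosen channel's feedback).

    import numpy as np

    rng = np.random.default_rng(3)
    n_channels, eta, T = 4, 0.1, 2000
    mean_utility = np.array([0.2, 0.5, 0.4, 0.7])  # unknown to the agent
    weights = np.ones(n_channels)

    for _ in range(T):
        strategy = weights / weights.sum()          # mixed strategy over channels
        utilities = rng.binomial(1, mean_utility)   # noisy 0/1 utility feedback
        weights *= np.exp(eta * utilities)          # reward channels that performed well

    strategy = weights / weights.sum()
    print(strategy.round(3))                        # mass concentrates on the best channel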

Probability and Random Processes

Role: Teaching Assistant

Semester: Fall 2017, Spring 2018, Fall 2018


Topics include moment-generating and characteristic functions, generating functions and the Laplace transform, inequalities, bounds, convergence and limit theorems, random processes, linear time-invariant filtering of stationary random processes, Gaussian processes, Poisson and renewal processes, discrete-time Markov chains, continuous-time Markov chains, random walks, Wiener processes (Brownian motion), minimum mean-squared-error estimation, and smoothing and prediction of random processes.

Engineering Mathematics

Role: Teaching Assistant

Semester: Fall 2016


Topics include line integrals in the complex plane, two integration methods, Cauchy's integral theorem, separable ODEs, exact ODEs, non-exact ODEs with integrating factors, first-order linear ODEs, and second-order linear ODEs with constant coefficients.

Ordinary Differential Equations and Vector Analysis

Role: Teaching Assistant

Semester: Fall 2015


Topics include single-variable calculus, 3D functions, partial derivatives and the gradient, line integrals, double integrals and Green's theorem, surface and volume integrals (the divergence and Stokes' theorems), complex numbers and functions of a complex variable, and first- and second-order ordinary differential equations.

Numerical Optimization

Role: Teaching Assistant

Semester: Spring 2015


Topics include optimization problem formulation, linear programming, simplex, interior-point, and conjugate-gradient methods, gradient descent and Newton descent methods, quadratic programming, least-squares minimization, and constrained/unconstrained optimization.