About the project
With the rapid evolution of Artificial Intelligence, distributed machine learning methods such as Federated Learning (FL) are becoming ubiquitous in present-day technology. In FL, devices train neural network models while the data stays local, and a central entity then aggregates the model updates into a global model. Split Learning (SL) has recently been proposed as a way to enable resource-constrained devices to participate in this learning framework. In a nutshell, SL splits the model into parts and allows clients (devices) to offload the largest part as a processing task to a computationally powerful helper (an edge server, the cloud, or other devices). Essentially, SL is a paradigm shift: a more flexible version of FL that alleviates the load on the devices by better utilizing other available resources in the network. However, this method comes with optimization challenges, since networking decisions must be made to orchestrate the SL operations and overcome the communication overhead.

Despite the increasing attention towards SL, current algorithms focus on minimizing the training time and improving energy efficiency, but without offering performance guarantees. To make SL efficient, OPALS will fill this crucial gap by providing algorithms with provable guarantees. In particular, OPALS focuses on three main research axes. First, it studies the well-established problem of minimizing the training time, in search of the first algorithm with guarantees. Second, it seeks ways of leveraging SL to reduce the carbon footprint of distributed learning. Third, it investigates how SL could be employed in a decentralized setting, in view of the increasing importance of swarm intelligence. OPALS will employ mathematical modelling and cutting-edge optimization methods to achieve these goals. As a result, OPALS will pave the way to better resource utilization and, thus, efficient SL exploitable for technological innovation.
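For readers unfamiliar with the mechanics, one SL training step can be sketched as follows. This is a minimal, illustrative example, not OPALS code: the layer sizes, learning rate, and squared-error loss are arbitrary assumptions, and the "network" between client and helper is simulated by plain function calls. The client runs the first layers locally, sends the cut-layer activations ("smashed data") to the helper, and receives back only the gradient at the cut layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Client-side model part (input -> cut layer) and
# helper-side model part (cut layer -> output); sizes are illustrative.
W_client = rng.normal(size=(4, 8)) * 0.1
W_helper = rng.normal(size=(8, 1)) * 0.1

def train_step(x, y, lr=0.1):
    """One Split Learning step with a squared-error loss (illustrative)."""
    global W_client, W_helper
    n = len(x)
    # --- client: forward through its part, send activations to the helper
    a = np.tanh(x @ W_client)              # cut-layer activations ("smashed data")
    # --- helper: forward through the remaining layers, update its part
    y_hat = a @ W_helper
    err = y_hat - y                        # d(loss)/d(y_hat) for 0.5 * MSE
    grad_a = err @ W_helper.T              # gradient sent back to the client
    W_helper -= lr * (a.T @ err) / n
    # --- client: backprop through its own part using the received gradient
    grad_z = grad_a * (1 - a ** 2)         # derivative of tanh
    W_client -= lr * (x.T @ grad_z) / n
    return float(0.5 * np.mean(err ** 2))

# Toy data: learn a fixed linear target
x = rng.normal(size=(32, 4))
y = x @ rng.normal(size=(4, 1))
losses = [train_step(x, y) for _ in range(200)]
```

Note that the client never shares raw data, and the helper never sees the client-side weights; per step, the two exchange only the cut-layer activations and the corresponding gradient, which is exactly the communication that SL orchestration must account for.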
Host organization: Telefónica Innovación Digital
Info on Cordis: https://cordis.europa.eu/project/id/101210495
News
December 2025: Our paper "Makespan Minimization in Split Learning: From Theory to Practice" was accepted at IEEE INFOCOM 2026. This paper studies the problem of minimizing the training time (makespan) in Split Learning and proposes a polynomial-time 5-approximation algorithm, as well as a practical heuristic that outperforms baselines. Check out the preprint here and the evaluation code here.
November 2025: Our paper on "Data Heterogeneity and Forgotten Labels in Split Federated Learning" was accepted at AAAI 2026! This work studies the phenomenon of catastrophic forgetting in Split Federated Learning and proposes a novel mitigation method that outperforms other methods in the literature. Check out the preprint here.
A high-level document describing the problem and the proposed method can be found below: