Distributed/Federated Optimization and Learning
Research in this context pursues an ambitious and challenging goal: provably convergent, low-complexity, distributed solution methods for a very general class of (continuous) nonconvex (nonsmooth) programs defined over networks/graphs. Instances of such a general formulation arise in many fields of engineering, including information processing in sensor networks (e.g., parameter estimation, detection, and localization) and distributed machine learning (e.g., LASSO, logistic regression, dictionary learning, matrix completion, tensor factorization, neural network training), to name a few. Common to these problems is the need for completely decentralized computation/optimization. For instance, when data are collected/stored in a distributed network (e.g., in clouds), sharing local information with a central processor is either infeasible or inefficient, owing to the large size of the network and volume of data, time-varying network topology, energy constraints, and/or privacy issues.

Our main contribution in this context is a new convergent and distributed algorithmic framework for the aforementioned general formulation, termed the in-Network succEssive conveX approximaTion (NEXT) algorithm. The crux of the framework is a novel convexification-decomposition technique that hinges on (primal) Successive Convex Approximation (SCA) methods, while leveraging dynamic consensus as a mechanism to distribute the computation and propagate the needed information over the network. The framework has since been applied to several important problems in distributed machine learning, such as: (i) neural network training; (ii) data clustering; (iii) tensor decomposition; (iv) training of graph convolutional networks.
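The mechanics of the framework can be sketched numerically. The snippet below is an illustrative toy, not the published implementation: each agent minimizes a strongly convex surrogate of the global objective (its own loss plus a linear term built from a dynamic-consensus estimate of the other agents' gradients), then mixes both its iterate and its gradient tracker with its neighbors. Quadratic local losses are assumed so the surrogate step has a closed form (the actual framework targets general nonconvex losses); the function names, the parameters `tau` and `alpha`, and the ring topology are all illustrative choices.

```python
import numpy as np

def grad(x, a):
    # Gradients of the local losses f_i(x) = 0.5 * (x - a_i)^2.
    # A convex toy choice so the surrogate minimizer is closed-form;
    # the general framework allows nonconvex (nonsmooth) local losses.
    return x - a

def next_sketch(a, W, steps=500, tau=1.0, alpha=0.5):
    """Scalar sketch of NEXT-style iterations over an n-agent network.

    W is a doubly stochastic mixing matrix matching the graph;
    tau is the proximal weight of the surrogate, alpha the step size.
    """
    n = len(a)
    x = np.zeros(n)                 # local estimates x_i
    g = grad(x, a)
    y = g.copy()                    # dynamic-consensus tracker of the average gradient
    for _ in range(steps):
        pi = n * y - g              # local estimate of sum_{j != i} grad f_j
        # Closed-form minimizer of the strongly convex surrogate
        # f_i(z) + pi_i * (z - x_i) + (tau / 2) * (z - x_i)^2:
        z = (a + tau * x - pi) / (1.0 + tau)
        v = x + alpha * (z - x)     # smoothed local update
        x_new = W @ v               # consensus (mixing) step
        g_new = grad(x_new, a)
        y = W @ y + (g_new - g)     # gradient-tracking (dynamic consensus) update
        x, g = x_new, g_new
    return x

# Toy usage: 4-node ring with Metropolis weights; the network-wide
# minimizer of sum_i f_i is the average of the a_i, here 2.5.
a = np.array([1.0, 2.0, 3.0, 4.0])
W = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]]) / 3.0
x = next_sketch(a, W)               # all agents approach 2.5
```

Note the two roles of the mixing matrix `W`: it drives the estimates toward consensus and, in the `y` update, keeps each agent's tracker aligned with the network-wide average gradient, which is what allows purely local surrogate minimization to solve the global problem.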
Selected papers:
C. Battiloro, P. Di Lorenzo, M. Merluzzi, and S. Barbarossa, Lyapunov-based Optimization of Edge Resources for Energy-Efficient Adaptive Federated Learning, IEEE Transactions on Green Communications and Networking, vol. 7, no. 3, pp. 265-280, March 2023.
S. Scardapane, I. Spinelli, and P. Di Lorenzo, Distributed Training of Graph Convolutional Networks, IEEE Transactions on Signal and Information Processing over Networks, vol. 7, pp. 87-100, 2021.
P. Di Lorenzo, S. Barbarossa, and S. Sardellitti, Distributed Signal Processing and Optimization based on In-Network Subspace Projections, IEEE Transactions on Signal Processing, vol. 68, pp. 2061-2076, 2020.
R. Altilio, P. Di Lorenzo, and M. Panella, Distributed Data Clustering over Networks, Pattern Recognition, vol. 93, pp. 603-620, Sept. 2019.
P. Di Lorenzo and G. Scutari, NEXT: In-Network Nonconvex Optimization, IEEE Transactions on Signal and Information Processing over Networks, vol. 2, no. 2, pp. 120-136, June 2016.
S. Scardapane, R. Fierimonte, P. Di Lorenzo, M. Panella, and A. Uncini, Distributed Semi-supervised Support Vector Machines, Neural Networks, vol. 80, pp. 43-52, Aug. 2016.
S. Scardapane and P. Di Lorenzo, A Framework for Parallel and Distributed Training of Neural Networks, Neural Networks, vol. 91, pp. 42-54, July 2017.
S. Scardapane and P. Di Lorenzo, Stochastic Training of Neural Networks via Successive Convex Approximations, IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 10, pp. 4947-4956, Oct. 2018.