Distributed/Federated Optimization and Learning

Research in this context pursues an ambitious and challenging goal: provably convergent, low-complexity, distributed solution methods for a very general class of (continuous) nonconvex (nonsmooth) programs defined over networks/graphs. Instances of such a general formulation arise in many fields of engineering, including sensor network information processing (e.g., parameter estimation, detection, and localization) and distributed machine learning (e.g., LASSO, logistic regression, dictionary learning, matrix completion, tensor factorization, neural network training), to name a few. Common to these problems is the need for completely decentralized computation/optimization: when data are collected/stored in a distributed network (e.g., in clouds), sharing local information with a central processor is either infeasible or not economical/efficient, owing to the large size of the network and volume of data, time-varying network topology, energy constraints, and/or privacy issues.

Our main contribution in this context is a new convergent and distributed algorithmic framework for the aforementioned general formulation, which we term the in-Network succEssive conveX approximaTion (NEXT) algorithm. The crux of the framework is a novel convexification-decomposition technique that hinges on (primal) Successive Convex Approximation (SCA) methods, while leveraging dynamic consensus as a mechanism to distribute the computation and propagate the needed information over the network. The framework has since been applied to several important problems in distributed machine learning, such as: (i) neural network training, (ii) data clustering, (iii) tensor decomposition, and (iv) training of graph convolutional networks.

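To make the two ingredients concrete, the following is a minimal, illustrative sketch of a NEXT-style iteration, not the algorithm as published: each agent minimizes a strongly convex surrogate of its own (nonconvex) cost plus a linear term accounting for the rest of the network, then exchanges variables via consensus and tracks the network-wide gradient via dynamic consensus. The proximal-linear surrogate, ring topology, synthetic least-squares losses with a nonconvex regularizer, and all parameter values (`tau`, `alpha`, `lam`) are assumptions chosen only for this example.

```python
# Illustrative NEXT-style sketch (NOT the published algorithm): proximal-linear
# surrogate, ring graph, and synthetic data are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 6, 10
A = [rng.standard_normal((20, dim)) for _ in range(n_agents)]   # local data
b = [rng.standard_normal(20) for _ in range(n_agents)]
lam = 0.1                                                        # regularization weight

def grad_f(i, x):
    """Gradient of agent i's local nonconvex cost: LS fit + nonconvex regularizer."""
    smooth = 2 * A[i].T @ (A[i] @ x - b[i])
    reg = lam * 2 * x / (1 + x**2) ** 2
    return smooth + reg

# Doubly stochastic mixing matrix for a ring graph.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = W[i, i] = 1 / 3

tau, alpha = 50.0, 0.1                 # proximal weight and step size (assumed values)
x = np.zeros((n_agents, dim))          # local copies of the decision variable
g = np.array([grad_f(i, x[i]) for i in range(n_agents)])
y = g.copy()                           # dynamic-consensus tracker of the average gradient

for k in range(500):
    # 1) Convexification: each agent minimizes a strongly convex surrogate of f_i
    #    plus a linear term built from the tracked gradients of the other agents.
    #    With a proximal-linear surrogate the minimizer has a closed form.
    pi = n_agents * y - g
    x_tilde = x - (g + pi) / tau
    z = x + alpha * (x_tilde - x)

    # 2) Consensus step on the local variables.
    x_new = W @ z

    # 3) Dynamic consensus (gradient tracking) propagates global gradient information.
    g_new = np.array([grad_f(i, x_new[i]) for i in range(n_agents)])
    y = W @ y + (g_new - g)
    x, g = x_new, g_new

print("disagreement across agents:", np.linalg.norm(x - x.mean(axis=0)))
print("norm of average gradient:  ", np.linalg.norm(g.mean(axis=0)))
```

In this sketch, all computation uses only local data and one-hop exchanges through `W`; richer (e.g., partially linearized) surrogates and nonsmooth terms are handled by the full framework but are omitted here for brevity.
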
Selected papers: