The main principle of any splitting method is divide and conquer. In image processing and machine learning applications, it is common to encounter objective functions written as the sum of two functions with different properties. Splitting methods handle each function separately, exploiting the structure of the problem and thus yielding tractable algorithms.
Methods in this family include, but are not limited to, the forward-backward method (also known as proximal-gradient), the Douglas-Rachford splitting method, and even ADMM (the Alternating Direction Method of Multipliers). We could also include the Progressive Hedging algorithm for stochastic optimization in this category, since it can be seen as a particular case of the Douglas-Rachford method.
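To make the forward-backward (proximal-gradient) idea concrete, here is a minimal sketch on a lasso instance. The problem data, step size, and all names are illustrative choices of mine, not part of the text above: the forward step handles the smooth least-squares term via its gradient, and the backward step handles the nonsmooth l1 term via its proximal operator.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient(A, b, lam, step, n_iters=500):
    """Forward-backward splitting for the lasso problem
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    A forward (gradient) step on the smooth least-squares part is
    followed by a backward (proximal) step on the nonsmooth l1 part."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                         # forward step
        x = soft_threshold(x - step * grad, step * lam)  # backward step
    return x

# Toy instance: recover a sparse vector from noiseless random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
x_hat = proximal_gradient(A, b, lam=0.1, step=step)
```

With the step size at most 1/L, each iteration is guaranteed not to increase the objective, which is why splitting the two terms yields such a simple and stable scheme.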
My research focuses on analyzing splitting methods in nonconvex settings, including convergence guarantees and saddle-point avoidance.
The classical setting in optimization involves convex functions. However, modern applications call for richer models that often involve nonconvex functions. Weak convexity is a mild extension of convexity: a function is ρ-weakly convex if it becomes convex after adding the quadratic (ρ/2)||x||^2, which allows much of the convex analysis machinery to be reused. My research focuses on exploring methods in weakly convex optimization, as well as variational analysis for weakly convex functions.
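A small toy illustration of the notion (my own example, not from the text above): f(x) = -cos(x) is 1-weakly convex, since f''(x) = cos(x) >= -1, so adding the quadratic x^2/2 produces a convex function. The snippet checks this numerically via midpoint convexity on a grid.

```python
import numpy as np

# f(x) = -cos(x) is 1-weakly convex: f''(x) = cos(x) >= -1, so
# g(x) = f(x) + x^2 / 2 has g''(x) = cos(x) + 1 >= 0, i.e. g is convex
# and the usual convex-analysis tools apply to it.
f = lambda x: -np.cos(x)
g = lambda x: f(x) + 0.5 * x ** 2

# Numerical sanity check of midpoint convexity on a grid:
# g((a+b)/2) <= (g(a) + g(b)) / 2 for neighboring grid points a, b.
pts = np.linspace(-10.0, 10.0, 2001)
a, b = pts[:-1], pts[1:]
assert np.all(g(0.5 * (a + b)) <= 0.5 * (g(a) + g(b)) + 1e-12)

# f alone fails the same check (it is nonconvex where f'' = cos(x) < 0,
# e.g. near x = pi), confirming the quadratic correction is what convexifies it.
assert np.any(f(0.5 * (a + b)) > 0.5 * (f(a) + f(b)))
```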
Consider an undirected graph with an agent located at each node. These agents cooperate to minimize a global cost function while only having access to their own and their neighbors' information. The setting is decentralized in that there is no central coordinator 'supervising' the overall network.
My research focuses on extending existing algorithms, in particular on analyzing distributed methods of proximal-gradient type.
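The decentralized setting above can be sketched with a toy proximal-gradient-type iteration. Everything here is an illustrative construction of mine (the ring topology, the mixing matrix W, the quadratic local costs, and the shared l1 term): each agent averages only with its neighbors, takes a step on its own local gradient, and applies a local proximal step; no coordinator ever sees the whole network.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Toy problem: 4 agents on a ring, agent i holds the local smooth cost
# f_i(x) = 0.5 * (x - c_i)^2, and all share the nonsmooth term lam * |x|.
c = np.array([1.0, 2.0, 3.0, 4.0])

# Symmetric doubly stochastic mixing matrix encoding the ring topology:
# each agent averages only with itself and its two neighbors.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros(4)        # each agent keeps its own scalar estimate
alpha, lam = 0.1, 0.05
for _ in range(2000):
    grad = x - c                                            # local gradients only
    x = soft_threshold(W @ x - alpha * grad, alpha * lam)   # mix, descend, prox
```

With a constant step size this iteration reaches approximate consensus near the global minimizer (up to an error controlled by the step size and the network heterogeneity), which is the typical behavior of decentralized gradient schemes of this kind.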