Research Statement: In the past decade, we have witnessed the power of convex relaxations. Many nonconvex or NP-hard problems, e.g., sparse recovery, matrix completion, and phase retrieval, which were formidable in the past, have been proven solvable in polynomial time, (most of them) without sacrificing much sample complexity, by either relaxing the objective function or lifting the problem into a higher dimension. The theory is beautiful and well developed, notably by Candes, Tao, and Tropp. However, I feel this line of research is coming to an end: (i) Many convex relaxations are bottlenecked by computational complexity. For instance, for the well-known collaborative filtering problem, it is intractable to minimize the nuclear norm of a huge matrix by performing full SVDs thousands of times. (ii) Some problems are either seemingly unapproachable by convex relaxation due to intrinsic symmetries (e.g., dictionary learning and blind deconvolution), or suffer from a substantial sample complexity gap (e.g., sparse PCA, and simultaneously low-rank and sparse problems).

In contrast, many heuristic nonconvex algorithms, such as simple alternating minimization, are computationally efficient, scale to huge datasets, and empirically work well. There have been a few endeavors along this line of research that provide theoretical guarantees for solving nonconvex problems to global optimality. However, the current theory is far from mature: (i) the assumptions are usually idealized or too stringent for practical purposes; (ii) the analysis is usually case by case, and requires a smart initialization that depends on model assumptions. Currently, I am studying geometric approaches and using tools from manifold optimization to attack problems of this kind. Below is a summary of recent research efforts in this line that I find interesting (shamelessly borrowed from Ju Sun's website):
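To make the contrast concrete, here is a minimal sketch of the kind of heuristic nonconvex method mentioned above: alternating least squares for matrix completion, which avoids the repeated full SVDs that nuclear-norm minimization requires. All function and variable names (and the ridge parameter `reg`) are my own illustration, not taken from any specific paper.

```python
import numpy as np

def als_matrix_completion(M, mask, rank, n_iters=50, reg=1e-3):
    """Fit M ~ U @ V.T on the observed entries (mask == True) by
    alternating ridge-regularized least-squares updates of U and V.
    This is a heuristic sketch; reg and n_iters are illustrative choices."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(n_iters):
        # Fix V; each row of U has a closed-form ridge solution.
        for i in range(m):
            A = V[mask[i]]
            U[i] = np.linalg.solve(A.T @ A + reg * np.eye(rank),
                                   A.T @ M[i, mask[i]])
        # Fix U; symmetrically update each row of V.
        for j in range(n):
            A = U[mask[:, j]]
            V[j] = np.linalg.solve(A.T @ A + reg * np.eye(rank),
                                   A.T @ M[mask[:, j], j])
    return U, V

# Usage: recover a random rank-2 matrix from ~60% of its entries.
rng = np.random.default_rng(1)
truth = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
mask = rng.random(truth.shape) < 0.6
U, V = als_matrix_completion(truth * mask, mask, rank=2)
err = np.linalg.norm(U @ V.T - truth) / np.linalg.norm(truth)
```

Each inner step is a small `rank x rank` linear solve, so the per-iteration cost scales with the number of observed entries rather than with a full SVD of the matrix, which is precisely why such methods remain practical at scale despite lacking the clean guarantees of the convex approach.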
