Research Statement:

In the past decade, we have witnessed the power of convex relaxations. Many nonconvex or NP-hard problems, e.g., sparse recovery, matrix completion, and phase retrieval, which once seemed formidable, have been shown to be solvable in polynomial time, most of them without sacrificing much in sample complexity, by either relaxing the objective function or lifting the problem into a higher dimension. The theory is beautiful and well developed, especially by Candes, Tao, and Tropp. However, I feel this line of research is coming to an end:
(i) Many convex relaxations are bottlenecked by computational complexity. For instance, for the famous collaborative filtering problem, it is intractable to minimize the nuclear norm of a huge matrix when that requires thousands of full SVDs (see the sketch after this list).
(ii) Some problems either do not seem approachable by convex relaxation due to intrinsic symmetries (e.g., dictionary learning and blind deconvolution), or suffer from a substantial sample complexity gap (e.g., sparse PCA and simultaneously low-rank and sparse problems).
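
To make point (i) concrete, below is a minimal toy sketch, written by me and not taken from any of the papers listed later, of a proximal-gradient / singular value thresholding style iteration for nuclear norm minimization. Every iteration is dominated by a full SVD, which is what makes this approach impractical at collaborative-filtering scale.

    import numpy as np

    def svt_step(X, M, mask, lam, step):
        """One proximal-gradient step for
        0.5 * ||mask * (X - M)||_F^2 + lam * ||X||_*.
        The proximal operator of the nuclear norm is singular value
        soft-thresholding, which needs a full SVD of the iterate."""
        Y = X - step * (mask * (X - M))                    # gradient step on the data-fit term
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)   # full SVD every iteration
        s = np.maximum(s - step * lam, 0.0)                # soft-threshold the singular values
        return (U * s) @ Vt

    # Toy run: complete a 200 x 200 rank-5 matrix from 30% of its entries.
    rng = np.random.default_rng(0)
    m, n, r = 200, 200, 5
    M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    mask = rng.random((m, n)) < 0.3
    X = np.zeros((m, n))
    for _ in range(200):                                   # hundreds of full SVDs even for a toy problem
        X = svt_step(X, M, mask, lam=1.0, step=1.0)
    print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))

At 200 x 200 this runs in seconds, but the per-iteration SVD cost grows roughly like O(mn·min(m, n)), which is the bottleneck alluded to above.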

However, many heuristic nonconvex algorithms, such as the simple alternating minimization method (see the sketch after the list below), are computationally efficient, scale to huge datasets, and empirically work well. There have been a few endeavors along this line of research that provide theoretical guarantees for solving nonconvex problems to global optimality. However, the current theory is far from mature:
(i) The assumptions are usually idealized or too stringent for practical purposes;
(ii) The analysis is usually done case by case, and requires a smart initialization that typically depends on model assumptions.
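
To illustrate the kind of heuristic referred to above, here is a minimal alternating-minimization sketch for matrix completion; it is a toy version in the spirit of, but not identical to, the Jain-Netrapalli-Sanghavi algorithm listed below, and in particular it uses random rather than spectral initialization. Fix one factor, solve small least-squares problems for the other over the observed entries, and alternate.

    import numpy as np

    def altmin_complete(M, mask, r, iters=30, seed=0):
        """Alternating minimization for matrix completion.
        Model: M ~ U @ V.T with U (m x r), V (n x r).
        Each half-step solves one r-dimensional least-squares problem
        per row (or column), using only the observed entries."""
        rng = np.random.default_rng(seed)
        m, n = M.shape
        U = rng.standard_normal((m, r))
        V = rng.standard_normal((n, r))
        for _ in range(iters):
            # Fix V, solve for each row of U over that row's observed entries.
            for i in range(m):
                obs = mask[i]
                if obs.any():
                    U[i], *_ = np.linalg.lstsq(V[obs], M[i, obs], rcond=None)
            # Fix U, solve for each row of V (i.e. each column of M).
            for j in range(n):
                obs = mask[:, j]
                if obs.any():
                    V[j], *_ = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)
        return U, V

    # Toy run: 200 x 200 rank-5 matrix, 30% of entries observed.
    rng = np.random.default_rng(1)
    m, n, r = 200, 200, 5
    M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    mask = rng.random((m, n)) < 0.3
    U, V = altmin_complete(M, mask, r)
    print("relative error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))

Each sweep costs time roughly linear in the number of observed entries, which is why such methods scale; the catch, as noted in (ii), is that the existing guarantees typically hinge on a careful (e.g., spectral) initialization rather than the random one used in this sketch.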

Currently, I am studying geometric approaches and using tools from manifold optimization to attack problems of this kind (a toy sketch of this flavor follows the reading list). Below is a summary of recent research efforts along this line that I find interesting (shamelessly stolen from Ju Sun's website):

  • Matrix Completion 
  1. P. Jain, P. Netrapalli, Fast Exact Matrix Completion with Finite Samples. [Link]
  2. P. Netrapalli, U. N. Niranjan, S. Sanghavi, A. Anandkumar, P. Jain, Non-convex Robust PCA. [Link]
  3. M. Hardt, M. Wootters, Fast Matrix Completion without the Condition Number. [Link]
  4. M. Hardt, Understanding Alternating Minimization for Matrix Completion. [Link]
  5. P. Jain, P. Netrapalli, S. Sanghavi, Low-rank Matrix Completion using Alternating Minimization. [Link]
  • Phase Retrieval
  1. E. Candes, X. Li, M. Soltanolkotabi, Phase Retrieval via Wirtinger Flow: Theory and Algorithms. [Link]
  2. P. Netrapalli, P. Jain, S. Sanghavi, Phase Retrieval using Alternating Minimization. [Link]
  • Dictionary Learning
  1. S. Arora, A. Bhaskara, R. Ge, T. Ma, More Algorithms for Provable Dictionary Learning. [Link]
  2. A. Agarwal, A. Anandkumar, P. Netrapalli, A Clustering Approach to Learn Sparsely-Used Overcomplete Dictionaries. [Link]
  3. S. Arora, R. Ge, A. Moitra, New Algorithms for Learning Incoherent and Overcomplete Dictionaries. [Link]
  4. A. Agarwal, A. Anandkumar, P. Jain, P. Netrapalli, Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization. [Link]
  5. S. Arora, R. Ge, T. Ma, A. Moitra, Simple, Efficient, and Neural Algorithms for Sparse Coding. [Link]
  • Attacking Saddle Point Problems
  1. R. Ge, F. Huang, C. Jin, Y. Yuan, Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition. [Link]
  • Simultaneously Sparse and Low Rank Problem
  1. K. Lee, Y. Wu, Y. Bresler, Near Optimal Compressed Sensing of Sparse Rank-One Matrices via Sparse Power Factorization. [Link]
  • Burer-Monteiro Style Decomposition Algorithms
  1. C. De Sa, K. Olukotun, C. Ré, Global Convergence of Stochastic Gradient Descent for Some Nonconvex Matrix Problems. [Link]
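
To give a feel for the geometric viewpoint mentioned above, here is a toy example of my own, not taken from any of the papers listed: Riemannian gradient ascent on the unit sphere for the top-eigenvector problem max_x x'Ax subject to ||x|| = 1. The problem is nonconvex, yet simple first-order iterations on the manifold reliably find the global optimum; the interest in the harder problems above is in understanding when a similarly benign geometry holds.

    import numpy as np

    def riemannian_grad_ascent(A, iters=1000, step=0.2, seed=0):
        """Maximize f(x) = x^T A x over the unit sphere: project the
        Euclidean gradient onto the tangent space at x, take a step,
        and retract back to the sphere by normalization."""
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(A.shape[0])
        x /= np.linalg.norm(x)
        for _ in range(iters):
            g = 2 * A @ x                  # Euclidean gradient
            rg = g - (x @ g) * x           # Riemannian gradient (tangent projection)
            x = x + step * rg
            x /= np.linalg.norm(x)         # retraction onto the sphere
        return x

    # Toy check against the top eigenvector of a random symmetric matrix.
    rng = np.random.default_rng(2)
    B = rng.standard_normal((50, 50))
    A = (B + B.T) / 2
    A /= np.linalg.norm(A, 2)              # normalize the spectral norm
    x = riemannian_grad_ascent(A)
    w, V = np.linalg.eigh(A)
    print("alignment with top eigenvector:", abs(x @ V[:, -1]))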