Minimization Problem

The minimization of the decomposition into low-rank plus additive matrices in the case of K=2 can be formulated as follows:
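A generic statement of the K = 2 case, with A the observed matrix, L the low-rank component and S the additive component, can be written as below; the placeholder loss functions f_low and f_add and the trade-off parameter λ are generic notation, not necessarily the symbols used in the survey:

```latex
\min_{L,\,S} \; f_{\mathrm{low}}(L) + \lambda\, f_{\mathrm{add}}(S)
\quad \text{subject to} \quad A = L + S
```

Here f_low enforces the low-rank structure of L, f_add enforces the structure of the additive matrix S (typically sparsity), and λ > 0 balances the two terms.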

This minimization problem can be NP-hard, and convex or not depending on the constraints and the loss functions used. In practice, when the problem is NP-hard and/or non-convex, the constraints are relaxed and the loss functions are replaced to obtain a tractable and convex problem. For example, the original formulation in RPCA used rank(.) and the l0-norm as loss functions for L and S, respectively. As this problem is NP-hard, the formulation is relaxed by replacing them with the nuclear norm and the l1-norm. To avoid confusion, the model that minimizes rank functions is named the original model, and the model that minimizes nuclear norms is named the relaxed model. Thus, the corresponding minimization problem can be formulated with norms to be convex and solvable as follows:
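Following the Principal Component Pursuit literature that the text describes, the relaxed convex problem can be stated as below, where ||L||_* is the nuclear norm (sum of singular values) and ||S||_1 is the entrywise l1-norm:

```latex
\min_{L,\,S} \; \|L\|_{*} + \lambda \|S\|_{1}
\quad \text{subject to} \quad A = L + S
```

A common choice for the regularization parameter is λ = 1/√max(m, n) for an m × n matrix A.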

Moreover, the minimization problem can be written in its Lagrangian form as follows:
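One standard way to write the constrained problem in Lagrangian form is the augmented Lagrangian used by ALM-type solvers; the exact form in the survey may differ, but a consistent statement is:

```latex
\mathcal{L}(L, S, Y) = \|L\|_{*} + \lambda \|S\|_{1}
+ \langle Y,\, A - L - S \rangle
+ \frac{\mu}{2}\, \|A - L - S\|_{F}^{2}
```

where Y is the matrix of Lagrange multipliers and μ > 0 is the penalty parameter on the constraint residual A − L − S.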

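The relaxed problem above can be minimized by alternating the proximal operators of the nuclear norm and the l1-norm within an inexact augmented Lagrangian loop. The sketch below, in NumPy, uses the common default λ = 1/√max(m, n) and a geometric μ-schedule; these parameter choices are illustrative defaults from the RPCA literature, not values taken from the survey:

```python
import numpy as np

def soft_threshold(X, tau):
    # Entrywise soft-thresholding: proximal operator of the l1-norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

def rpca_pcp(A, lam=None, tol=1e-7, max_iter=500):
    """Decompose A into low-rank L plus sparse S by solving the relaxed
    problem  min ||L||_* + lam * ||S||_1  s.t.  A = L + S  (inexact ALM)."""
    m, n = A.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))  # common default from the literature
    mu = 1.25 / np.linalg.norm(A, 2)    # initial penalty (spectral norm of A)
    mu_bar, rho = mu * 1e7, 1.5         # cap and growth rate for mu
    L = np.zeros_like(A)
    S = np.zeros_like(A)
    Y = np.zeros_like(A)                # Lagrange multiplier matrix
    norm_A = np.linalg.norm(A)
    for _ in range(max_iter):
        L = svd_threshold(A - S + Y / mu, 1.0 / mu)
        S = soft_threshold(A - L + Y / mu, lam / mu)
        R = A - L - S                   # constraint residual
        Y = Y + mu * R
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(R) <= tol * norm_A:
            break
    return L, S
```

In background/foreground separation, the columns of A are the vectorized video frames: L then captures the (low-rank) background and S the (sparse) moving foreground.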
Fair Use Policy

As this website provides much information drawn from my research, please cite the following survey papers:

T. Bouwmans, A. Sobral, S. Javed, S. Jung, E. Zahzah, "Decomposition into Low-rank plus Additive Matrices for Background/Foreground Separation: A Review for a Comparative Evaluation with a Large-Scale Dataset", Computer Science Review, Volume 23, pages 1-71, February 2017. [pdf]

T. Bouwmans, E. Zahzah, "Robust PCA via Principal Component Pursuit: A Review for a Comparative Evaluation in Video Surveillance", Special Issue on Background Models Challenge, Computer Vision and Image Understanding, CVIU 2014, Volume 122, pages 22-34, May 2014. [pdf]

Note: My publications are available on Academia, ResearchGate, Researchr, ORCID and PublicationList.