This webpage introduces GRASTA (Grassmannian Robust Adaptive Subspace Tracking Algorithm), an efficient online algorithm for low-rank subspace tracking that is robust both to highly incomplete information and to sparse corruption by outliers.
Our work is a robust counterpart of GROUSE, which is very efficient for low-rank subspace tracking from highly incomplete information. Though the two algorithms share the same core technique, stochastic gradient descent on the Grassmannian, GRASTA incorporates the augmented Lagrangian of an l1-norm loss function into the Grassmannian optimization framework to mitigate the corruption by outliers in the subspace update at each gradient step.
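For intuition, here is a minimal sketch of the kind of rank-one geodesic gradient step that both algorithms take on the Grassmannian. This is an illustrative reconstruction, not the package code: it uses a plain least-squares fit for the weights, as in GROUSE, whereas GRASTA replaces the fit and the residual with their robust l1/ADMM counterparts.

```python
import numpy as np

def grouse_style_step(U, idx, v_obs, eta=0.1):
    """One geodesic gradient step on the Grassmannian (GROUSE-style sketch).

    U     : n x d matrix with orthonormal columns (current subspace basis)
    idx   : indices of the observed entries of the streaming vector
    v_obs : the observed entries v_Omega
    eta   : step size (an assumed constant here)
    """
    n, d = U.shape
    U_om = U[idx, :]
    # Least-squares fit of the observed entries onto the current subspace.
    w, *_ = np.linalg.lstsq(U_om, v_obs, rcond=None)
    p = U @ w                        # predicted full vector (lies in range(U))
    r = np.zeros(n)
    r[idx] = v_obs - U_om @ w        # residual, zero-padded to full length
    sigma = np.linalg.norm(r) * np.linalg.norm(p)
    if sigma < 1e-12:
        return U                     # vector already lies on the subspace
    # Move along the geodesic determined by the rank-one gradient;
    # this rotation keeps the columns of U exactly orthonormal.
    t = eta * sigma
    step = ((np.cos(t) - 1.0) * p / np.linalg.norm(p)
            + np.sin(t) * r / np.linalg.norm(r))
    return U + np.outer(step, w / np.linalg.norm(w))
```

Because the residual is orthogonal to the current subspace, the update is a rotation in the plane spanned by the prediction and the residual, so no re-orthonormalization is needed.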
As an online algorithm, GRASTA can estimate and track non-stationary subspaces when the streaming data vectors are corrupted with outliers. We apply GRASTA to the problems of robust matrix completion and real-time separation of background from foreground in video. In this second application, we show that GRASTA performs high-quality separation of moving objects from background at exceptional speeds: In one popular benchmark video example, GRASTA achieves a rate of 57 frames per second, even when run in MATLAB on a personal laptop.
We have posted our GRASTA paper on arXiv and presented our work at CVPR 2012 in Providence. For more detailed information, please refer to our papers. If you have any questions about our work, please email us or feel free to visit our websites. Jun He, Laura Balzano, and Arthur Szlam have been working on various extensions of GROUSE to robust PCA since they met at IPAM.
[July 10, 2015]: I have put the C++ source code and its MEX interface on GitHub: https://github.com/hejunzz/grasta_mex
[Oct. 6, 2012]: We have released GRASTA v1.2.0 with the modified Alg. 2 for accelerated convergence. Its performance can be found in the robust matrix completion table on this page. The code can be downloaded HERE.
[Sep. 12, 2012]: The performance of GRASTA is greatly improved by a modified ADMM solver for Alg. 2. We have updated the robust matrix completion table on this webpage; the new version will be released soon.
[June 1, 2012]: Added the real-time video separation demo used in our papers.
[June 1, 2012] : GRASTA v1.1 and later are open-source under LGPL.
[Jan. 29, 2012]: Added the pre-compiled MEX file of Alg. 2 for 32-bit Windows XP.
Unlike GROUSE, which solves a least-squares estimation (l2-norm loss function), GRASTA solves the natural l1-norm problem at each subspace update step:
F_GRASTA(S; t) = min_w ‖U_{Ω_t} w − v_{Ω_t}‖_1
Critically, we use the augmented Lagrangian as the subspace loss function; regarded as a function of U, it is differentiable everywhere and can be easily incorporated into the Grassmannian optimization framework.
L(U) = ‖s*‖_1 + ⟨y*, U_{Ω_t} w* + s* − v_{Ω_t}⟩ + (ρ/2) ‖U_{Ω_t} w* + s* − v_{Ω_t}‖_2^2
Thus, if the observed data vector is corrupted by outliers, GRASTA can eliminate the corruption via the augmented Lagrangian of the l1-norm minimization subproblem.
An l2-based best fit to the subspace can be influenced arbitrarily by just one large outlier; this in turn leads to an incorrect subspace update in the GROUSE algorithm. Figure 1 illustrates the failure of GROUSE, and the success of GRASTA, when these sparse outliers are added only at periodic time intervals. We can see that GROUSE is significantly thrown off, even though the outliers occur in an isolated vector. This clearly illustrates our motivation for adding robustness to the subspace tracking algorithm.
In order to show the performance of robust subspace tracking, we provide Figure 2. Unlike the experimental setting of Figure 1, we add 10% outliers into each generated data vector. We then examine the behavior of GRASTA when the subspace experiences a sudden dramatic change. At intervals of 5000 vectors, we randomly change the true subspace to a new random subspace. The results are in Figure 2. Again from these simulations we see that GRASTA successfully identifies the subspace change and tracks the subspace.
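The synthetic setup for this kind of experiment can be sketched as follows. The dimensions, outlier magnitudes, and phase length here are illustrative assumptions, not the exact parameters behind Figure 2.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d = 200, 5              # ambient dimension and subspace rank (assumed)
outlier_frac = 0.10        # 10% of the entries of each vector are corrupted
n_per_phase = 5000         # the planted subspace changes every 5000 vectors

def planted_stream(n_phases=2):
    """Yield streaming vectors from a planted subspace with sparse outliers.

    Each phase draws a fresh random subspace, simulating the sudden
    subspace changes the tracker must follow.
    """
    for _ in range(n_phases):
        # new random d-dimensional subspace of R^n
        U, _ = np.linalg.qr(rng.standard_normal((n, d)))
        for _ in range(n_per_phase):
            v = U @ rng.standard_normal(d)
            mask = rng.random(n) < outlier_frac
            v[mask] += 10.0 * rng.standard_normal(mask.sum())  # gross outliers
            yield v, mask
```

A tracker consuming this stream sees clean low-rank structure plus entrywise corruption, with an abrupt subspace switch between phases.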
As an online low-rank subspace tracking algorithm that is robust to both incomplete information and outliers, GRASTA can easily be used for robust matrix completion or robust PCA (RPCA), with a significant speed-up over other algorithms.
The following table shows the results of completing 1000×1000 and 2000×2000 matrices from 30% observed entries under different problem settings. The timings were taken on a MacBook Pro laptop with a 2.3 GHz Intel Core i5 CPU and 4 GB RAM, running MATLAB.
Since GRASTA can successfully identify the low-rank subspace despite corruption, separating the outliers from the corrupted data vector is easy. The following two figures show the outlier separation and data column recovery results.
Due to the online nature of GRASTA, we can easily apply it to real-time separation of foreground objects from the background in video surveillance. Imagine we had a video with only the background: when the columns of a single frame are stacked into a single column, several frames together will lie in a low-dimensional subspace. In fact, if the background is completely static, the subspace would be one-dimensional. That subspace can be estimated in order to identify and separate the foreground objects; if the background is dynamic, subspace tracking is necessary. GRASTA is uniquely suited to this burgeoning application.
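The stacking argument is easy to check numerically. In this sketch every frame is a globally rescaled copy of a single background image (a stand-in for illumination changes), so the matrix of stacked frames has rank one:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, n_frames = 48, 64, 30
background = rng.random((h, w))   # a fixed synthetic "background image"

# Stack each frame's columns into one long vector, and collect the
# frames as columns of a single matrix. With a static background
# (up to a global brightness scale), this matrix has rank one.
frames = np.column_stack([
    ((0.9 + 0.2 * rng.random()) * background).ravel(order="F")
    for _ in range(n_frames)
])

singular_values = np.linalg.svd(frames, compute_uv=False)
```

A real background is only approximately static, so in practice the frames lie near a subspace of small dimension rather than exactly on a line, which is why a rank parameter of around 5 is used in the demos below.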
Since the background is static, we first use GRASTA to identify the background, and then use only Alg. 2 to separate the foreground from it.
Here we demonstrate that GRASTA can effectively track the right subspace in video with a dynamic background. We pan a "virtual camera" from left to right and right to left through the video to simulate a dynamic background: periodically, the virtual camera pans by 20 pixels. The idea of the virtual camera is illustrated in Figure 7.
Fig. 8: Real-time dynamic background tracking and foreground separation. At time t = 101, the virtual camera pans 20 pixels to the right. We show how quickly GRASTA adapts to the new subspace at t = 100, 105, . . . , 125. The first row is the original video frame at each time; the middle row is the tracked background; the bottom row is the separated foreground.
Video Demo Comparisons on Dataset "Lobby" by Different Online Algorithms
Video 1 (GRASTA). Rank=5. Separating 1546 frames with 20.0% information costs 32.27 seconds, 47.90 fps.
Video 2 (Median-Filter). Moving window = 20. Separating 1546 frames by Median-filter costs 37.17 seconds, 41.59 fps.
Video 3 (ReProCS(mod)). ReProCS(mod) on the full resolution video. Separating 1546 frames costs 4498 seconds, 0.33 fps.
The currently released GRASTA package, v1.2.0, can be downloaded here. It includes the core algorithms and two sample codes, for robust matrix completion and real-time video background/foreground separation.
./video_demo.m: the sample code for real-time video foreground/background separation
./bgtraining.m: a function used by video_demo to train a rough subspace before the separation task
./bgfg_seperation_grasta.m: the demo function used for real-time separation
As we state in the paper, at each subspace update step we need to solve a least absolute deviations problem on the partially observed data vector (the so-called Alg. 2 in the paper). To improve performance, we have implemented a MEX version in C++ using Armadillo (a C++ linear algebra library). We have tested GRASTA extensively on different problems and found that the MEX version of Alg. 2 improves the overall performance of GRASTA by about 5×. Currently we only offer the pre-compiled MEX file for 64-bit Mac OS X and 32-bit Windows XP. We encourage users to compile the MEX file themselves:
If compilation succeeds, set OPTIONS.USE_MEX = 1 to use the MEX version of Alg. 2; otherwise set OPTIONS.USE_MEX = 0 to use the MATLAB implementation.
We are pleased to release GRASTACam, an OpenCV demo written by Arthur Szlam and presented by Laura Balzano at CVPR 2012. The current GRASTACam is written in C with the Intel MKL library. The code can be downloaded here.
Here are the files and instructions for installing the GRASTACam demo on a Mac OS X computer; the instructions are very similar for other operating systems. Finally, you can replace MKL with a free/open equivalent (e.g. ATLAS), but you will have to alter the code and makefile.
1) Get CMake and do the command-line install.
2) Get, or be sure you have, Xcode for the compilers.
3) Get Intel MKL. Copy the install directory path; it looks something like this: /opt/intel/composer_xe_2011_sp55.55.55
4) Get OpenCV.
5) Then use the makefile to make whatever you want to make. For example, one target will make a three-window version of the GRASTA demo so that you can see the background, foreground, and original camera capture.
6) Set your dynamic library variables for MKL depending on the path you copied in #3 above:
7) Now you can run the GRASTA demo! You have two options. Option 1 is to run:
This will give you three windows: the camera capture, the background estimate, and the difference between the two. The other option is:
This gives you just the camera capture and the background estimate.
In both, you can type 'u' to increase the step size and 'd' to decrease it; you can type 'm' to increase the sampling rate and 'l' (a lowercase L) to decrease it. The new step size or sampling rate is shown in the terminal window.
The following code is taken from our GRASTA package for demonstrating robust matrix completion.
This pseudo-MATLAB code shows how to incorporate GRASTA into your video application in a streaming fashion. For detailed information, please refer to our code package.
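In the same spirit, here is a language-agnostic sketch of a per-frame separation loop against a tracked basis. The function names are hypothetical, not the package API; a plain least-squares fit on the subsampled pixels stands in for the robust l1 solver (Alg. 2), and in GRASTA the basis U would also be updated after each frame.

```python
import numpy as np

def separate_stream(frames, U, sample_frac=0.2, seed=0):
    """Illustrative per-frame background/foreground split.

    frames      : iterable of 1-D frame vectors (frame columns stacked)
    U           : n x d orthonormal basis for the background subspace
    sample_frac : fraction of pixels observed per frame (20% as in the demo)
    """
    rng = np.random.default_rng(seed)
    n, d = U.shape
    m = max(d, int(sample_frac * n))
    for v in frames:
        idx = rng.choice(n, size=m, replace=False)   # subsample the pixels
        # Least-squares stand-in for the robust Alg. 2 weight fit.
        w, *_ = np.linalg.lstsq(U[idx, :], v[idx], rcond=None)
        background = U @ w            # reconstruct the full background
        foreground = v - background   # the residual is the moving object
        yield background, foreground
```

Each frame is processed independently from a small random subsample of pixels, which is what makes this style of separation fast enough to run at tens of frames per second.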
The source code for the GRASTA package can be distributed and/or modified under the terms of the GNU Lesser General Public License (LGPL) as published by the Free Software Foundation, either version 3 of the License or (at your option) any later version.
For other usage, please contact the authors.
Jun He, Laura Balzano, and Arthur Szlam. Incremental gradient on the Grassmannian for online foreground and background separation in subsampled video. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2012. (PDF) (Slides by Laura)