GRASTA

This webpage introduces GRASTA (Grassmannian Robust Adaptive Subspace Tracking Algorithm), an efficient online algorithm for low-rank subspace tracking that is robust both to highly incomplete information and to sparse corruption by outliers.

We have posted our GRASTA paper on arXiv and presented our work at CVPR 2012 in Providence. For more detailed information, please refer to our papers. If you have questions about our work, please email us or feel free to visit our websites. Jun He, Laura Balzano, and Arthur Szlam have been working on various extensions of GROUSE to robust PCA since they met at IPAM.

Update

[July 10, 2015]: I have put the C++ source code and its mex interface on GitHub: https://github.com/hejunzz/grasta_mex

[Nov. 21, 2012]: We release GRASTACam, an OpenCV demo with a C implementation of GRASTA, written by Arthur Szlam. The code can be downloaded HERE. Please refer to the GRASTACam section for detailed information.

[Oct. 6, 2012]: We release GRASTA v.1.2.0 with the modified Alg. 2 for accelerated convergence. Its performance can be found in the robust MC table on this page. The code can be downloaded HERE.

[Sep. 12, 2012]: The performance of GRASTA is greatly improved by a modified ADMM solver for Alg. 2. We have updated the robust matrix completion table on this webpage; the new version will be released soon.

[June 1, 2012]: Added the real-time video separation demo used in our papers.

[June 1, 2012]: GRASTA v1.1 and later are open-source under the LGPL.

[Jan. 29, 2012]: Added the pre-compiled Mex-file of Alg. 2 for 32-bit Windows XP.

The Key Point

Unlike GROUSE, which solves a least-squares estimation problem (an l2-norm loss function), GRASTA solves the natural l1-norm problem at each subspace update step:

F_{\mathrm{GRASTA}}(S; t) = \min_{w} \| U_{\Omega_t} w - v_{\Omega_t} \|_1

Critically, we use the augmented Lagrangian as the subspace loss function; regarded as a function of U, it is differentiable everywhere and can easily be incorporated into the Grassmannian optimization framework:

\mathcal{L}(U) = \| s \|_1 + \langle y, \, U_{\Omega_t} w + s - v_{\Omega_t} \rangle + \frac{\rho}{2} \, \| U_{\Omega_t} w + s - v_{\Omega_t} \|_2^2

Consequently, if the observed data vector is corrupted by outliers, GRASTA can eliminate the corruption by solving the augmented Lagrangian of this l1-norm minimization subproblem.
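For intuition, here is a minimal Matlab sketch of the ADMM iteration for this l1 subproblem (a toy illustration with our own names and defaults; the released admm_srp.m is the reference implementation of Alg. 2):

function [w, s, y] = admm_l1_sketch(Uomega, vomega, rho, maxIter, tol)
% Minimal ADMM sketch for  min_{w,s} ||s||_1  s.t.  Uomega*w + s = vomega
P = pinv(Uomega);                % cached pseudoinverse of the subsampled basis
s = zeros(size(Uomega,1), 1);    % sparse outlier estimate
y = zeros(size(Uomega,1), 1);    % dual variable (the y in the Lagrangian)
for k = 1:maxIter
    w = P * (vomega - s - y/rho);           % least-squares update of the weights
    r = vomega - Uomega*w - y/rho;          % residual seen by the sparse term
    s = sign(r) .* max(abs(r) - 1/rho, 0);  % soft-thresholding, the prox of l1
    h = Uomega*w + s - vomega;              % constraint violation
    y = y + rho*h;                          % dual ascent step
    if norm(h) < tol, break; end
end
end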

An l2-based best fit to the subspace can be influenced arbitrarily by just one large outlier; this in turn leads to an incorrect subspace update in the GROUSE algorithm. Figure 1 illustrates the failure of GROUSE, and the success of GRASTA, when sparse outliers are added only at periodic time intervals. We can see that GROUSE is significantly thrown off, despite the outliers occurring in an isolated vector. This clearly illustrates our motivation for adding robustness to the subspace tracking algorithm.

To show the performance of robust subspace tracking, we provide Figure 2. Unlike the experimental setting of Figure 1, we add 10% outliers to each generated data vector. We then examine the behavior of GRASTA when the subspace experiences a sudden dramatic change: every 5000 vectors, we randomly change the true subspace to a new random subspace. The results are in Figure 2. Again from these simulations we see that GRASTA successfully identifies the subspace change and tracks the subspace.
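For reference, a rough Matlab sketch of how such a test stream can be generated (the sizes and outlier magnitudes here are our own illustrative choices, not the exact settings of the figures):

ambient = 500; r = 5; T = 20000;            % illustrative dimensions
U_true = orth(randn(ambient, r));           % current true subspace
for t = 1:T
    if mod(t, 5000) == 0
        U_true = orth(randn(ambient, r));   % sudden subspace change
    end
    v = U_true * randn(r, 1);               % clean vector in the subspace
    p = randperm(ambient);
    hit = p(1 : round(0.1*ambient));        % 10% outlier locations
    v(hit) = v(hit) + 5*randn(numel(hit), 1);  % sparse corruption
    % ... subsample v and feed it to the tracker here
end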

As an online algorithm, GRASTA can estimate and track non-stationary subspaces when the streaming data vectors are corrupted with outliers. We apply GRASTA to the problems of robust matrix completion and real-time separation of background from foreground in video. In this second application, we show that GRASTA performs high-quality separation of moving objects from background at exceptional speeds: In one popular benchmark video example, GRASTA achieves a rate of 57 frames per second, even when run in MATLAB on a personal laptop.

Our work is a robust counterpart of GROUSE, which is very efficient for low-rank subspace tracking from highly incomplete information. Though the two algorithms share the same core, stochastic gradient descent on the Grassmannian, GRASTA incorporates the augmented Lagrangian of the l1-norm loss function into the Grassmannian optimization framework to alleviate corruption by outliers in the subspace update at each gradient step.
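In code, one such gradient step has roughly the following shape (a hedged sketch with our own variable names; the exact constants and derivation are in the paper):

% U: n x r orthonormal basis; omega: observed row indices; v_omega: observations;
% w, s, y: weights, sparse fit, and dual variable from the ADMM solve; eta: step size.
gamma1 = y + rho * (U(omega,:)*w + s - v_omega);  % residual-plus-dual direction on Omega
gamma_full = zeros(size(U,1), 1);
gamma_full(omega) = gamma1;                       % zero-pad to the ambient space
gamma = gamma_full - U*(U(omega,:)'*gamma1);      % project onto the tangent space
sigma = norm(gamma) * norm(w);                    % scale of the rank-one gradient
U = U + ((cos(eta*sigma) - 1) * U*(w/norm(w)) ...
       - sin(eta*sigma) * gamma/norm(gamma)) * (w/norm(w))';  % geodesic step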

Fig.1 Robust subspace tracking comparison between GRASTA and GROUSE

Fig.2 The performance of robust subspace adaptation with 10% outliers. At times 5000, 10000, 15000, and 20000, the subspace undergoes a sudden change.

Applications

Robust Matrix Completion

As an online low-rank subspace tracking algorithm that is robust to both incomplete information and outliers, GRASTA is easy to use for robust matrix completion or RPCA (robust principal component analysis), with a significant speed-up over other algorithms.

The following table shows the results of completing 1000×1000 and 2000×2000 matrices from 30% observed entries under different problem settings. All timings were taken on a MacBook Pro laptop with a 2.3 GHz Intel Core i5 CPU and 4 GB RAM, running Matlab.

Since GRASTA can successfully identify the low-rank subspace despite corruption, separating the outliers from a corrupted data vector is easy. The following two figures show the outlier separation and data column recovery results.
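Concretely, once a good estimate U_hat of the subspace is at hand, the split for a fully observed corrupted column v is just a fit followed by a subtraction (a sketch reusing the toy admm_l1_sketch above; the parameter values are our own):

[w, s] = admm_l1_sketch(U_hat, v, 2, 50, 1e-8);  % l1 fit of v to the subspace
v_lowrank = U_hat * w;                           % recovered data column (cf. Fig. 3)
v_outlier = v - v_lowrank;                       % separated outliers (cf. Fig. 4)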

Fig.3 The recovered data column

Fig.4 The separated outliers

Real-time Video Background/Foreground Separation

Due to the online nature of GRASTA, we can easily apply it to real-time separation of foreground objects from the background in video surveillance. Imagine we had a video with only the background: when the columns of a single frame of this video are stacked into a single vector, several frames together will lie in a low-dimensional subspace. In fact, if the background is completely static, the subspace would be one-dimensional. That subspace can be estimated in order to identify and separate the foreground objects; if the background is dynamic, subspace tracking is necessary. GRASTA is uniquely suited for this burgeoning application.
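As a toy check of this intuition (the frame size and noise level are our own):

bg = rand(144*176, 1);                             % a static background, vectorized
X = repmat(bg, 1, 50) + 1e-3*randn(144*176, 50);   % 50 noisy frames, one per column
fprintf('effective rank of X: %d\n', rank(X, 1e-2*norm(X)));   % prints 1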

Static Background

Since the background is static, we first use GRASTA to identify the background, and then use only Alg. 2 to separate the foreground from the background.

Fig.5 "Airport". Resolution 144*76, total frames 3584, FPS 57.3.

Fig.6 "Shopping Mall". Resolution 320*256, total frames 1286, FPS 32.9.

Dynamic Background

Here, we demonstrate that GRASTA can effectively track the right subspace in video with a dynamic background. We consider panning a "virtual camera" from left to right and right to left through the video to simulate a dynamic background. Periodically, the virtual camera pans 20 pixels. The idea of the virtual camera is illustrated in Figure 7.
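A hypothetical sketch of such a virtual camera, cropping a sliding window from a wider static scene (all names and sizes below are our own):

panorama = rand(144, 400);          % stand-in for a wide scene
win_cols = 176;                     % width of the virtual camera view
pan_px = 20;                        % pixels moved per pan event
x0 = 1;                             % current left edge of the window
for t = 1:300
    if mod(t, 100) == 0 && x0 + pan_px + win_cols - 1 <= size(panorama, 2)
        x0 = x0 + pan_px;           % periodically pan to the right
    end
    frame = panorama(:, x0 : x0 + win_cols - 1);
    % ... vectorize frame and feed it to grasta_stream as in the demos below
end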

Fig.7

Fig.8 Real-time dynamic background tracking and foreground separation. At time t = 101, the virtual camera pans 20 pixels to the right. We show how GRASTA quickly adapts to the new subspace at t = 100, 105, . . . , 125. The first row is the original video frame at each time; the middle row is the tracked background; the bottom row is the separated foreground.

Video Demo Comparisons on Dataset "Lobby" by Different Online Algorithms

Video 1 (GRASTA). Rank=5. Separating 1546 frames with 20.0% information costs 32.27 seconds, 47.90 fps.

Video 2 (Median-Filter). Moving window = 20. Separating 1546 frames by Median-filter costs 37.17 seconds, 41.59 fps.

Video 3 (ReProCS(mod)). ReProCS(mod) on the full resolution video. Separating 1546 frames costs 4498 seconds, 0.33 fps.

GRASTA Code Package

Package Files

The current GRASTA package release, v.1.2.0, can be downloaded here. It includes the core algorithm and two sample codes for robust matrix completion and real-time video background/foreground separation.

./grasta_stream.m : The core algorithm of GRASTA.

./grasta_mc.m : The sample function for robust matrix completion, showing how to incorporate grasta_stream into your own code.

./grasta_RobustMC_demo.m : The sample code for robust matrix completion.

./video_demo.m : The sample code for real-time video foreground/background separation.

./bgtraining.m : The function used by video_demo to train a rough subspace before the separation task.

./bgfg_seperation_grasta.m : The demo function used for real-time separation.

./admm_srp.m : The Matlab version of Alg.2 (ADMM solver).

./grasta_path.m : The auxiliary function to add GRASTA directories into Matlab search path.

./util/*.* : The auxiliary functions for the demos.

./make_video/*.* : Some Matlab scripts used in our CVPR paper for making the video demos.

./mex/mex_srp.mexmaci64 : The Mex-version of Alg.2 for 64 bit Mac OS X (ADMM solver).

./mex/mex_srp.mexw32 : The Mex-version of Alg.2 for 32 bit Windows XP (ADMM solver).

./mex/mex_srp.cpp : The source code of the Mex-version of Alg.2.

./mex/lapack-win32/ : The pre-compiled LAPACK and BLAS libraries for 32 bit Windows XP which are distributed under a BSD license. (Downloaded from http://www.fi.muni.cz/~xsvobod2/misc/lapack/)

How to Compile the Mex-version of Alg. 2

As we state in the paper, at each subspace update step we need to solve a least absolute deviations problem on the partially observed data vector, the so-called Alg. 2 in the paper. To improve performance, we have implemented a Mex-version in C++ using Armadillo (a C++ linear algebra library). We have tested GRASTA extensively on different problems and found that the Mex-version of Alg. 2 improves the overall performance of GRASTA by about 5x! Currently, we only offer pre-compiled Mex-files for 64-bit Mac OS X and 32-bit Windows XP. We encourage users to compile the Mex-file themselves:

    1. You should first download Armadillo ( http://arma.sourceforge.net/download.html ) and unzip it into a suitable directory.

    2. Uncomment the ARMA_USE_LAPACK and ARMA_USE_BLAS flags in /include/armadillo_bits/config.hpp.

    3. Install the proper LAPACK and BLAS libraries.

    4. Then run the following mex command in the Matlab environment; the -O flag lets the compiler optimize the code.

mex -O -Iarmadillo-2.4.2/include -llapack -lblas mex_srp.cpp

If compilation succeeds, set OPTIONS.USE_MEX = 1 to use the Mex-version of Alg. 2; otherwise set OPTIONS.USE_MEX = 0 to use the Matlab implementation.

GRASTA in OPENCV

We are pleased to release GRASTACam, an OpenCV demo written by Arthur Szlam and presented by Laura Balzano at CVPR 2012. The current GRASTACam is written in C with the Intel MKL library. The code can be downloaded here.

Instructions for Installing GRASTACam

Here are the files and instructions for installing the GRASTACam demo on a Mac OS X computer. The instructions are very similar for other operating systems. Finally, you can replace MKL with a free/open equivalent (e.g., ATLAS), but you will have to alter the code and the makefile.

1) Get cmake and do the command-line install.

2) Get or be sure you have Xcode for the compilers

3) Get Intel MKL. Copy the install directory path; it looks something like this: /opt/intel/composer_xe_2011_sp55.55.55

4) Get OpenCV.

To build OpenCV:

Take the downloaded OpenCV tar file and put it in a folder of your choice. At a command line, go into that folder. Then execute these commands one at a time:

mkdir build

cd build

cmake -G "Unix Makefiles" ..

make -j8

sudo make install

This last command installs OpenCV at /usr/local/lib and /usr/local/include.

5) Then use the makefile to make whatever you want to make. For example

make grastacam

will make a three-window version of the GRASTA demo so that you can see the background, foreground, and original camera capture.

6) Set your dynamic library variables for MKL depending on the path you copied in #3 above:

export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/opt/intel/composer_xe_2011_sp55.55.55/compiler/lib/

export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/opt/intel/composer_xe_2011_sp55.55.55/mkl/lib/

7) Now you can run the GRASTA demo! You have two options. Option 1 is to run:

./grastacam.o

This will give you three windows: the camera capture, the background estimate, and the difference between the two. The other option is:

./grastacam2window.o

This gives you just the camera capture and the background estimate.

In both, you can type 'u' to increase the step size and 'd' to decrease it; you can type 'm' to increase the sampling rate and 'l' (that's a lowercase ell) to decrease it. The new step size/sampling rate is shown in the terminal window.

Demo code

Robust matrix completion

The following code is taken from our GRASTA package to demonstrate robust matrix completion.

%% First we generate the data matrix and incomplete sample vector.
outlierFac = 0.1; SAMPLING = 0.3; noiseFac = 1 * 1e-6;

% Number of rows and columns.
numr = 1000; numc = 1000; probSize = [numr,numc];

% Rank of the underlying matrix.
truerank = 5;

% Size of vectorized matrix.
N = numr*numc;

% Number of samples that we will reveal.
M = round(SAMPLING * N);

% The left and right factors which make up our true data matrix Y.
YL = randn(numr,truerank);
YR = randn(numc,truerank);

% Select a random set of M entries of Y.
p = randperm(N); idx = p(1:M); clear p;
[I,J] = ind2sub([numr,numc],idx(1:M)); [J, inxs] = sort(J'); I = I(inxs)';

% S denotes the values of Y at the locations indexed by I and J.
S = sum(YL(I,:).*YR(J,:),2);

% Add dense Gaussian noise.
noise = noiseFac*max(S)*randn(size(S));
S = S + noise;

% Add sparse outliers.
outlier_magnitude = max(abs(S));
idx = randperm(length(S));
sparseIdx = idx(1:ceil(outlierFac*length(S)))';
Outlier_part = outlier_magnitude * randn(size(sparseIdx));
S(sparseIdx) = S(sparseIdx) + Outlier_part;

%% Now we set parameters for the algorithm.
% We set the number of cycles and put the necessary parameters into OPTIONS.
maxCycles = 10;          % the max cycles of robust MC
OPTIONS.QUIET = 1;       % suppress the debug information
OPTIONS.MAX_LEVEL = 20;  % for multi-level step-size
OPTIONS.MAX_MU = 15;     % for multi-level step-size
OPTIONS.MIN_MU = 1;      % for multi-level step-size
OPTIONS.DIM_M = numr;    % your data's ambient dimension
OPTIONS.RANK = truerank; % give your estimated rank
OPTIONS.ITER_MIN = 20;   % the min iterations allowed for ADMM at the beginning
OPTIONS.ITER_MAX = 20;   % the max iterations allowed for ADMM
OPTIONS.rho = 2;         % ADMM penalty parameter for accelerated convergence
OPTIONS.TOL = 1e-8;      % ADMM convergence tolerance
OPTIONS.USE_MEX = 1;     % if you do not have the mex-version of Alg. 2,
                         % please set OPTIONS.USE_MEX = 0
CONVERGE_LEVEL = 20;     % robust MC converges when status.level >= CONVERGE_LEVEL

%% Now run robust matrix completion.
t0 = clock;
[Usg, Vsg, Osg] = grasta_mc(I,J,S,numr,numc,maxCycles,CONVERGE_LEVEL,OPTIONS);
StochTime = etime(clock,t0);

%% Compute errors.
fprintf(' Results: \n');

% Select a random set of entries of Y.
p = randperm(N);
idx2 = p(1:M);
clear p;
ITest = mod(idx2(1:M)-1,numr)'+1;     % row values
JTest = floor((idx2(1:M)-1)/numr)'+1; % column values

% STest denotes the values of Y at the locations indexed by ITest and JTest.
STest = sum(YL(ITest,:).*YR(JTest,:),2);

% Relative error for robust MC.
dev = sum(Usg(ITest,:).*Vsg(JTest,:),2) - STest;
RelDevStochG = norm(dev,'fro') / norm(STest,'fro');
fprintf(1,'GRASTA robust mc rel.err on test = %7.2e in %4.2f seconds\n', RelDevStochG, StochTime);

Real-time Video Background/Foreground Separation

This pseudo-Matlab code shows how to incorporate GRASTA into your video application in a streaming fashion. For detailed information, please refer to our code package.

clear all; clc; close all;

%% Predefine your video parameters.
VIDEO_ROWS = 320;            % rows of each frame
VIDEO_COLS = 240;            % columns of each frame
DIM = VIDEO_ROWS * VIDEO_COLS;

%% GRASTA parameters.
subsampling = 0.1;           % how much partial information will be used
OPTIONS.RANK = 5;            % the estimated rank
OPTIONS.rho = 1.8;
OPTIONS.MAX_MU = 10000;      % set MAX_MU large enough for initial subspace training
OPTIONS.MIN_MU = 1;
OPTIONS.ITER_MAX = 20;
OPTIONS.DIM_M = DIM;         % your data's ambient dimension
OPTIONS.USE_MEX = 1;         % if you do not have the mex-version of Alg. 2,
                             % please set OPTIONS.USE_MEX = 0

OPTS = struct();             % initialize an empty struct for OPTS
status.init = 0;             % status of GRASTA at each iteration
U_hat = zeros(1);            % initialize a zero U_hat at the beginning

%% Initial subspace training [optional].
% You may use some frames to train the initial subspace.
OPTIONS.CONSTANT_STEP = 0;   % use adaptive step-size for initial subspace training
max_cycles = 10;
training_frames = 200;

for outiter = 1:max_cycles,
    frame_order = randperm(training_frames);
    for i = 1:training_frames,
        % Prepare the training frame.
        % I = imread(fname);
        I = double(rgb2gray(I));
        I = I/max(max(I));

        % Randomly subsample the frame I.
        M = round(subsampling * DIM);
        p = randperm(DIM);
        idx = p(1:M)';
        I_Omega = I(idx);

        % Train the background.
        [U_hat, status, OPTS] = grasta_stream(I_Omega, idx, U_hat, status, OPTIONS, OPTS);
    end
    fprintf('Training %d/%d ...\n', outiter, max_cycles);
end

%% Real-time background/foreground separation.
OPTIONS.CONSTANT_STEP = 1e-2;   % use a small constant step-size

while some_quit_condition,
    % Prepare the image I, whether it is saved as a file or captured by a camera.
    % I = imread(fname);
    I = double(rgb2gray(I));
    I = I/max(max(I));

    % Randomly subsample the frame I.
    M = round(subsampling * DIM);
    p = randperm(DIM);
    idx = p(1:M)';
    I_Omega = I(idx);

    % Track the background.
    [U_hat, status, OPTS] = grasta_stream(I_Omega, idx, U_hat, status, OPTIONS, OPTS);

    % bg_img is the background.
    bg_img = reshape(U_hat * status.w * status.SCALE, VIDEO_ROWS, VIDEO_COLS);

    % s_img is the separated foreground.
    s_hat = I(:) - U_hat * status.w * status.SCALE;
    s_img = reshape(s_hat, VIDEO_ROWS, VIDEO_COLS);
end

License

The source code for the GRASTA package can be distributed and/or modified under the terms of the GNU Lesser General Public License (LGPL) as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

For other usage, please contact the authors.

References

[1] Jun He, Laura Balzano, and John C. S. Lui. Online robust subspace tracking from partial information. Preprint available at http://arxiv.org/pdf/1109.3827v2, 2011.

[2] Jun He, Laura Balzano, and Arthur Szlam. Incremental gradient on the Grassmannian for online foreground and background separation in subsampled video. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2012. (PDF) (Slides by Laura)