A Decomposition into Low-rank plus Additive Tensors Approach

RPCA Tensor Decomposition

Background subtraction (BS) is the art of separating moving objects from their background. Background modeling (BM) is one of the main steps of the BS process. Several subspace learning (SL) algorithms based on matrix and tensor tools have been used to perform the BM of scenes. However, many SL algorithms work in batch mode, which increases memory consumption when the data size is very large. Moreover, these algorithms are not suitable for streaming data when the full size of the data is unknown. In this work, we propose an incremental tensor subspace learning method that uses only a small part of the entire data and updates the low-rank model incrementally when new data arrive. In addition, the multi-feature model allows us to build a robust low-rank background model of the scene. Experimental results show that the proposed method achieves promising results for the background subtraction task.

A. Sobral, C. Baker, T. Bouwmans, E. Zahzah, “Incremental and Multi-feature Tensor Subspace Learning applied for Background Modeling and Subtraction”, International Conference on Image Analysis and Recognition, ICIAR 2014, October 2014.

Stochastic RPCA Tensor Decomposition

Background subtraction (BS) is a very important task for various computer vision applications. Robust tensor recovery or decomposition based on Higher-Order Robust Principal Component Analysis (HORPCA) shows strong potential for BS. The background sequence is modeled by an underlying low-dimensional subspace, called the low-rank component, while the sparse tensor constitutes the foreground (FG) mask. However, traditional tensor-based decomposition methods are sensitive to outliers and, because of their batch optimization, must process the entire high-dimensional data at once. As a result, earlier approaches suffer from huge memory usage and computational cost, which is not desirable for real-time systems. To tackle these challenges, we apply the idea of stochastic optimization to tensors for robust low-rank and sparse error separation. In our scheme, only one sample per time instance is processed from each unfolding matrix of the tensor to separate the low-rank and sparse components, and the low-dimensional basis is updated when a new sample is revealed. This iterative optimization scheme for multi-dimensional tensor data is independent of the number of samples and hence reduces the memory and computational complexity. Experimental evaluations on both synthetic and real-world datasets demonstrate the robustness and comparative performance of our approach against its batch counterpart, without sacrificing online processing.
S. Javed, T. Bouwmans, S. Jung, “Stochastic Decomposition into Low Rank and Sparse Tensor for Robust Background Subtraction”, International Conference on Imaging for Crime Detection and Prevention, ICDP 2015, July 2015.

A. Sobral, S. Javed, S. Jung, T. Bouwmans, E. Zahzah, "Online Stochastic Tensor Decomposition for Background Subtraction in Multispectral Video Sequences", Workshop on Robust Subspace Learning and Computer Vision, ICCV 2015, Santiago, Chile, December 2015.
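The per-sample separation step can be sketched in the style of online/stochastic RPCA: for one vectorized sample from an unfolding, alternate between a ridge solve for the low-rank coefficients and soft-thresholding of the residual for the sparse part, then nudge the basis. This is a simplified OR-PCA-style illustration under assumed hyperparameters, not the paper's exact updates.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise shrinkage operator used for the sparse component."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def process_sample(L, z, lam1=0.1, lam2=0.1, iters=20):
    """Split one vectorized sample z into a low-rank part L @ r and a
    sparse part e, given the current basis L (d x k).
    Sketch only; the paper runs this per tensor-unfolding sample."""
    d, k = L.shape
    r = np.zeros(k)
    e = np.zeros(d)
    A = L.T @ L + lam1 * np.eye(k)        # ridge-regularized normal matrix
    for _ in range(iters):
        r = np.linalg.solve(A, L.T @ (z - e))   # fit coefficients
        e = soft_threshold(z - L @ r, lam2)     # absorb outliers
    return r, e

def update_basis(L, r, e, z, step=0.01, lam1=0.1):
    """One stochastic-gradient step on the basis after a sample is seen."""
    grad = -np.outer(z - L @ r - e, r) + lam1 * L
    return L - step * grad
```

Because each arriving sample is processed once and then discarded, memory usage depends only on the basis size, not on the number of frames.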

Low-rank Tensor Decomposition

This work presents an extension of an online tensor decomposition into low-rank and sparse components using a maximum-norm constraint. Since the maximum-norm regularizer is more robust than the nuclear norm against a large number of outliers, the proposed extended tensor-based decomposition framework with the maximum norm provides an accurate estimation of the background scene. Experimental evaluations on synthetic data as well as real datasets such as Scene Background Modeling Initialization (SBMI) show encouraging performance on the background modeling task compared to state-of-the-art approaches.

S. Javed, T. Bouwmans, S. Jung, “SBMI-LTD: Stationary Background Model Initialization based on Low-rank Tensor Decomposition”, ACM Symposium on Applied Computing, SAC 2017, 2017.
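A max-norm constraint is commonly enforced by factoring the low-rank term as U V^T and projecting the rows of each factor onto a Euclidean ball. The sketch below does this with a heuristic projected alternating least squares plus soft-thresholding on one matrix unfolding; the optimization scheme and all hyperparameters here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def clip_rows(M, bound):
    """Project each row of M onto the ball of radius `bound` -- the
    standard way to enforce a max-norm-style constraint on a factor."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    return M * np.minimum(1.0, bound / np.maximum(norms, 1e-12))

def maxnorm_lowrank_sparse(Z, rank=3, bound=4.0, lam=1.0, iters=50, seed=0):
    """Separate a matrix unfolding Z into a max-norm-constrained low-rank
    part U @ V.T and a sparse part S. Heuristic sketch only."""
    rng = np.random.default_rng(seed)
    m, n = Z.shape
    U = rng.standard_normal((m, rank))
    S = np.zeros_like(Z)
    for _ in range(iters):
        # Alternating least squares on the de-sparsified data, row-clipped.
        V = clip_rows(np.linalg.lstsq(U, Z - S, rcond=None)[0].T, bound)
        U = clip_rows(np.linalg.lstsq(V, (Z - S).T, rcond=None)[0].T, bound)
        # Sparse part: soft-threshold the residual of the low-rank fit.
        R = Z - U @ V.T
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return U @ V.T, S
```

The row clipping is what distinguishes this from a plain factorized RPCA: large spiky entries would require factor rows with large norms, so the bound discourages the low-rank term from absorbing outliers, which is the intuition behind preferring the max norm over the nuclear norm here.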