Robust image reconstruction from multi-view measurements

If you use this code, please cite the following paper: G. Puy and P. Vandergheynst, "Robust image reconstruction from multiview measurements," SIAM Journal on Imaging Sciences, vol. 7, no. 1, pp. 128–156, 2014.

Abstract: We propose a novel method to accurately reconstruct a set of images representing a single scene from a few linear multi-view measurements. Each observed image is modeled as the sum of a background image and a foreground image. The background image is common to all observed images but undergoes geometric transformations, as the scene is observed from different viewpoints. In this paper, we assume that these geometric transformations are represented by a few parameters, e.g., translations, rotations, or affine transformations. The foreground images differ from one observed image to another and are used to model possible occlusions of the scene. The proposed reconstruction algorithm jointly estimates the images and the transformation parameters from the available multi-view measurements. The ideal solution of this multi-view imaging problem minimizes a non-convex functional, and the reconstruction technique is an alternating descent method built to minimize this functional. The convergence of the proposed algorithm is studied, and conditions under which the sequence of estimated images and parameters converges to a critical point of the non-convex functional are provided. Finally, the efficiency of the algorithm is demonstrated using numerical simulations for applications such as compressed sensing and super-resolution.
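The abstract's alternating-descent idea can be sketched in a heavily simplified toy setting. The sketch below is NOT the paper's implementation: it uses 1D signals instead of images, integer cyclic shifts as the only geometric transformation (found by grid search rather than the paper's continuous parameter descent), Gaussian compressed-sensing measurements, a few ISTA iterations for the l1-penalized foreground step, and an arbitrary penalty weight `lam`. It only illustrates the structure "observed = measurements of (transformed background + sparse foreground)" and the three alternating updates.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_views, m = 32, 3, 24  # signal length, number of views, measurements per view

# Ground truth: a common background, cyclically shifted in each view,
# plus a sparse per-view foreground modeling an "occlusion".
b_true = np.sin(2 * np.pi * np.arange(n) / n)
shifts_true = [0, 2, 5]
f_true = [np.zeros(n) for _ in range(n_views)]
f_true[1][10] = 1.0  # occlusion in view 1

A = [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(n_views)]
y = [A[i] @ (np.roll(b_true, shifts_true[i]) + f_true[i]) for i in range(n_views)]

def shifted_op(Ai, s):
    # A_i applied to a signal shifted by s equals a column-permuted A_i.
    return Ai[:, (np.arange(n) + s) % n]

lam = 0.05  # foreground l1 weight (illustrative choice, not from the paper)

def objective(b, f, shifts):
    return sum(
        0.5 * np.sum((y[i] - A[i] @ (np.roll(b, shifts[i]) + f[i])) ** 2)
        + lam * np.sum(np.abs(f[i]))
        for i in range(n_views)
    )

b = np.zeros(n)
f = [np.zeros(n) for _ in range(n_views)]
shifts = [0] * n_views
obj0 = objective(b, f, shifts)

for _ in range(20):
    # (1) Background step: exact least squares with shifts and foregrounds fixed.
    A_stack = np.vstack([shifted_op(A[i], shifts[i]) for i in range(n_views)])
    r_stack = np.concatenate([y[i] - A[i] @ f[i] for i in range(n_views)])
    b = np.linalg.lstsq(A_stack, r_stack, rcond=None)[0]

    # (2) Foreground step: a few ISTA iterations per view on the l1 subproblem.
    for i in range(n_views):
        r = y[i] - A[i] @ np.roll(b, shifts[i])
        step = 1.0 / np.linalg.norm(A[i], 2) ** 2
        for _ in range(10):
            g = f[i] - step * A[i].T @ (A[i] @ f[i] - r)
            f[i] = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

    # (3) Transformation step: grid search over integer shifts (stand-in for
    # descent over continuous transformation parameters).
    for i in range(n_views):
        res = [np.sum((y[i] - A[i] @ (np.roll(b, s) + f[i])) ** 2) for s in range(n)]
        shifts[i] = int(np.argmin(res))

obj1 = objective(b, f, shifts)
print(obj1 < obj0)  # each step is non-increasing, so the objective decreases
```

Each of the three updates is non-increasing on the joint functional (the grid search includes the current shift, and ISTA with a 1/L step size decreases its subproblem), which mirrors the monotone alternating-descent structure whose convergence the paper analyzes.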

Acknowledgements:

  • This code uses the SPARCO toolbox, which can be downloaded at http://www.cs.ubc.ca/labs/scl/sparco/. The toolbox is described in: E. van den Berg, M. P. Friedlander, G. Hennenfent, F. Herrmann, R. Saab, and Ö. Yılmaz, "Sparco: A testing framework for sparse reconstruction", Technical Report TR-2007-20, Dept. of Computer Science, University of British Columbia, 2007.

  • The original images used in the robust image alignment example are available at http://perception.csl.illinois.edu/matrix-rank/rasl.html. These images were originally used in Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma, "RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images", IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 11, pp. 2233–2246, 2012.

  • The original images used in the compressed sensing example come from the castle-R20 dataset available at http://cvlab.epfl.ch/~strecha/multiview/rawMVS.html. This dataset was originally used in C. Strecha, W. von Hansen, L. Van Gool, P. Fua, and U. Thoennessen, "On benchmarking camera calibration and multi-view stereo for high resolution imagery", in IEEE Conf. Computer Vision and Pattern Recognition, 2008, pp. 1–8.

  • The images used in the super-resolution example are available at http://users.soe.ucsc.edu/~milanfar/software/sr-datasets.html (credit: Peyman Milanfar).