SIGGRAPH 2015

A Computational Approach for Obstruction-Free Photography

Tianfan Xue1*        Michael Rubinstein2*        Ce Liu2*        William T. Freeman1,2

1MIT CSAIL            2Google Research


* Part of this work was done while Michael Rubinstein and Ce Liu were at Microsoft Research, and when Tianfan Xue was an intern at Microsoft Research New England.


In this paper we present an algorithm for taking pictures through reflective or occluding elements such as windows and fences. The input to our algorithm is a set of images taken by the user while slightly scanning the scene with a camera/phone (a), and the output is two images: a clean image of the (desired) background scene, and an image of the reflected or occluding content (b). Our algorithm is fully automatic, can run on mobile devices, and allows taking pictures through common visual obstacles, producing images as if they were not there.

Abstract

We present a unified computational approach for taking photos through reflecting or occluding elements such as windows and fences. Rather than capturing a single image, we instruct the user to take a short image sequence while slightly moving the camera. Differences that often exist in the relative position of the background and the obstructing elements from the camera allow us to separate them based on their motions, and to recover the desired background scene as if the visual obstructions were not there. We show results on controlled experiments and many real and practical scenarios, including shooting through reflections, fences, and raindrop-covered windows.


@article{Xue2015ObstructionFree,
  author = {Tianfan Xue and Michael Rubinstein and Ce Liu and William T. Freeman},
  title = {A Computational Approach for Obstruction-Free Photography},
  journal = {ACM Transactions on Graphics (Proc. SIGGRAPH)},
  year = {2015},
  volume = {34},
  number = {4},
}


Paper: PDF (32 MB)
Technical Report: PDF
Presentation: PPTX (slides only, 153 MB), ZIP (slides + video, 762 MB)

SIGGRAPH 2015 Supplemental Video (if you cannot access YouTube, download the video at 1080p or 720p):

Data

Input sequences and results: .zip (280 MB)

The zip file contains the following files for each sequence:
<sequence_name>_fullseq.mp4: The captured video sequence.
<sequence_name>_input.avi: The frames sampled from the video and used for processing, stored as a 1 fps AVI file.
<sequence_name>_input.png: The reference input frame (the middle frame among the sampled frames).
<sequence_name>_bg.png: The recovered background layer (the layer with the dominant motion).
<sequence_name>_bg.avi: The recovered background layer warped according to the estimated background motion, stored as a 1 fps AVI file.
<sequence_name>_rf.png: The recovered obstruction layer.
<sequence_name>_rf.avi: The recovered obstruction layer warped according to the estimated foreground motion, stored as a 1 fps AVI file.

"gallery" is an image sequence from "Image-Based Rendering for Scenes with Reflections", Sinha et al. 2012, and doesn't contain a "_fullseq.mp4" file.



Controlled experiments with ground-truth decomposition: .zip (32 MB)

The zip file contains the following files for each sequence:
<sequence_name>_input.png: The input sequence; the third frame is selected as the reference frame.
<sequence_name>_bg.png: The ground-truth background layer.
<sequence_name>_rf.png: The ground-truth obstruction layer.
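Since this archive provides ground-truth layers, recovered results can be scored against them. Below is a minimal sketch of one such comparison using PSNR as an illustrative metric (the paper's own evaluation protocol may differ); the function and test images here are our own.

```python
import numpy as np

def psnr(recovered, ground_truth, peak=255.0):
    """Peak signal-to-noise ratio between two uint8 images, in dB."""
    diff = recovered.astype(np.float64) - ground_truth.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A reconstruction with a single perturbed pixel scores lower than a perfect one.
img = np.full((4, 4), 128, dtype=np.uint8)
noisy = img.copy()
noisy[0, 0] += 16
print(psnr(img, img), psnr(noisy, img))
```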


Acknowledgements
We thank Dr. Rick Szeliski for useful discussions and feedback, and the anonymous SIGGRAPH reviewers for their comments. We thank Katie Bouman for narrating our video. Tianfan Xue is supported by Shell Research and ONR MURI 6923196.