Joint Motion Estimation and Blind Deblurring

Intra-Frame Deblurring by Leveraging Inter-Frame Camera Motion

-- Joint Motion and Blur Estimation

Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2015

Abstract: Camera motion introduces motion blur, degrading the quality of video. A video deblurring method is proposed based on two observations: (i) camera motion during the capture of each individual frame leads to motion blur; (ii) camera motion between frames yields inter-frame misalignment that can be exploited for blur removal. The proposed method leverages the information distributed across multiple video frames due to camera motion, jointly estimating the motion between consecutive frames and the blur within each frame. This joint analysis is crucial for achieving effective restoration, as it exploits the temporal information shared across frames. Extensive experiments are carried out on synthetic data as well as real-world blurry videos. Comparisons with several state-of-the-art methods verify the effectiveness of the proposed method.
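At a schematic level, the joint estimation described in the abstract can be viewed as minimizing a single energy over the latent sharp frame, the per-frame blur kernels, and the inter-frame motion. The formulation below is only an illustrative sketch consistent with the observation model of Figure 2, not the exact objective from the paper; the regularizers R_x, R_u and the weights lambda, eta are placeholders:

\[
\min_{x,\,\{h_k\},\,\{u_k\}} \; \sum_{k\in\{-1,0,1\}} \big\| y_k - h_k \ast W(u_k)\, x \big\|_2^2 \;+\; \lambda\, R_x(x) \;+\; \eta \sum_{k} R_u(u_k),
\]

where x is the latent sharp frame, y_k are the observed blurry frames, h_k is the blur kernel of observation y_k, W(u_k) is the warping induced by the optical flow u_k (with u_0 = 0 for the direct observation), and \ast denotes convolution.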

Figure 1. Illustration of the proposed approach. (a-b) Two blurry video frames. (c) Optical flow estimated directly on the blurry images. (d-f) The output of the proposed method: (d-e) deblurred video frames with the estimated blur kernels; (f) optical flow (with color encoding) jointly estimated with the blur using the proposed method. For comparison, the ground-truth flow estimated on the original sharp images is shown at the top-right of (f).

Figure 2. Blurry video observation model. For video deblurring, to estimate a high-quality video frame x (indicated by the red rectangle), we can exploit information from two sources: (i) the direct observation y, which is linked to x via the blur operator H; (ii) the consecutive frames (y_{-1} and y_{1}), which are connected to x via a motion-based transformation and blurring. Both sources of information can be described by the observation model.
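For concreteness, the two sources of information in Figure 2 can be written under a standard multi-frame observation model (an illustrative sketch; the symbols W_k and n_k are notation introduced here, not taken from the paper):

\[
y_k = H_k W_k\, x + n_k, \qquad k \in \{-1, 0, 1\},
\]

where H_k is the blur operator of frame y_k, W_k is the motion-induced warping from x to the coordinates of y_k (with W_0 = I for the direct observation y = y_0), and n_k denotes observation noise.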

Results

Visual comparison on a blurry video frame: the blurry input, the result of Cho et al. [2], and our result.

Related Publication

Haichao Zhang and Jianchao Yang. Intra-Frame Deblurring by Leveraging Inter-Frame Camera Motion. In CVPR, 2015. [Paper]

References

[1] Y.-W. Tai, P. Tan, and M. S. Brown. Richardson-Lucy deblurring for scenes under a projective motion path. IEEE TPAMI, 33(8):1603–1618, 2011.

[2] S. Cho, J. Wang, and S. Lee. Video deblurring for hand-held cameras using patch-based synthesis. ACM Trans. Graph., 31(4):64, 2012.

[3] L. Xu, S. Zheng, and J. Jia. Unnatural L0 sparse representation for natural image deblurring. In CVPR, 2013.

[4] H. Zhang, D. Wipf, and Y. Zhang. Multi-image blind deblurring using a coupled adaptive sparse prior. In CVPR, 2013.

[5] H. Zhang and D. Wipf. Non-uniform camera shake removal using a spatially adaptive sparse penalty. In NIPS, 2013.

[6] T. Portz, L. Zhang, and H. Jiang. Optical flow in the presence of spatially-varying motion blur. In CVPR, 2012.

[7] F. Sroubek and P. Milanfar. Robust multichannel blind deconvolution via fast alternating minimization. IEEE TIP, 21(4):1687–1700, 2012.

[8] D. Sun, S. Roth, and M. J. Black. Secrets of optical flow estimation and their principles. In CVPR, 2010.

[9] H. Zhang and L. Carin. Multi-shot imaging: joint alignment, deblurring and resolution enhancement. In CVPR, 2014.