A Novel Depth-Based Virtual View Synthesis Method for Free Viewpoint Video

People

Ilkoo Ahn and Changick Kim

Abstract

Free-viewpoint rendering (FVR) has become a popular topic in 3D research. A promising approach in FVR is to generate virtual views from a single texture image and its corresponding depth image. A critical problem in generating virtual views is that regions covered by foreground objects in the original view may become disoccluded in the synthesized views. In this paper, a depth-based disocclusion filling algorithm using patch-based texture synthesis is proposed. In contrast to existing patch-based virtual view synthesis methods, the filling priority is driven by a robust structure tensor and an epipolar directional term. Moreover, the search for the best-matched patch is restricted to background regions, and the final patch is selected by considering color similarity together with factors such as the epipolar line and the magnitude of the data term. The superiority of the proposed method over existing methods is demonstrated by the experimental results.
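To illustrate the background-restriction idea described above, the following is a minimal toy sketch of a Criminisi-style exemplar search in which candidate source patches are accepted only if their center pixel lies in the background (here, depth above a threshold, assuming larger depth values mean farther). The paper's actual priority term (structure tensor, epipolar directional term) and final patch scoring are not reproduced; patch size, the depth convention, and the SSD cost are all assumptions for illustration.

```python
import numpy as np

R = 3  # patch radius (7x7 patches); hypothetical, not the paper's setting

def patch(img, y, x, r=R):
    """Extract the (2r+1)x(2r+1) patch centered at (y, x)."""
    return img[y - r:y + r + 1, x - r:x + r + 1]

def best_background_patch(tex, depth, hole, y, x, depth_thresh):
    """Find the source patch most similar to the target patch at (y, x),
    searching only fully-known patches whose center is background
    (depth >= depth_thresh). Returns (patch, cost, center)."""
    target = patch(tex, y, x)
    known = ~patch(hole, y, x)          # compare only on known target pixels
    best, best_cost, best_pos = None, np.inf, None
    h, w = tex.shape[:2]
    for sy in range(R, h - R):
        for sx in range(R, w - R):
            if patch(hole, sy, sx).any():     # source must contain no hole
                continue
            if depth[sy, sx] < depth_thresh:  # background-only constraint
                continue
            cand = patch(tex, sy, sx)
            cost = ((cand - target)[known] ** 2).sum()  # SSD on known pixels
            if cost < best_cost:
                best, best_cost, best_pos = cand, cost, (sy, sx)
    return best, best_cost, best_pos
```

In the full method, the fill order over the hole boundary would be driven by the priority term before each such search, and the SSD cost would be combined with the epipolar and data-term factors when ranking candidates.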

Notice

A preliminary version of this paper was accepted as an oral presentation (around the top 13%) at the IEEE International Conference on Multimedia & Expo (ICME) 2012.

Ilkoo Ahn and Changick Kim, "Depth-based Disocclusion Filling for Virtual View Synthesis," IEEE International Conference on Multimedia & Expo (ICME), pp.109-114, Melbourne, Australia, Jul. 9-13, 2012. 


Demo video 

demo.avi

Source code

Experimental Results

[Figure: ten result images, panels (a)-(j)]
Illustration of the experimental results of the proposed and other methods for the "Ballet" sequence (first frame). (a)(b) Warped texture images from V5 to V4 and to V2, respectively (white regions are disocclusions). (c)(d) Results of Criminisi's method [24]. (e)(f) Results of Daribo's method [25]. (g)(h) Results of Gautier's method [26]. (i)(j) Results of the proposed method.

References
[24] A. Criminisi, P. Perez, and K. Toyama, "Region filling and object removal by exemplar-based image inpainting," IEEE Transactions on Image Processing, vol. 13, no. 9, pp. 1200-1212, 2004.
[25] I. Daribo and H. Saito, "A novel inpainting-based layered depth video for 3DTV," IEEE Transactions on Broadcasting, vol. 57, no. 2, pp. 533-541, 2011.
[26] J. Gautier, O. L. Meur, and C. Guillemot, "Depth-based image completion for view synthesis," in Proc. of 3DTV Conference, 2011, pp. 1-4.
Attachments

codeOpen_ver6.zip (2078k), uploaded by Ilkoo Ahn, Feb 7, 2015
v5_v4_20111214_patchProcess1.avi (2578k), uploaded by Ilkoo Ahn, Feb 7, 2015