Efficient Depth Enhancement using a Combination of Color and Depth Information
Kyungjae Lee, Yuseok Ban, and Sangyoun Lee
Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
Abstract
Studies on depth images containing three-dimensional information have been performed for many practical applications. However, the depth images acquired from depth sensors have inherent problems such as missing values and noisy boundaries. These problems significantly affect the performance of applications that use a depth image as their input. This paper describes a depth enhancement algorithm based on a combination of color and depth information. To fill depth holes and recover object shapes, asynchronous cellular automata with neighborhood distance maps are used. Image segmentation and a weighted linear combination of spatial filtering algorithms are applied to extract object regions and fill disocclusions within them. Experimental results on both real-world and public datasets show that the proposed method enhances the quality of the depth image with low computational complexity, outperforming conventional methods on a number of metrics. Furthermore, to illustrate the improvement in quality, we present stereoscopic images generated from the enhanced depth images.
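As a rough illustration of the hole-filling stage described above, the sketch below shows a color-guided, cellular-automaton-style propagation in Python. The function name fill_depth_holes, the exponential color-similarity weight, and the sigma_c parameter are illustrative assumptions, not the paper's exact GrowFill formulation.

```python
# Minimal sketch of color-guided, cellular-automaton-style depth hole filling.
# NOT the paper's exact GrowFill rule; the similarity weight is an assumption.
import numpy as np

def fill_depth_holes(color, depth, hole_value=0, sigma_c=10.0, max_iters=100,
                     neighborhood="N8"):
    """Iteratively propagate depth into hole pixels from already-filled
    neighbors, preferring the neighbor whose color is closest to the hole
    pixel's color (a rough stand-in for a neighborhood distance map)."""
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # von Neumann (N4)
    if neighborhood == "N8":                               # Moore (N8)
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]

    depth = depth.astype(np.float32)
    color = color.astype(np.float32)
    h, w = depth.shape

    for _ in range(max_iters):
        holes = np.argwhere(depth == hole_value)
        if holes.size == 0:
            break
        changed = False
        for y, x in holes:                     # asynchronous update: filled values
            best_d, best_w = None, -1.0        # are visible within the same sweep
            for dy, dx in offsets:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] != hole_value:
                    w_c = np.exp(-np.linalg.norm(color[y, x] - color[ny, nx]) / sigma_c)
                    if w_c > best_w:
                        best_w, best_d = w_c, depth[ny, nx]
            if best_d is not None:
                depth[y, x] = best_d
                changed = True
        if not changed:
            break
    return depth
```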
Paper
Citation
[BibTeX]
Lee, K.; Ban, Y.; Lee, S. Efficient Depth Enhancement Using a Combination of Color and Depth Information. Sensors 2017, 17, 1544.
Results
Experimental results using GrowFill on the Tsukuba Stereo Dataset, compared with [31], [33], and [34]
[31] Lin, B.S.; Su, M.J.; Cheng, P.H.; Tseng, P.J.; Chen, S.J. Temporal and Spatial Denoising of Depth Maps. Sensors 2015, 15, 18506–18525.
[33] Gong, X.; Liu, J.; Zhou, W.; Liu, J. Guided depth enhancement via a fast marching method. Image Vis. Comput. 2013, 31, 695–703.
[34] Telea, A. An image inpainting technique based on the fast marching method. J. Graph. Tools 2004, 9, 23–34.
Examples of depth enhancement using the proposed method
Additional examples of depth enhancement using GrowFill on the ASUS Xtion Pro dataset [*]
Results of GrowFill with the von Neumann (N4) and Moore (N8) neighborhoods (rows 3 and 4, respectively), using the color images (row 1) and noisy depth maps (row 2) as inputs
Computation time in seconds (Intel Core i7-4790K, 4.0 GHz CPU); see the timing sketch after the reference below
N4: 0.028 / 0.031 / 0.038 / 0.031 / 0.036 / 0.027
N8: 0.057 / 0.076 / 0.065 / 0.068 / 0.088 / 0.076
[*]: Lu, S.; Ren, X.; Liu, F. Depth enhancement via low-rank matrix completion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014. [link]
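For context on the timing figures above, here is a hedged usage sketch that times the N4 and N8 variants of the fill_depth_holes sketch from earlier on a synthetic RGB-D pair. The image size, hole location, and random data are arbitrary stand-ins, so the printed numbers are not comparable to the measurements reported above.

```python
# Timing sketch for the N4 vs. N8 variants on synthetic data (illustrative only).
import time
import numpy as np

rng = np.random.default_rng(0)
color = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)   # synthetic color image
depth = rng.integers(1, 4096, size=(240, 320)).astype(np.float32)  # synthetic depth map
depth[100:140, 150:200] = 0                                        # punch a rectangular hole

for mode in ("N4", "N8"):
    start = time.perf_counter()
    fill_depth_holes(color, depth, neighborhood=mode)
    print(mode, f"{time.perf_counter() - start:.3f} s")
```

The Moore neighborhood visits twice as many neighbors per cell as the von Neumann neighborhood, which is consistent with the roughly doubled N8 timings reported above.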
Datasets used in our paper
Tsukuba Stereo Dataset [link]
Peris, M.; Martull, S.; Maki, A.; Ohkawa, Y.; Fukui, K. Towards a simulation driven stereo vision system. In Proceedings of the 2012 21st International Conference on Pattern Recognition (ICPR), Tsukuba, Japan, 11–15 November 2012; pp. 1038–1042.
Martull, S.; Peris, M.; Fukui, K. Realistic CG stereo image dataset with ground truth disparity maps. In Proceedings of the ICPR Workshop TrakMark2012, Tsukuba, Japan, 11 November 2012; Volume 111, pp. 117–118.
Kinect Dataset
Camplani, M.; Salgado, L. Background foreground segmentation with RGB-D Kinect data: An efficient combination of classifiers. J. Vis. Commun. Image Represent. 2014, 25, 122–136.
Moyà-Alcover, G.; Elgammal, A.; Jaume-i Capó, A.; Varona, J. Modeling depth for nonparametric foreground segmentation using RGBD devices. Pattern Recognit. Lett. 2016, in press.
Fernandez-Sanchez, E.J.; Diaz, J.; Ros, E. Background subtraction based on color and depth using active sensors. Sensors 2013, 13, 8895–8915.