Learning Based Approach To Depth Reconstruction

Depth reconstruction of translucent objects from a single ToF camera using deep residual networks

People

Seongjong Song and Hyunjung Shim

Abstract

We propose a novel approach to recovering the depth of translucent objects from a single time-of-flight (ToF) depth camera using deep residual networks. When translucent objects are recorded with a ToF depth camera, their depth values are severely contaminated by complex light interactions with the surrounding environment. Existing methods introduced new capture systems or developed explicit depth distortion models, but these solutions are less practical because of their strict assumptions and heavy computational cost. In this paper, we adopt deep residual networks to model the ToF depth distortion caused by translucency. To fully utilize both the local and semantic information of objects, multiscale patches are used to predict each depth value. Quantitative evaluation on our benchmark database shows the effectiveness of the proposed algorithm.
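The multiscale-patch idea in the abstract can be sketched as follows: around each pixel, crop patches at several spatial scales and downsample them to a common size, so that a network receives both fine local detail and wider semantic context. This is an illustrative sketch only; the function names, patch sizes, and nearest-neighbor downsampling are our assumptions, not the authors' implementation.

```python
# Hypothetical sketch of multiscale patch extraction, assuming a depth map
# stored as a list of rows. Patch half-widths and the common target size
# are illustrative parameters, not values from the paper.

def crop_patch(depth, cy, cx, half):
    """Crop a (2*half+1) x (2*half+1) patch centered at (cy, cx),
    replicating border pixels when the window leaves the image."""
    h, w = len(depth), len(depth[0])
    patch = []
    for y in range(cy - half, cy + half + 1):
        row = []
        for x in range(cx - half, cx + half + 1):
            yy = min(max(y, 0), h - 1)  # clamp to image bounds
            xx = min(max(x, 0), w - 1)
            row.append(depth[yy][xx])
        patch.append(row)
    return patch

def downsample(patch, target):
    """Downsample a square patch to target x target by nearest-neighbor striding."""
    step = len(patch) / target
    return [[patch[int(i * step)][int(j * step)] for j in range(target)]
            for i in range(target)]

def multiscale_patches(depth, cy, cx, halves=(2, 4, 8), target=5):
    """Return one target x target patch per scale, all centered at (cy, cx).
    Each entry covers a wider neighborhood at coarser resolution."""
    return [downsample(crop_patch(depth, cy, cx, h), target) for h in halves]
```

In this sketch, the stack of same-sized patches would then be fed to the residual network as a multi-channel input, letting it weigh local surface detail against surrounding context when predicting the corrected depth value.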

Overview

Results

Related Work

Hyunjung Shim and Seungkyu Lee. Recovering translucent objects using a single time-of-flight depth camera. IEEE TCSVT, 2015. [PDF]

Kyungmin Kim and Hyunjung Shim. Robust approach to reconstructing transparent objects using a time-of-flight depth camera. Optics Express, 2017. [PDF] [Webpage]

Acknowledgements

This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the IT Consilience Creative Program (IITP-2017-2017-0-01015) supervised by the IITP (Institute for Information & Communications Technology Promotion), and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the MSIP (NRF-2016R1A2B4016236).


Publication

Depth Reconstruction of Translucent Objects from a Single Time-of-Flight Camera using Deep Residual Networks

S. Song and H. Shim. Asian Conference on Computer Vision (ACCV), December 2018.