LightDepth: Single-View Depth Self-Supervision from Illumination Decline
J. Rodríguez-Puigvert*, V.M. Batlle*, J.M.M. Montiel, R. Martinez-Cantin, P. Fua, J.D. Tardós, J. Civera
ICCV 2023
Abstract
Single-view depth estimation can be remarkably effective if there is enough ground-truth depth data for supervised training. However, there are scenarios, especially in medicine in the case of endoscopies, where such data cannot be obtained. In such cases, multi-view self-supervision and synthetic-to-real transfer serve as alternative approaches, albeit with a considerable performance drop compared to the supervised case. Instead, we propose a single-view self-supervised method that achieves a performance similar to the supervised case. In some medical devices, such as endoscopes, the camera and light sources are co-located at a small distance from the target surfaces. Thus, we can exploit that, for any given albedo and surface orientation, pixel brightness is inversely proportional to the square of the distance to the surface, providing a strong single-view self-supervisory signal. In our experiments, our self-supervised models deliver accuracies comparable to those of fully supervised ones, while being applicable without depth ground-truth data.
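The self-supervisory signal described above can be sketched as a photometric loss: brightness is re-rendered from the predicted depth using the inverse-square illumination model and compared against the observed image. This is a minimal illustration, not the paper's implementation; the light-gain constant `g` and the function names are assumptions for the sketch.

```python
import numpy as np

def rendered_brightness(depth, albedo, cos_theta, g=1.0):
    """Inverse-square illumination model: with the light co-located with
    the camera, observed brightness falls off with the squared distance
    to the surface. `g` is a hypothetical light-gain constant."""
    return g * albedo * cos_theta / depth ** 2

def photometric_loss(pred_depth, image, albedo, cos_theta):
    """Single-view self-supervision: compare the brightness rendered from
    the predicted depth against the observed pixel brightness."""
    rendered = rendered_brightness(pred_depth, albedo, cos_theta)
    return np.mean((rendered - image) ** 2)

# Toy check: an image generated at the true depth yields zero loss,
# so the loss is minimized exactly when the predicted depth is correct.
true_depth = np.array([[1.0, 2.0], [4.0, 0.5]])
albedo = np.full_like(true_depth, 0.8)
cos_theta = np.ones_like(true_depth)
image = rendered_brightness(true_depth, albedo, cos_theta)
print(photometric_loss(true_depth, image, albedo, cos_theta))  # → 0.0
```

Doubling the distance quarters the brightness, which is what makes the signal informative about absolute depth from a single view.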
Qualitative results: real colonoscopy and real gastroscopy (video available on the project page).
Citation:
@InProceedings{rodriguez2023lightdepth,
  title={{LightDepth:} Single-View Depth Self-Supervision from Illumination Decline},
  author={Rodríguez-Puigvert, Javier and Batlle, Víctor M. and Montiel, José María M. and Martínez-Cantín, Rubén and Fua, Pascal and Tardós, Juan D. and Civera, Javier},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2023}
}