Full Surround Monodepth from Multiple Cameras

Vitor Guizilini* Igor Vasiljevic* Rares Ambrus Greg Shakhnarovich Adrien Gaidon

Abstract. Self-supervised monocular depth and ego-motion estimation is a promising approach to replace or supplement expensive depth sensors such as LiDAR for robotics applications like autonomous driving. However, most research in this area focuses on a single monocular camera or stereo pairs that cover only a fraction of the scene around the vehicle. In this work, we extend monocular self-supervised depth and ego-motion estimation to large-baseline multi-camera rigs. Using generalized spatio-temporal contexts, pose consistency constraints, and carefully designed photometric loss masking, we learn a single network generating dense, consistent, and scale-aware point clouds that cover the same full surround 360-degree field of view as a typical LiDAR scanner. We also propose a new scale-consistent evaluation metric more suitable to multi-camera settings. Experiments on two challenging benchmarks illustrate the benefits of our approach over strong baselines.
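At the core of this self-supervised setup is a photometric warping loss: predicted depth and a relative pose (temporal ego-motion, or the known spatial extrinsics between neighboring cameras) are used to warp a context image into the target camera, and the reconstruction error supervises the depth network. The sketch below is a minimal illustration of that idea, not the authors' implementation; it assumes both views share intrinsics K, and all tensor shapes, function names, and the masking of non-overlapping pixels are choices made for this example.

# Illustrative sketch of a masked photometric warping loss (PyTorch).
import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    """Lift per-pixel depth to 3D points in the target camera frame.
    depth: (B, 1, H, W), K_inv: (B, 3, 3) inverse intrinsics."""
    B, _, H, W = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=depth.dtype, device=depth.device),
        torch.arange(W, dtype=depth.dtype, device=depth.device),
        indexing="ij")
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).view(1, 3, -1)   # (1, 3, H*W)
    rays = K_inv @ pix.expand(B, -1, -1)                      # (B, 3, H*W)
    return rays * depth.view(B, 1, -1)                        # (B, 3, H*W)

def project(points, K, T):
    """Transform points by the 4x4 relative pose T and project with intrinsics K.
    Returns pixel coordinates and a mask of points in front of the camera."""
    R, t = T[:, :3, :3], T[:, :3, 3:]
    cam = K @ (R @ points + t)                                # (B, 3, N)
    z = cam[:, 2:3].clamp(min=1e-6)
    uv = cam[:, :2] / z
    return uv, (cam[:, 2] > 1e-6)

def photometric_warp_loss(target, context, depth, K, K_inv, T_target_to_context):
    """Warp `context` into the target view and compute a masked L1 photometric loss.
    Pixels with no valid correspondence (non-overlapping regions) are ignored."""
    B, _, H, W = target.shape
    points = backproject(depth, K_inv)
    uv, front = project(points, K, T_target_to_context)
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    warped = F.grid_sample(context, grid, padding_mode="zeros", align_corners=True)
    # Keep only pixels that land inside the context image and in front of the camera.
    inside = (grid.abs().amax(dim=-1) <= 1.0).unsqueeze(1)
    valid = front.reshape(B, 1, H, W) & inside
    loss = (warped - target).abs().mean(dim=1, keepdim=True)
    return (loss * valid).sum() / valid.sum().clamp(min=1)

In practice the paper combines such warping losses over both temporal contexts (the same camera at adjacent timesteps) and spatial contexts (overlapping neighboring cameras), which is what makes the learned depth scale-aware.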

Contributions:

  • We demonstrate, for the first time, self-supervised learning of scale-aware and consistent depth networks in wide-baseline multi-camera settings, which we refer to as Full Surround Monodepth (FSM).

  • We introduce key techniques to extend self-supervised depth and ego-motion learning to wide-baseline multi-camera systems: multi-camera spatio-temporal contexts and pose consistency constraints (a sketch of the latter follows this list), and we study the impact of photometric masking for non-overlapping regions and self-occlusions in this novel setting.

  • We ablate and show the benefits of our proposed approach on two publicly available multi-camera datasets: DDAD and nuScenes.
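To make the pose consistency idea concrete: all cameras are rigidly mounted to the vehicle, so each camera's predicted ego-motion, once expressed in a canonical camera frame through the known extrinsics, should agree with the canonical camera's prediction. The snippet below is a minimal sketch under that assumption; the function names, loss weights, and the rotation-angle error are illustrative choices, not the paper's exact formulation.

# Illustrative sketch of a pose consistency term across rigidly mounted cameras (PyTorch).
import torch

def to_canonical(T_cam, X_cam_to_canon):
    """Express a camera-frame motion T_cam in the canonical camera frame
    via a similarity transform with the fixed extrinsics."""
    return X_cam_to_canon @ T_cam @ torch.inverse(X_cam_to_canon)

def pose_consistency_loss(T_canon, T_cams, X_cams_to_canon, w_trans=1.0, w_rot=1.0):
    """Penalize disagreement between each camera's predicted ego-motion and the
    canonical camera's, after mapping both into the same frame.
    T_canon: (4, 4); T_cams and X_cams_to_canon: lists of (4, 4) tensors."""
    loss = T_canon.new_zeros(())
    for T_i, X_i in zip(T_cams, X_cams_to_canon):
        T_i_canon = to_canonical(T_i, X_i)
        # Translation difference between the two motion estimates.
        t_err = (T_i_canon[:3, 3] - T_canon[:3, 3]).norm()
        # Rotation difference via the angle of the relative rotation matrix.
        R_rel = T_i_canon[:3, :3].T @ T_canon[:3, :3]
        cos = ((torch.diagonal(R_rel).sum() - 1.0) / 2.0).clamp(-1.0 + 1e-7, 1.0 - 1e-7)
        r_err = torch.acos(cos)
        loss = loss + w_trans * t_err + w_rot * r_err
    return loss / max(len(T_cams), 1)

Tying all cameras to a single rigid-body motion in this way is what lets one network produce mutually consistent, scale-aware depth across the full rig rather than independent per-camera estimates.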

Citation

@inproceedings{tri_fsm_ral22,
  author    = {Vitor Guizilini and Igor Vasiljevic and Rares Ambrus and Greg Shakhnarovich and Adrien Gaidon},
  title     = {Full Surround Monodepth from Multiple Cameras},
  booktitle = {Robotics and Automation Letters (RA-L)},
  year      = {2022},
}