Passive depth sensing in computer vision is gaining traction thanks to its advantages over active sensors: lower cost, reduced power consumption, greater portability, and a wider depth range. These benefits, however, come at the price of increased computational complexity. Traditional geometric methods such as multi-view stereo and structure from motion achieve high-quality 3D reconstructions, but they often break down on surfaces with complex or sub-surface reflectance, leading to incomplete reconstructions.
Photometric methods, including Photometric Stereo and Shape from Polarization, address these limitations by recovering finer geometric detail and jointly estimating shape, material, and reflectance. These techniques extend beyond opaque surfaces to transparent, translucent, and textureless objects. They also enable more complete 3D reconstructions by exploiting the changes in light direction and density caused by refraction within objects, an effect often overlooked by traditional geometric approaches. Understanding how refraction modifies image acquisition geometry enhances the depth and accuracy of 3D scene analysis, presenting a rich area for future research.
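To make the photometric idea concrete, the sketch below shows classic Lambertian photometric stereo for a single pixel: under k known distant lights, intensity is I = rho * max(L @ n, 0), and stacking the measurements yields a linear system whose least-squares solution recovers both albedo and surface normal. The light directions, normal, and albedo here are illustrative values, not from the tutorial itself.

```python
import numpy as np

# Known distant light directions (k x 3), normalized to unit vectors.
L = np.array([
    [0.0, 0.0, 1.0],
    [0.5, 0.0, 1.0],
    [0.0, 0.5, 1.0],
    [-0.5, 0.2, 1.0],
])
L /= np.linalg.norm(L, axis=1, keepdims=True)

# Ground-truth normal and albedo for one pixel (toy example).
n_true = np.array([0.2, -0.3, 1.0])
n_true /= np.linalg.norm(n_true)
rho_true = 0.8

# Simulated Lambertian intensities; all lights here are in the
# visible hemisphere, so the clipping is inactive.
I = rho_true * np.clip(L @ n_true, 0.0, None)

# Least-squares inversion: solve L g = I for g = rho * n,
# then split g into albedo (its length) and normal (its direction).
g, *_ = np.linalg.lstsq(L, I, rcond=None)
rho = np.linalg.norm(g)
n = g / rho

print("recovered albedo:", rho)   # matches rho_true
print("recovered normal:", n)     # matches n_true
```

With at least three non-coplanar lights the system is well-posed for Lambertian surfaces; the tutorial's photometric techniques extend this basic model to non-Lambertian, polarized, and refractive settings.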
This tutorial aims to provide the computer vision and graphics community with fresh perspectives on reconstructing the geometry, appearance, and reflectance of complex scenes through photometric inverse rendering. It will offer practical insights for applying these methods in real-world scenarios, paving the way for advancements across applications such as physics-based rendering, machine vision, and virtual reality. In addressing these challenges, the tutorial surveys techniques that broaden the scope of 3D shape recovery.