Towards Scalable Multi-View Reconstruction of Geometry and Materials

Carolin Schmitt Bozidar Antic Andrei Neculai Joo Ho Lee Andreas Geiger

Autonomous Vision Group, University of Tübingen and Max Planck Institute for Intelligent Systems, Tübingen, Germany

We propose a novel method for joint recovery of camera pose, object geometry and spatially-varying Bidirectional Reflectance Distribution Function (svBRDF) of 3D scenes that exceed object-scale and hence cannot be captured with stationary light stages. The inputs are high-resolution RGB-D images captured by a mobile, hand-held capture system with point lights for active illumination.

Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. To facilitate scalability to large numbers of observation views and optimization variables, we introduce a distributed optimization algorithm that reconstructs 2.5D keyframe-based representations of the scene. A novel multi-view consistency regularizer effectively synchronizes neighboring keyframes such that the local optimization results allow for seamless integration into a globally consistent 3D model. We provide a study on the importance of each component in our formulation and show that our method compares favorably to baselines. We further demonstrate that our method accurately reconstructs various objects and materials and allows for expansion to spatially larger scenes.
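As a rough illustration of how a multi-view consistency regularizer can synchronize neighboring keyframes, the sketch below (the function name, weighting scheme, and masking are our own illustrative assumptions, not the paper's implementation) penalizes disagreement between one keyframe's parameter map and a neighbor's parameters warped into the same view:

```python
import numpy as np

def consistency_penalty(params_a, params_b_warped, valid_mask, weight=1.0):
    """Hypothetical multi-view consistency term: mean squared difference
    between keyframe A's parameter map and keyframe B's parameters
    warped into A's view, restricted to pixels where the warp is valid."""
    diff = (params_a - params_b_warped) ** 2
    n_valid = max(int(valid_mask.sum()), 1)  # avoid division by zero
    return weight * float(np.sum(diff[valid_mask])) / n_valid
```

Because the term is a simple sum of per-pixel squares, it fits directly into a single objective minimized with off-the-shelf gradient-based solvers, as described above.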

We believe that this work represents a significant step towards making geometry and material estimation from hand-held scanners scalable.

Reconstruction Quality

For test views, we demonstrate the accuracy of our reconstruction by showing the captured observation, our prediction and the photometric loss image side by side:
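A minimal sketch of how such a photometric loss image can be computed, assuming a per-pixel absolute error (the paper's objective may use a different robust norm):

```python
import numpy as np

def photometric_loss_image(observed, rendered):
    """Per-pixel absolute error between the captured observation and the
    rendered prediction, plus its mean as a scalar summary."""
    err = np.abs(observed.astype(np.float64) - rendered.astype(np.float64))
    return err, float(err.mean())
```

The error image `err` is what is visualized next to the observation and the prediction in the side-by-side comparisons.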

Full 3D Reconstruction Videos

See geometry and material reconstructions of objects and scenes at scales of up to 3 meters and a resolution of ≤ 2 mm.

The videos show renderings from novel views under new illumination, followed by the rendered depth map (shaded using the estimated surface normals) and the predicted parameter maps: normals, diffuse albedo, specular albedo, and roughness.
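To give a feel for how these parameter maps combine into a rendering under a point light, here is a deliberately simplified per-point shading sketch using a Lambertian diffuse term and a Blinn-Phong-style specular lobe whose exponent is derived from roughness; this is an illustrative stand-in, not the paper's actual BRDF model:

```python
import numpy as np

def shade_point(n, l, v, kd, ks, roughness, light_intensity=1.0):
    """Shade one surface point under a point light.
    n, l, v: unit normal, light, and view directions (3-vectors).
    kd, ks: diffuse and specular albedo. roughness is mapped to a
    Blinn-Phong exponent (rough surface -> low exponent)."""
    n, l, v = (np.asarray(x, dtype=float) for x in (n, l, v))
    n_dot_l = max(float(n @ l), 0.0)
    h = l + v                      # half vector between light and view
    h_norm = np.linalg.norm(h)
    if h_norm < 1e-9:
        spec = 0.0                 # degenerate half vector
    else:
        shininess = 2.0 / max(roughness, 1e-4) ** 2
        spec = max(float(n @ (h / h_norm)), 0.0) ** shininess
    return light_intensity * n_dot_l * (kd + ks * spec)
```

With the normal, light, and view all aligned, the diffuse and specular terms add up directly; with the light behind the surface, the point receives no direct illumination.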

3 Objects






Contact us to get more information on the project.