Scan2LoD3
Reconstructing semantic 3D building models at LoD3
using ray casting and Bayesian networks
Olaf Wysocki, Yan Xia, Magdalena Wysocki, Eleonora Grilli, Ludwig Hoegner, Daniel Cremers, Uwe Stilla
Scan2LoD3: Our method reconstructs detailed semantic 3D building models. Its backbone is the physics of laser rays, which provides geometric cues that enhance semantic segmentation accuracy.
The workflow of the proposed Scan2LoD3 consists of three parallel branches:
The first generates a point cloud probability map using a modified Point Transformer network (top);
the second produces a conflict probability map from the visibility of the laser scanner in conjunction with a 3D building model (middle);
and the third uses Mask R-CNN to obtain a texture probability map from 2D images.
We then fuse the three probability maps with a Bayesian network to obtain the final facade-level segmentation, enabling the reconstruction of a CityGML-compliant LoD3 building model.
Visibility analysis using laser scanning observations and 3D models on a voxel grid. The ray is traced from the sensor position s_i to the hit point p_i.
A voxel is: empty if the ray traverses it; occupied if it contains p_i; unknown if unmeasured; confirmed if an occupied voxel intersects the model's vector plane; and conflicted if the plane intersects an empty voxel.
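The per-ray voxel-state logic above can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: the sampling-based ray traversal, the dictionary grid, and all function and variable names are assumptions; an exact voxel traversal (e.g. Amanatides-Woo) would be used in practice.

```python
import numpy as np

# Voxel states as described in the caption (illustrative encoding).
EMPTY, OCCUPIED, UNKNOWN, CONFIRMED, CONFLICTED = range(5)

def classify_ray(grid, sensor, hit, voxel_size, plane_voxels, n_samples=200):
    """Classify voxels along one laser ray from `sensor` to `hit`.

    grid: dict mapping integer voxel indices (i, j, k) -> state,
          initialized to UNKNOWN for unmeasured voxels.
    plane_voxels: voxel indices intersected by the building-model plane
                  (assumed precomputed elsewhere).
    """
    sensor, hit = np.asarray(sensor, float), np.asarray(hit, float)
    # Mark traversed voxels as EMPTY (point sampling stands in for
    # exact ray-voxel traversal in this sketch).
    for t in np.linspace(0.0, 1.0, n_samples, endpoint=False):
        idx = tuple(((sensor + t * (hit - sensor)) // voxel_size).astype(int))
        if grid.get(idx) == UNKNOWN:
            grid[idx] = EMPTY
    # The voxel containing the hit point is OCCUPIED.
    grid[tuple((hit // voxel_size).astype(int))] = OCCUPIED
    # Compare measurement against the model plane.
    for idx in plane_voxels:
        if grid.get(idx) == OCCUPIED:
            grid[idx] = CONFIRMED    # measurement agrees with the model
        elif grid.get(idx) == EMPTY:
            grid[idx] = CONFLICTED   # ray passed through the model surface
    return grid
```

Conflicted voxels are the informative ones here: a ray piercing a modeled wall hints at an opening such as a window or door.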
The Bayesian network architecture comprising three input nodes (blue), one target node (yellow), and a conditional probability table (CPT) with weights assigned to each combination of input states.
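A discrete fusion of this kind can be sketched as below. The CPT weights are illustrative placeholders, not the paper's values, and the node names and binary state space are assumptions; the sketch only shows how a target probability is marginalized over the three evidence maps.

```python
import itertools

# Illustrative CPT: P(opening = True | point, conflict, texture states).
# These weights are placeholders, not trained or published values.
CPT = {
    (True,  True,  True):  0.95,
    (True,  True,  False): 0.85,
    (True,  False, True):  0.80,
    (False, True,  True):  0.75,
    (True,  False, False): 0.40,
    (False, True,  False): 0.35,
    (False, False, True):  0.30,
    (False, False, False): 0.05,
}

def fuse(p_point, p_conflict, p_texture):
    """Fuse per-pixel probabilities from the three maps:
    P(opening) = sum over joint states of P(opening | states) * P(states),
    treating the three evidence nodes as independent."""
    p_open = 0.0
    for s_pt, s_cf, s_tx in itertools.product([True, False], repeat=3):
        prior = ((p_point if s_pt else 1 - p_point)
                 * (p_conflict if s_cf else 1 - p_conflict)
                 * (p_texture if s_tx else 1 - p_texture))
        p_open += CPT[(s_pt, s_cf, s_tx)] * prior
    return p_open
```

Applied per pixel (or per facade cell), this yields the fused probability map that is then thresholded into the final facade-level segmentation.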