Many algorithms exist that generate mesh models from regular 3D point data obtained by accurate 3D acquisition methods such as laser scanning. However, these algorithms cannot guarantee good results when applied to data obtained by 3D vision-based techniques, since points recovered by 3D vision usually contain noise and errors. Our objective is to generate a panoramic 3D mesh model from unorganized 3D points of the scene.
The input to the proposed method is several sets of point clouds obtained from different viewpoints, and the output is a single mesh model that integrates them. First, we partition the input point cloud into sub-point clouds according to each camera's viewing frustum. Then we adaptively sample each sub-point cloud and triangulate the sampled points. Finally, we merge all triangulated sub-models into one model representing the whole indoor scene, as sketched below.
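The following is a minimal sketch of this pipeline, assuming pinhole cameras with known pose. It is not the authors' implementation: the frustum test, the uniform voxel sampling (the paper's sampling adapts to scene features), and the 2.5D Delaunay triangulation per view are simplified placeholders.

    import numpy as np
    from scipy.spatial import Delaunay

    def partition_by_frustum(points, R, t, fov_deg=60.0, near=0.1, far=10.0):
        # Keep the points inside one camera's viewing frustum.
        # points: (N, 3) world coordinates; R, t: world-to-camera pose.
        cam = points @ R.T + t                          # to camera frame
        z = cam[:, 2]
        half = np.tan(np.radians(fov_deg) / 2.0)
        inside = ((z > near) & (z < far)
                  & (np.abs(cam[:, 0]) < z * half)
                  & (np.abs(cam[:, 1]) < z * half))
        return cam[inside]

    def sample_points(points, voxel=0.05):
        # Uniform voxel-grid downsampling as a stand-in for the paper's
        # adaptive sampling, which keeps more points near scene features.
        keys = np.floor(points / voxel).astype(np.int64)
        _, idx = np.unique(keys, axis=0, return_index=True)
        return points[np.sort(idx)]

    def triangulate_view(cam_points):
        # 2.5D triangulation: Delaunay on the image-plane projection,
        # reusing its connectivity for the 3D points.
        uv = cam_points[:, :2] / cam_points[:, 2:3]
        return cam_points, Delaunay(uv).simplices

    # Each per-view mesh is then mapped back to world coordinates and
    # merged with its neighbors to form the single panoramic model.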
Our method accounts for occlusion between two adjacent views and filters out the invisible part of the point cloud without any prior knowledge. Adaptive sampling reduces the size of the resulting mesh model for practical use while preserving the features of the scene. The proposed method is modular and applicable to other modeling applications that handle multiple range data.
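One plausible way to realize such a visibility filter, shown here as an assumption rather than the paper's exact criterion, is a coarse depth buffer: rasterize the points of one view into a pixel grid and keep, per pixel, only the points nearest to the camera.

    import numpy as np

    def filter_occluded(cam_points, fov_deg=60.0, res=256, eps=0.02):
        # cam_points: (N, 3) in camera coordinates with z > 0.
        # Returns a boolean mask of the points visible to this camera.
        half = np.tan(np.radians(fov_deg) / 2.0)
        uv = cam_points[:, :2] / cam_points[:, 2:3]        # project
        px = ((uv / half + 1.0) * 0.5 * (res - 1)).astype(int)
        px = np.clip(px, 0, res - 1)
        flat = px[:, 1] * res + px[:, 0]                   # pixel index
        depth = np.full(res * res, np.inf)
        np.minimum.at(depth, flat, cam_points[:, 2])       # nearest hit
        return cam_points[:, 2] <= depth[flat] + eps       # depth tolerance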
From left to right: input point clouds with an overlapping area, the point cloud regenerated by adaptive sampling, and the final textured model
3D reconstruction results
Occlusion effect as the view rotates
W. Lee and W. Woo, "Panoramic Mesh Model Generation from Multiple Range Data for Indoor Scene Reconstruction," 6th Pacific-Rim Conference on Multimedia, LNCS Vol. 3768, pp. 1004-1014, 2005.
W. Lee and W. Woo, "PANORAMIC MESH MODEL GENERATION FROM UNORGANIZED POINT CLOUD FOR INDOOR ENVIRONMENT MODELING", Proc. of International Conference on Artificial Reality and Telexistence (ICAT2004), pp.598-603, 2004.