PU-GCN: Point Cloud Upsampling via Graph Convolutional Network

Under Review

Abstract

Upsampling sparse, noisy, and non-uniform point clouds is a challenging task. In this paper, we propose three novel point upsampling modules: Multi-branch GCN, Clone GCN, and NodeShuffle. Our modules use Graph Convolutional Networks (GCNs) to better encode local point information. They are versatile and can be incorporated into any point cloud upsampling pipeline. We show that all three modules consistently improve state-of-the-art methods across all point upsampling metrics.

We also propose a new multi-scale point feature extractor, called Inception DenseGCN. We modify existing Inception GCN designs by introducing DenseGCN blocks. By aggregating features at multiple scales, our new feature extractor is more resilient to density variations across point cloud surfaces. We combine Inception DenseGCN with one of our upsampling modules (NodeShuffle) into a new point upsampling pipeline: PU-GCN.

We demonstrate, both qualitatively and quantitatively, the advantages of PU-GCN over the state of the art in terms of fine-grained upsampling quality and point cloud uniformity.


Methodology

Figure 1. Our proposed upsampling modules. (a) Multi-branch GCN: We apply $r$ different GCN layers to the input nodes and concatenate their outputs node-wise. (b) Clone GCN: We pass the input through $r$ GCN layers with shared weights, and concatenate their outputs. (c) NodeShuffle: We expand the number of input features using a GCN layer, then apply a shuffle operation to rearrange the feature map.
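
Of the three, NodeShuffle is the module we adopt in PU-GCN. Below is a minimal PyTorch sketch of the idea, for illustration only (not our released code): the GCN layer is stood in for by a shared 1x1 convolution, which captures the channel expansion and the shuffle steps but omits the k-NN neighborhood aggregation that a real GCN layer performs.

import torch
import torch.nn as nn

class NodeShuffle(nn.Module):
    """Sketch of NodeShuffle: expand node features, then rearrange
    (shuffle) the expanded channels into r new points per input point."""

    def __init__(self, in_channels: int, out_channels: int, r: int):
        super().__init__()
        self.r = r
        # Placeholder for the GCN layer: a shared 1x1 convolution that
        # expands channels from C to r * C'. A real GCN would also
        # aggregate features over each node's k-NN neighborhood.
        self.expand = nn.Conv1d(in_channels, r * out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, N) point features
        b, _, n = x.shape
        x = self.expand(x)                   # (B, r*C', N)
        x = x.reshape(b, self.r, -1, n)      # (B, r, C', N)
        x = x.permute(0, 2, 3, 1)            # (B, C', N, r)
        return x.reshape(b, -1, n * self.r)  # (B, C', r*N)

# Usage: upsample 256 points with 64-d features by r=4 -> 1024 points.
feats = torch.randn(1, 64, 256)
up = NodeShuffle(in_channels=64, out_channels=32, r=4)
print(up(feats).shape)  # torch.Size([1, 32, 1024])

The final reshape places the $r$ expanded copies of each node next to each other, the point-cloud analogue of the periodic (pixel) shuffle used in image super-resolution.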

Figure 2. Our proposed Inception DenseGCN (left) and PU-GCN (right). We use the parameters $(k, d, c)$ to define a DenseGCN block: $k$ is the number of neighbors (kernel size), $d$ is the dilation rate, and $c$ is the number of output channels. KNN is applied at the first layer to build the graph and the node neighborhoods.
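
To illustrate how $k$ and $d$ interact, here is a hypothetical helper (the name dilated_knn is ours, not from the released code) sketching dilated k-NN neighbor selection: gather the $k \cdot d$ nearest neighbors of each point and keep every $d$-th one, which enlarges the receptive field without increasing the number of neighbors.

import torch

def dilated_knn(points: torch.Tensor, k: int, d: int) -> torch.Tensor:
    """Dilated k-NN for a DenseGCN-style block with parameters (k, d).
    points: (N, 3) coordinates. Returns (N, k) neighbor indices."""
    # Pairwise Euclidean distances, (N, N).
    dist = torch.cdist(points, points)
    # Indices of the k*d nearest neighbors (nearest first,
    # including the point itself at distance 0).
    idx = dist.topk(k * d, largest=False).indices  # (N, k*d)
    # Dilation: keep every d-th neighbor.
    return idx[:, ::d]                             # (N, k)

pts = torch.randn(1024, 3)
neighbors = dilated_knn(pts, k=20, d=2)
print(neighbors.shape)  # torch.Size([1024, 20])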

PU660 Dataset


Samples from PU660. The first and second images show training samples; the third shows testing samples.

PU660 is a new dataset we compile for point cloud upsampling. It consists of 660 3D models split into 551 training samples and 109 testing samples. The training set contains 171 3D models compiled from the datasets used by PU-Net, 3PU, and PU-GAN, in addition to 380 models collected from ShapeNet. The test set contains 39 models compiled from the datasets used by PU-Net, 3PU, and PU-GAN, plus 70 more models from ShapeNet. The ShapeNet models were randomly chosen from 10 different categories and 450 distinct shapes of varying complexity to encourage diversity. Overall, PU660 covers a wide semantic range of 3D objects and includes both simple and complex shapes.

PU660 is available here [Google Drive (coming soon)].

Experiments

Figure 3. Comparing point cloud upsampling (x4) and surface reconstruction results produced by different methods (b-e) from inputs (a). Our PU-GCN produces the best results overall, with uniform, fine-grained upsampled point clouds. The reconstructed surfaces are smoother, with fewer wrinkles and bulges, and maintain the intricate structures of the original shape.

Table 1. Performance comparison of our PU-GCN with the state of the art. We remove the farthest point sampling module in PU-GAN for a fair comparison, and refer to this architecture as PU-GAN*. Our PU-GCN using two Inception DenseGCN blocks outperforms PU-Net, 3PU, and PU-GAN*, particularly for sparse input. Although PU-GAN uses the farthest point sampling strategy, we outperform it on some metrics using dense inputs and on all metrics using sparse inputs. Bold denotes the best performance.

Table 2. Ablation study of upsampling modules on PU660 using sparse (256) input. Experiments show that our upsampling modules transfer and generalize to different architectures: by simply replacing the original upsampling module with one of ours, the performance of every architecture improves. PU-GCN† uses a single Inception DenseGCN block in its feature extractor. Bold denotes the best performance for each architecture: PU-Net, 3PU, PU-GAN, and our PU-GCN.

Table 3. Ablation study of the effect of Inception DenseGCN and global pooling on PU660 using sparse (256) input. PU-GCN†, with a single Inception DenseGCN block, outperforms the same architecture integrating the dynamic GCN feature extractor used in 3PU. The global pooling layer in our feature extractor further improves performance. Increasing the number of Inception DenseGCN blocks yields additional gains for PU-GCN. Bold denotes the best performance.

Please cite our paper if you find it helpful:

@misc{Qian2019pugcn,
  title={PU-GCN: Point Cloud Upsampling via Graph Convolutional Network},
  author={Guocheng Qian and Abdulellah Abualshour and Guohao Li and Ali Thabet and Bernard Ghanem},
  year={2019},
  eprint={1912.03264},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}