PU-GCN: Point Cloud Upsampling via Graph Convolutional Network

Under Review

Abstract

The effectiveness of learning-based point cloud upsampling pipelines heavily relies on the upsampling modules and feature extractors used therein. We propose three novel point upsampling modules: Multi-branch GCN, Clone GCN, and NodeShuffle. Our modules use Graph Convolutional Networks (GCNs) to better encode local point information from the point neighborhood.

These upsampling modules are versatile and can be incorporated into any point cloud upsampling pipeline. Extensive experiments show how these modules consistently improve state-of-the-art upsampling methods. We also propose a new multi-scale point feature extractor, called Inception DenseGCN. By aggregating features at multiple scales, this feature extractor enables further performance gain in the final upsampled point clouds. We combine Inception DenseGCN with one of our upsampling modules (NodeShuffle) into a new point upsampling pipeline: PU-GCN. We show qualitatively and quantitatively the significant advantages of PU-GCN over the state-of-the-art.

Methodology

Figure 1. Our proposed upsampling modules. (a) Multi-branch GCN: We apply $r$ different GCN layers to the input nodes. The outputs of all branches are concatenated node-wise. (b) Clone GCN: We pass the input through $r$ GCN layers with shared weights, and concatenate their outputs. (c) NodeShuffle: We expand the number of input features using a GCN layer. We then apply a shuffle operation to rearrange the feature map.
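To make NodeShuffle concrete, below is a minimal PyTorch sketch, not the authors' released code: an EdgeConv-style GCN (an assumption for illustration, as are the layer widths) expands each node's feature vector from C to r*C channels, and a periodic shuffle then rearranges the N x rC feature map into rN x C, i.e., r new nodes per input node.

```python
# Minimal NodeShuffle sketch (illustrative simplification, not the paper's code).
import torch
import torch.nn as nn

def knn(x, k):
    # x: (B, N, C) point features. Returns indices of the k nearest
    # neighbors per point: (B, N, k).
    dist = torch.cdist(x, x)                    # pairwise distances (B, N, N)
    return dist.topk(k, largest=False).indices  # k closest (includes self)

class EdgeConv(nn.Module):
    """Simplified GCN layer: max-aggregate edge features (x_j - x_i, x_i)."""
    def __init__(self, c_in, c_out, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * c_in, c_out), nn.ReLU())

    def forward(self, x):                                      # x: (B, N, C)
        B, N, C = x.shape
        idx = knn(x, self.k)                                   # (B, N, k)
        nbrs = torch.gather(
            x.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, C))         # (B, N, k, C)
        center = x.unsqueeze(2).expand_as(nbrs)
        edge = torch.cat([nbrs - center, center], dim=-1)      # (B, N, k, 2C)
        return self.mlp(edge).max(dim=2).values                # (B, N, c_out)

class NodeShuffle(nn.Module):
    """Expand channels C -> r*C with a GCN, then shuffle to r*N nodes."""
    def __init__(self, c, r, k=20):
        super().__init__()
        self.r = r
        self.gcn = EdgeConv(c, r * c, k)

    def forward(self, x):                   # x: (B, N, C)
        B, N, C = x.shape
        y = self.gcn(x)                     # (B, N, r*C)
        return y.reshape(B, N * self.r, C)  # periodic shuffle -> (B, r*N, C)
```

For example, NodeShuffle(c=32, r=4) maps a (2, 256, 32) feature tensor to (2, 1024, 32), i.e., four new nodes per input node.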

Figure 2. Our proposed Inception DenseGCN. We use the parameters (k, d, c) to define a DenseGCN block. k is the number of neighbors (kernel size), d is the dilation rate, and c is the number of output channels. KNN is applied at the first layer to build the graph and the node neighborhoods.
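The dilation rate d above follows the dilated graph convolutions of DeepGCN: instead of the k nearest neighbors, every d-th neighbor among the k*d nearest is kept, which enlarges the receptive field without increasing the number of neighbors. A minimal sketch, assuming feature-space distances (the helper name dilated_knn is ours):

```python
# Dilated k-NN sketch (our simplification; requires k*d <= N).
import torch

def dilated_knn(x, k, d):
    """x: (B, N, C) point features. Returns (B, N, k) neighbor indices,
    keeping every d-th neighbor among the k*d nearest points."""
    dist = torch.cdist(x, x)                       # (B, N, N)
    idx = dist.topk(k * d, largest=False).indices  # k*d nearest, sorted
    return idx[..., ::d]                           # dilation: stride d
```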

Figure 3. Our proposed PU-GCN architecture. PU-GCN uses a dense feature extractor consisting of one or more densely connected Inception DenseGCN blocks, followed by the upsampler and the coordinate reconstructor.
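The overall pipeline can be summarized as a schematic forward pass. The sketch below is illustrative only: simple per-point MLPs and a channel-expansion-plus-shuffle stand in for Inception DenseGCN and NodeShuffle, and the layer widths are our assumptions, not the paper's settings.

```python
# Schematic PU-GCN forward pass (stand-in layers; illustrative only).
import torch
import torch.nn as nn

class PUGCNSketch(nn.Module):
    def __init__(self, c=64, r=4):
        super().__init__()
        self.r = r
        # Stand-in for the densely connected Inception DenseGCN extractor.
        self.extract = nn.Sequential(nn.Linear(3, c), nn.ReLU(),
                                     nn.Linear(c, c), nn.ReLU())
        # Stand-in for the NodeShuffle upsampler: expand to r*c channels.
        self.expand = nn.Linear(c, r * c)
        # Coordinate reconstructor: per-node MLP regressing xyz offsets/coords.
        self.coord = nn.Sequential(nn.Linear(c, c), nn.ReLU(), nn.Linear(c, 3))

    def forward(self, pts):               # pts: (B, N, 3)
        B, N, _ = pts.shape
        f = self.extract(pts)             # (B, N, c) per-point features
        f = self.expand(f)                # (B, N, r*c)
        f = f.reshape(B, N * self.r, -1)  # shuffle -> (B, r*N, c)
        return self.coord(f)              # (B, r*N, 3) upsampled points
```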

PU600 Dataset

Samples from PU600. The first and second slides contain training samples, and the third slide contains testing samples.

PU600 is a new dataset we compile for point cloud upsampling. It consists of 620 3D models split into 551 training samples and 69 testing samples. The training set contains 171 3D models compiled from the datasets used by PU-Net, 3PU, and PU-GAN, in addition to 380 models collected from ShapeNet. The testing set contains 39 models compiled from the datasets used by PU-Net, 3PU, and PU-GAN, and 30 more models from ShapeNet. The ShapeNet models were randomly chosen from a pool of 450 models spanning 10 different categories with various complexities, to encourage diversity. Overall, PU600 covers a wide semantic range of 3D objects and includes both simple and complex shapes.

PU600 is available here [Google Drive (coming soon)].

Experiments

Figure 4. Qualitative upsampling (x4) results. We show the upsampled point clouds of input (a) when processed by different upsampling methods (b)-(d) and our proposed PU-GCN (e). The ground truth upsampled point clouds are in (f) for reference. All the inputs are taken from the PU600 test set and have 2048 points. PU-GCN produces the best results overall, while preserving fine-grained local details (refer to close-ups).

Figure 5. Upsampling real-scanned point clouds from the KITTI dataset. PU-GCN preserves intricate structures in objects in real-scanned data and generates fine-grained details (the window of the truck, the pedestrians, the cyclist, etc.). Please zoom in to see details.

Table 1. Performance comparison of PU-GCN vs. the state-of-the-art on PU600. PU-GCN outperforms PU-Net and 3PU. We do not compare against PU-GAN on this dataset, since we were unable to reproduce PU-GAN results from the publicly available code. Bold denotes the best performance.

Table 2. Performance comparison of PU-GCN vs. the state-of-the-art on PU-GAN's dataset. Our PU-GCN outperforms PU-Net, 3PU, and PU-GAN in most cases, regardless of the input point cloud size. Bold denotes the best performance.

Table 3. Effect of Inception DenseGCN and global pooling. Using a single Inception DenseGCN block performs better than using the dynamic GCN feature extractor in 3PU. Global pooling further improves performance. Increasing the number of Inception DenseGCN blocks tends to improve PU-GCN performance as well. PU-GCN† uses a single Inception DenseGCN block in its feature extractor. Here, NodeShuffle is used as the upsampling module, the dataset is PU600, the upsampling ratio is x4, and the input size is 2048 points.

Table 4. Ablation study on upsampling modules. Results show that our upsampling modules (Multi-branch GCN, Clone GCN, and NodeShuffle) transfer well to different upsampling architectures in the literature. Replacing the original upsampling module with one of the proposed ones improves upsampling performance overall. PU-GCN† uses a single Inception DenseGCN block in its feature extractor. Bold denotes the best performance for each architecture.

Please cite our paper if you find it helpful:

@misc{Qian2019pugcn,
  title={PU-GCN: Point Cloud Upsampling via Graph Convolutional Network},
  author={Guocheng Qian and Abdulellah Abualshour and Guohao Li and Ali Thabet and Bernard Ghanem},
  year={2019},
  eprint={1912.03264},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}