Neural Points: Point Cloud Representation with Neural Fields for Arbitrary Upsampling

CVPR 2022

Wanquan Feng1      Jin Li2      Hongrui Cai1      Xiaonan Luo2      Juyong Zhang1

1University of Science and Technology of China     2Guilin University of Electronic Technology

We propose a novel point cloud representation named Neural Points and apply it to the arbitrary-factored upsampling task. For each point of the input point cloud, the discrete point-wise local patch is represented by a local continuous neural field, and the global continuous Neural Points surface is constructed by integrating all the local neural fields. Point clouds of arbitrary resolution can then be generated by sampling the constructed continuous Neural Points surface.

Abstract

In this paper, we propose Neural Points, a novel point cloud representation, and apply it to the arbitrary-factored upsampling task. Unlike the traditional point cloud representation, where each point only represents a position or a local plane in 3D space, each point in Neural Points represents a local continuous geometric shape via neural fields. Therefore, Neural Points contains more shape information and thus has a stronger representation ability. Neural Points is trained on surfaces containing rich geometric details, so that the trained model has sufficient expressive power for various shapes. Specifically, we extract deep local features on the points and construct neural fields through the local isomorphism between the 2D parametric domain and the 3D local patch. Finally, the local neural fields are integrated to form the global surface. Experimental results show that Neural Points has powerful representation ability and demonstrates excellent robustness and generalization ability. With Neural Points, we can resample point clouds at arbitrary resolutions, and our method outperforms the state-of-the-art point cloud upsampling methods.

Neural Points Representation

A point cloud is a discrete representation of its underlying continuous surface. In the traditional point cloud representation, where each point only represents a 3D position, the representation ability depends entirely on the resolution. Therefore, one direct strategy to improve the representation ability is point cloud upsampling:
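The original equation is not reproduced on this page; as a hedged sketch in our own notation (the symbols below are our assumption, not the paper's), discrete-to-discrete upsampling maps a sparse point set to a denser one with a fixed factor:

```latex
% Illustrative notation (our assumption): a sparse cloud of N points is
% mapped to a denser cloud of rN points, where r is a fixed integer factor.
\Phi_{\text{up}} :\; \mathcal{P} = \{\mathbf{p}_i\}_{i=1}^{N}
\;\longmapsto\;
\mathcal{Q} = \{\mathbf{q}_j\}_{j=1}^{rN}
```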

However, the upsampling in the equation above is discrete-to-discrete: the upsampled result is still discrete and limited by its resolution. The Neural Points representation instead employs a discrete-to-continuous strategy and overcomes this resolution limitation:
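Again the page's equation is missing; in the same illustrative notation (our assumption), the discrete-to-continuous strategy replaces the fixed factor with a continuous surface that can be sampled at any resolution:

```latex
% Illustrative notation (our assumption): the input cloud is mapped to a
% continuous surface S, which can then be sampled at any resolution M.
\Psi :\; \mathcal{P} = \{\mathbf{p}_i\}_{i=1}^{N}
\;\longmapsto\;
\mathcal{S} \subset \mathbb{R}^3,
\qquad
\mathcal{Q}_M = \operatorname{Sample}(\mathcal{S},\, M),
\quad M \text{ arbitrary}
```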

The pipeline is shown in the figure at the top of this page. Given the input point cloud, we first construct a local neural field for each local patch, based on local parameterization. The local neural field and the bijective mapping function between the isomorphic 2D parametric domain and the 3D local patch are shown as:
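To make the idea concrete, here is a minimal toy sketch of a local neural field: a small MLP maps a 2D parametric coordinate, concatenated with a per-point local shape feature, to a 3D position on the local patch. This is our illustration only; the paper's actual network architecture, feature extractor, and trained weights are not reproduced (the names `FEAT_DIM`, `local_field`, and the random weights are all assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 8   # assumed size of the deep local feature
HIDDEN = 32    # assumed hidden width of the toy MLP

# Random weights stand in for trained parameters (illustration only).
W1 = rng.standard_normal((2 + FEAT_DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, 3)) * 0.1
b2 = np.zeros(3)

def local_field(uv, feature):
    """Map 2D parametric coordinates of one local patch to 3D points.

    uv:      (M, 2) array of parametric coordinates in the patch domain
    feature: (FEAT_DIM,) deep local feature of the patch's center point
    returns: (M, 3) array of 3D positions on the local patch
    """
    # Concatenate the 2D coordinate with the (broadcast) local feature.
    x = np.concatenate(
        [uv, np.broadcast_to(feature, (len(uv), FEAT_DIM))], axis=1
    )
    h = np.maximum(x @ W1 + b1, 0.0)  # one ReLU hidden layer
    return h @ W2 + b2

# Evaluating the field at any number of 2D samples yields that many 3D points.
uv = rng.random((100, 2))
patch_points = local_field(uv, rng.standard_normal(FEAT_DIM))
print(patch_points.shape)  # (100, 3)
```

Because the field is continuous in (u, v), the density of the generated patch is controlled entirely by how many parameter samples are drawn, not by the input resolution.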

Some local patches generated from the neural fields are visualized below (the first row shows the underlying surface from which we extract the local points; the second row shows zoomed-in local parts of the generated neural field patches):

The local neural fields are then integrated to form the global shape. With the constructed continuous neural representation, we can resample an arbitrary number of points.
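The resampling step above can be sketched as follows. In this hedged illustration (our code, not the paper's implementation), each input point owns a local parametric patch; evaluating every patch at `factor` parameter samples yields `factor * N` output points, so the output resolution is arbitrary. A flat tangent plane stands in for the learned neural field, and the names `resample` and `patch_radius` are assumptions.

```python
import numpy as np

def resample(points, normals, factor, patch_radius=0.05, seed=0):
    """Resample a point cloud at an arbitrary resolution.

    points:  (N, 3) input cloud
    normals: (N, 3) unit normals (define each local tangent frame)
    factor:  points generated per input point (any positive integer)
    returns: (N * factor, 3) resampled cloud
    """
    rng = np.random.default_rng(seed)
    out = []
    for p, n in zip(points, normals):
        # Build an orthonormal tangent frame (t1, t2) around the normal n.
        a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        t1 = np.cross(n, a)
        t1 /= np.linalg.norm(t1)
        t2 = np.cross(n, t1)
        # Sample (u, v) in the 2D patch domain; here the "local field" is
        # just the tangent plane, standing in for a learned neural field.
        uv = (rng.random((factor, 2)) - 0.5) * 2.0 * patch_radius
        out.append(p + uv[:, :1] * t1 + uv[:, 1:] * t2)
    return np.concatenate(out, axis=0)

# Arbitrary factors from the same construction:
pts = np.random.default_rng(1).random((16, 3))
nrm = np.tile([0.0, 0.0, 1.0], (16, 1))
print(resample(pts, nrm, 4).shape)   # (64, 3)
print(resample(pts, nrm, 13).shape)  # (208, 3)
```

Note that nothing in `resample` is retrained or re-fit when `factor` changes, which mirrors the paper's claim that one continuous representation supports any upsampling rate.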

Results & Comparisons

Demo: Upsampling Results with Arbitrary Factors

The above demo shows upsampling results with arbitrary factors, where the first and last columns show the input LR point cloud and the ground-truth HR point cloud. Our upsampling results are listed in the middle columns, with the upsampling factor varying continuously from 1 to 16. The performance remains excellent across all factors.

Results & comparisons on Sketchfab dataset

The above figure shows the results and comparisons on the Sketchfab dataset. The Chamfer distance (CD) error metric is also given at the bottom. For better visualization, we zoom in on some local parts of the results and choose appropriate views to show the details.

Results & comparisons on PU-GAN dataset

The above figure shows the results and comparisons on the PU-GAN dataset. The Chamfer distance (CD) error metric is also given at the bottom. Some local parts are displayed for better comparison.

Results with Large Upsampling Factor

The above figure shows the results with a large upsampling factor. The results of the compared methods contain flaws that are easy to observe, while our result maintains good quality even at this large factor.

Results on Real Captured Data

The above figure shows results on depth images of a human face, captured by the depth sensor of an iPhone X, verifying the robustness and effectiveness of Neural Points on real captured data.

Citation

@inproceedings{feng2022np,
    author    = {Wanquan Feng and Jin Li and Hongrui Cai and Xiaonan Luo and Juyong Zhang},
    title     = {Neural Points: Point Cloud Representation with Neural Fields for Arbitrary Upsampling},
    booktitle = {{IEEE/CVF} Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2022}
}