Neural Points: Point Cloud Representation With Neural Fields for Arbitrary Upsampling

Wanquan Feng, Jin Li, Hongrui Cai, Xiaonan Luo, Juyong Zhang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 18633-18642

Abstract


In this paper, we propose Neural Points, a novel point cloud representation, and apply it to the arbitrary-factor upsampling task. Unlike traditional point cloud representations, where each point only represents a position or a local plane in 3D space, each point in Neural Points represents a local continuous geometric shape via a neural field. Neural Points therefore contains more shape information and has stronger representation ability. Neural Points is trained on surfaces containing rich geometric details, so that the trained model has sufficient expressive power for various shapes. Specifically, we extract deep local features at the points and construct neural fields through a local isomorphism between the 2D parametric domain and the 3D local patch. Finally, the local neural fields are integrated to form the global surface. Experimental results show that Neural Points has powerful representation ability and exhibits excellent robustness and generalization. With Neural Points, we can resample point clouds at arbitrary resolutions, and the method outperforms state-of-the-art point cloud upsampling methods. Code is available at https://github.com/WanquanF/NeuralPoints.
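
The abstract describes building one local neural field per point, i.e. a mapping from a 2D parametric domain to a local 3D patch conditioned on a learned per-point feature, and sampling these fields densely to upsample the cloud. The sketch below is only an illustration of that idea under stated assumptions, not the released implementation: the names LocalNeuralField and upsample, the feature dimension, the MLP sizes, and the use of random 2D samples are all hypothetical.

import torch
import torch.nn as nn

class LocalNeuralField(nn.Module):
    # MLP f(u, v; feature) -> (x, y, z): one continuous local patch per point.
    def __init__(self, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, uv: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        # uv: (N, S, 2) parametric samples per point; feat: (N, feat_dim).
        feat = feat.unsqueeze(1).expand(-1, uv.shape[1], -1)
        return self.mlp(torch.cat([uv, feat], dim=-1))  # (N, S, 3) local geometry

def upsample(points, feats, field, samples_per_point):
    # Evaluate every local field on a set of 2D samples and offset by the point
    # position; the union of the resulting local patches is the upsampled cloud.
    n = points.shape[0]
    uv = torch.rand(n, samples_per_point, 2) * 2.0 - 1.0  # samples in [-1, 1]^2
    local = field(uv, feats)
    return (points.unsqueeze(1) + local).reshape(-1, 3)

# Example: 16x upsampling of a 1024-point cloud (per-point features would come
# from a learned feature extractor; random tensors stand in here).
points = torch.rand(1024, 3)
feats = torch.rand(1024, 64)
dense = upsample(points, feats, LocalNeuralField(), samples_per_point=16)
print(dense.shape)  # torch.Size([16384, 3])

Because the number of 2D samples per local field is a free parameter, any upsampling factor can be chosen at inference time, which mirrors the arbitrary-resolution property described in the abstract.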

Related Material


@InProceedings{Feng_2022_CVPR,
    author    = {Feng, Wanquan and Li, Jin and Cai, Hongrui and Luo, Xiaonan and Zhang, Juyong},
    title     = {Neural Points: Point Cloud Representation With Neural Fields for Arbitrary Upsampling},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {18633-18642}
}