VAPCNet: Viewpoint-Aware 3D Point Cloud Completion
Abstract
Most existing learning-based 3D point cloud completion methods ignore the fact that the completion process is highly coupled with the viewpoint of a partial scan. However, the viewpoints of incompletely scanned objects in real-world applications are normally unknown, and directly estimating the viewpoint of each incomplete object is usually time-consuming and incurs a large annotation cost. In this paper, we therefore propose an unsupervised viewpoint representation learning scheme for 3D point cloud completion that requires no explicit viewpoint estimation. Specifically, we learn abstract representations of partial scans that distinguish different viewpoints in the representation space, rather than estimating viewpoints explicitly in 3D space. We also introduce a Viewpoint-Aware Point cloud Completion Network (VAPCNet) that adapts flexibly to various viewpoints based on the learned representations. The proposed viewpoint representation learning scheme extracts discriminative representations that capture accurate viewpoint information. Experiments on two popular public datasets show that our VAPCNet achieves state-of-the-art performance on the point cloud completion task. Source code is available at https://github.com/FZH92128/VAPCNet.
Related Material

@InProceedings{Fu_2023_ICCV,
  author    = {Fu, Zhiheng and Wang, Longguang and Xu, Lian and Wang, Zhiyong and Laga, Hamid and Guo, Yulan and Boussaid, Farid and Bennamoun, Mohammed},
  title     = {VAPCNet: Viewpoint-Aware 3D Point Cloud Completion},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {12108-12118}
}