Learning to Generate Dense Point Clouds With Textures on Multiple Categories

Tao Hu, Geng Lin, Zhizhong Han, Matthias Zwicker; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 2170-2179

Abstract


3D reconstruction from images is a core problem in computer vision. With recent advances in deep learning, it has become possible to recover plausible 3D shapes even from single RGB images. However, obtaining detailed geometry and texture for objects with arbitrary topology remains challenging. In this paper, we propose a novel approach for reconstructing point clouds from RGB images. Unlike other methods, we can recover dense point clouds with hundreds of thousands of points, and we also include RGB textures. In addition, we train our model on multiple categories, which leads to superior generalization to unseen categories compared to previous techniques. We achieve this using a two-stage approach, where we first infer an object coordinate map from the input RGB image, and then obtain the final point cloud using a reprojection and completion step. We show results on standard benchmarks that demonstrate the advantages of our technique.
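The abstract outlines a two-stage pipeline: first predict a per-pixel object coordinate map from the RGB input, then reproject the foreground pixels into a colored partial point cloud and densify it with a completion step. Below is a minimal sketch of that flow, assuming a PyTorch setup; the module names CoordMapNet and CompletionNet, the toy layer choices, and the mask threshold are hypothetical placeholders, not the authors' released code.

```python
# Minimal sketch of the two-stage pipeline described in the abstract.
# Assumes PyTorch; all module names and hyperparameters are hypothetical.
import torch
import torch.nn as nn


class CoordMapNet(nn.Module):
    """Stage 1: predict a per-pixel object coordinate map (plus a
    foreground mask) from a single RGB image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # 3 channels for (x, y, z) object coordinates + 1 for the mask
        self.head = nn.Conv2d(64, 4, 1)

    def forward(self, rgb):                  # rgb: (B, 3, H, W)
        feats = self.encoder(rgb)
        out = self.head(feats)
        coords = out[:, :3]                  # object-space coordinates per pixel
        mask = torch.sigmoid(out[:, 3:])     # foreground probability
        return coords, mask


def reproject(coords, mask, rgb, thresh=0.5):
    """Stage 2a: lift foreground pixels into a colored partial point cloud."""
    clouds = []
    for b in range(coords.shape[0]):
        fg = mask[b, 0] > thresh                       # (H, W) boolean mask
        xyz = coords[b, :, fg].t()                     # (N, 3) coordinates
        color = rgb[b, :, fg].t()                      # (N, 3) RGB per point
        clouds.append(torch.cat([xyz, color], dim=1))  # (N, 6) xyz + rgb
    return clouds


class CompletionNet(nn.Module):
    """Stage 2b: densify/complete the partial colored point cloud.
    A toy point-wise MLP stands in for the actual completion network."""
    def __init__(self, up_factor=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, 128), nn.ReLU(),
            nn.Linear(128, 6 * up_factor),
        )

    def forward(self, partial):              # partial: (N, 6)
        out = self.mlp(partial)              # (N, 6 * up_factor)
        return out.reshape(-1, 6)            # (N * up_factor, 6) dense colored cloud


if __name__ == "__main__":
    rgb = torch.rand(1, 3, 128, 128)
    coords, mask = CoordMapNet()(rgb)
    partial = reproject(coords, mask, rgb)[0]
    dense = CompletionNet()(partial)
    print(partial.shape, dense.shape)        # partial vs. densified colored points
```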

Related Material


[bibtex]
@InProceedings{Hu_2021_WACV,
  author    = {Hu, Tao and Lin, Geng and Han, Zhizhong and Zwicker, Matthias},
  title     = {Learning to Generate Dense Point Clouds With Textures on Multiple Categories},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2021},
  pages     = {2170-2179}
}