RGB2Point: 3D Point Cloud Generation from Single RGB Images

Jae Joong Lee, Bedrich Benes; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 2952-2962

Abstract


We introduce RGB2Point, an unposed single-view RGB image to 3D point cloud generation method based on the Transformer. RGB2Point takes an input image of an object and generates a dense 3D point cloud. Contrary to prior works based on CNN layers and diffusion-denoising approaches, we use pre-trained Transformer layers that are fast and generate high-quality point clouds with consistent quality across the available categories. Our generated point clouds demonstrate high quality on a real-world dataset, as evidenced by improved Chamfer distance (51.15%) and Earth Mover's distance (36.17%) metrics compared to the current state-of-the-art. Additionally, our approach shows better quality on a synthetic dataset, achieving better Chamfer distance (39.26%), Earth Mover's distance (26.95%), and F-score (47.16%). Moreover, our method produces 63.1% more consistent high-quality results across various object categories compared to prior works. Furthermore, RGB2Point is computationally efficient, requiring only 2.3 GB of VRAM to reconstruct a 3D point cloud from a single RGB image, and our implementation generates results 15,133x faster than a SOTA diffusion-based model.
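
The abstract describes the pipeline only at a high level: a pre-trained Transformer image encoder whose features are mapped to a dense set of 3D points, evaluated with Chamfer distance and Earth Mover's distance. Below is a minimal PyTorch sketch of such a pipeline, assuming a ViT backbone from `timm` and a hypothetical MLP regression head; the backbone choice, head design, point count, and loss setup are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a single-image-to-point-cloud pipeline in the spirit of the
# abstract: a pre-trained Transformer image encoder followed by a head that
# regresses a dense set of 3D points. Backbone, head, and point count are
# assumptions for illustration only.
import torch
import torch.nn as nn
import timm


class ImageToPointCloud(nn.Module):
    def __init__(self, num_points: int = 1024, backbone: str = "vit_base_patch16_224"):
        super().__init__()
        self.num_points = num_points
        # Pre-trained Transformer (ViT) image encoder; num_classes=0 yields a
        # global feature vector instead of classification logits.
        self.encoder = timm.create_model(backbone, pretrained=True, num_classes=0)
        feat_dim = self.encoder.num_features
        # Hypothetical regression head mapping the image feature to num_points x 3 coordinates.
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 2048),
            nn.GELU(),
            nn.Linear(2048, num_points * 3),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (B, 3, 224, 224) -> point clouds: (B, num_points, 3)
        feats = self.encoder(images)
        return self.head(feats).view(-1, self.num_points, 3)


def chamfer_distance(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets of shape (B, N, 3) and (B, M, 3)."""
    dists = torch.cdist(pred, gt, p=2) ** 2          # pairwise squared distances (B, N, M)
    return dists.min(dim=2).values.mean() + dists.min(dim=1).values.mean()


if __name__ == "__main__":
    model = ImageToPointCloud()
    dummy = torch.randn(2, 3, 224, 224)
    points = model(dummy)                            # (2, 1024, 3)
    loss = chamfer_distance(points, torch.randn(2, 1024, 3))
    print(points.shape, loss.item())
```

The Chamfer distance shown here is one of the two evaluation metrics cited in the abstract; Earth Mover's distance and F-score are reported in the paper as well but are omitted from this sketch.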

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Lee_2025_WACV,
    author    = {Lee, Jae Joong and Benes, Bedrich},
    title     = {RGB2Point: 3D Point Cloud Generation from Single RGB Images},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {2952-2962}
}