Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction

Seungtae Nam, Xiangyu Sun, Gyeongjin Kang, Younggeun Lee, Seungjun Oh, Eunbyung Park; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 26683-26693

Abstract

Generalized feed-forward Gaussian models have made significant strides in sparse-view 3D reconstruction by leveraging prior knowledge from large multi-view datasets. However, these models often struggle to represent high-frequency details, primarily due to the limited number of Gaussians they predict. While the densification strategy from per-scene 3D Gaussian splatting can be adapted to feed-forward models, it typically requires tens of thousands of optimization steps to reconstruct fine details and can easily lead to overfitting in sparse-view scenarios. In this paper, we propose Generative Densification, an efficient and generalizable densification strategy tailored specifically to feed-forward models. Instead of iteratively splitting and cloning raw Gaussian parameters, our method up-samples the feature representations produced by feed-forward models and generates the corresponding fine Gaussians in a single forward pass, leveraging learned prior knowledge for enhanced generalization. Experimental results on both object-level and scene-level reconstruction tasks demonstrate that our method outperforms state-of-the-art approaches with comparable or smaller model sizes, achieving notable improvements in representing fine details.
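
To make the single-pass densification concrete, the following minimal PyTorch-style sketch expands each coarse Gaussian's feature into several child features and decodes them into fine Gaussians in one forward pass, rather than iteratively splitting and cloning raw parameters. It is a toy illustration under stated assumptions, not the authors' implementation: the class name GenerativeDensifier, the feature width feat_dim, the K-way linear up-sampler, the 14-channel Gaussian parameterization, and the residual decoding are all hypothetical choices.

import torch
import torch.nn as nn

class GenerativeDensifier(nn.Module):
    """Illustrative sketch only: expand each coarse Gaussian's feature
    into K child features, then decode each child into fine Gaussian
    parameters in a single forward pass (no iterative split/clone)."""

    def __init__(self, feat_dim: int = 64, k: int = 4, gauss_dim: int = 14):
        super().__init__()
        self.k = k
        # Learned up-sampler: one coarse feature -> K child features.
        self.upsample = nn.Linear(feat_dim, k * feat_dim)
        # Decoder: child feature -> Gaussian parameters. The 14 channels
        # assume 3 (position) + 4 (rotation) + 3 (scale) + 1 (opacity)
        # + 3 (color); the paper's actual parameterization may differ.
        self.decode = nn.Linear(feat_dim, gauss_dim)

    def forward(self, feats: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        # feats:  (N, feat_dim)  per-Gaussian features from the backbone
        # coarse: (N, gauss_dim) coarse Gaussian parameters
        n, d = feats.shape
        children = self.upsample(feats).view(n, self.k, d)  # (N, K, D)
        deltas = self.decode(children)                      # (N, K, gauss_dim)
        # Treating fine Gaussians as residuals around each parent is an
        # assumption made for this sketch.
        fine = coarse.unsqueeze(1) + deltas                 # (N, K, gauss_dim)
        return fine.reshape(n * self.k, -1)                 # (N*K, gauss_dim)

# Example: densify 1,024 coarse Gaussians into 4,096 fine ones.
densifier = GenerativeDensifier()
fine = densifier(torch.randn(1024, 64), torch.randn(1024, 14))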

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Nam_2025_CVPR,
    author    = {Nam, Seungtae and Sun, Xiangyu and Kang, Gyeongjin and Lee, Younggeun and Oh, Seungjun and Park, Eunbyung},
    title     = {Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {26683-26693}
}