Generative Sparse-View Gaussian Splatting

Hanyang Kong, Xingyi Yang, Xinchao Wang; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 26745-26755

Abstract


Novel view synthesis from limited observations remains a significant challenge due to the lack of information in under-sampled regions, often resulting in noticeable artifacts. We introduce Generative Sparse-View Gaussian Splatting (GS-GS), a general pipeline designed to enhance the rendering quality of 3D/4D Gaussian Splatting (GS) when training views are sparse. Our method hallucinates unseen views with pre-trained image diffusion models, iteratively refining the generated images at pseudo views to improve view consistency. Unlike purely generative methods, which often fail to maintain view consistency, this approach improves 3D/4D scene reconstruction by explicitly enforcing semantic correspondences during the generation of unseen views, thereby enhancing geometric consistency. Extensive evaluations on various 3D/4D datasets, including Blender, LLFF, Mip-NeRF360, and Neural 3D Video, demonstrate that GS-GS outperforms existing state-of-the-art methods in rendering quality without sacrificing efficiency.
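
At a high level, the abstract describes a loop that alternates between fitting the Gaussians to the available views and using a diffusion prior to refine renders at held-out pseudo cameras. The Python sketch below illustrates that loop under stated assumptions only: it uses an off-the-shelf image-to-image diffusion pipeline from Hugging Face diffusers as the refiner (the checkpoint name is just an example), and optimize_gaussians / render_gaussians are hypothetical stand-ins for a Gaussian Splatting trainer and rasterizer, not the authors' implementation.

import torch
from diffusers import StableDiffusionImg2ImgPipeline

# Any pre-trained image-to-image diffusion model can serve as the prior;
# this checkpoint is only an example, not the one used in the paper.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def refine_pseudo_view(rendered_image, prompt, strength=0.3):
    # Lightly denoise a rendered pseudo view with the diffusion prior.
    # A small `strength` keeps the output close to the render, which is
    # one plausible way to preserve geometric consistency.
    return pipe(prompt=prompt, image=rendered_image, strength=strength).images[0]

def train_with_pseudo_views(gaussians, train_views, pseudo_cameras, prompt,
                            n_rounds=3):
    for _ in range(n_rounds):
        # 1. Fit the Gaussians to the sparse real training views.
        gaussians = optimize_gaussians(gaussians, train_views)  # hypothetical
        pseudo_views = []
        for cam in pseudo_cameras:
            # 2. Render each pseudo camera and hallucinate missing detail.
            render = render_gaussians(gaussians, cam)  # hypothetical
            pseudo_views.append((cam, refine_pseudo_view(render, prompt)))
        # 3. Re-optimize with the refined pseudo views as extra supervision.
        gaussians = optimize_gaussians(gaussians, train_views + pseudo_views)
    return gaussians

In practice, the refinement strength and the number of pseudo cameras would be tuned per scene; the sketch is meant only to convey the alternating render-refine-reoptimize structure.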

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Kong_2025_CVPR,
    author    = {Kong, Hanyang and Yang, Xingyi and Wang, Xinchao},
    title     = {Generative Sparse-View Gaussian Splatting},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {26745-26755}
}