SinGS: Animatable Single-Image Human Gaussian Splats with Kinematic Priors

Yufan Wu, Xuanhong Chen, Wen Li, Shunran Jia, Hualiang Wei, Kairui Feng, Jialiang Chen, Yuhan Li, Ang He, Weimin Zhang, Bingbing Ni, Wenjun Zhang; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 5571-5580

Abstract


Despite significant advances in geometry estimation for contemporary single-image 3D human reconstruction, creating a high-quality, efficient, and animatable 3D avatar remains an open challenge. Two key obstacles persist: incomplete observation and inconsistent 3D priors. To address these challenges, we propose SinGS, which aims to achieve high-quality and efficient animatable 3D avatar reconstruction. At the heart of SinGS are two key components: Kinematic Human Diffusion and Geometry-Preserving 3D Gaussian Splatting. The former is a foundational human model that samples within pose space to generate a highly 3D-consistent and high-quality sequence of human images, inferring unseen viewpoints and providing kinematic priors. The latter is a system that reconstructs a compact, high-quality 3D avatar even under imperfect priors, achieved through a novel semantic Laplacian regularization and a geometry-preserving density control strategy that together enable precise and compact assembly of 3D primitives. Extensive experiments demonstrate that SinGS enables lifelike, animatable human reconstructions, maintaining both high quality and inference efficiency (up to 70 FPS).
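To make the idea of a semantic Laplacian regularization over 3D Gaussian primitives concrete, the sketch below shows one plausible form of such a term: each Gaussian center is pulled toward the average of its nearest neighbors that share the same semantic part label. This is an illustrative assumption, not the paper's actual formulation; the function name, the k-NN neighborhood construction, and the part labels are all hypothetical.

```python
# Illustrative sketch (not the authors' code): a Laplacian-style smoothness term
# over 3D Gaussian centers, restricted to neighbors sharing a semantic part label.
import torch

def semantic_laplacian_loss(means, labels, k=8):
    """Penalize each Gaussian center's deviation from the mean of its
    k nearest neighbors within the same semantic part.

    means:  (N, 3) tensor of Gaussian centers
    labels: (N,)   integer semantic part label per Gaussian
    """
    loss = means.new_zeros(())
    for part in labels.unique():
        pts = means[labels == part]             # centers belonging to one part
        if pts.shape[0] <= k:
            continue
        d = torch.cdist(pts, pts)               # pairwise distances within the part
        d.fill_diagonal_(float("inf"))          # exclude self from the neighbor search
        idx = d.topk(k, largest=False).indices  # k nearest neighbors per center
        neighbor_mean = pts[idx].mean(dim=1)    # local (graph-Laplacian-style) average
        loss = loss + (pts - neighbor_mean).pow(2).sum(dim=-1).mean()
    return loss

# Hypothetical usage: add to the reconstruction objective with a small weight, e.g.
# total = photometric_loss + lambda_lap * semantic_laplacian_loss(gaussian_means, part_labels)
```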

Related Material


[bibtex]
@InProceedings{Wu_2025_CVPR,
    author    = {Wu, Yufan and Chen, Xuanhong and Li, Wen and Jia, Shunran and Wei, Hualiang and Feng, Kairui and Chen, Jialiang and Li, Yuhan and He, Ang and Zhang, Weimin and Ni, Bingbing and Zhang, Wenjun},
    title     = {SinGS: Animatable Single-Image Human Gaussian Splats with Kinematic Priors},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {5571-5580}
}