GPAvatar: High-fidelity Head Avatars by Learning Efficient Gaussian Projections

Wei-Qi Feng, Dong Han, Ze-Kang Zhou, Shunkai Li, Xiaoqiang Liu, Pengfei Wan, Di Zhang, Miao Wang; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 250-259

Abstract


Existing radiance field-based head avatar methods mostly rely on pre-computed explicit priors (e.g., meshes, point clouds) or neural implicit representations, making it challenging to achieve high fidelity together with computational efficiency and low memory consumption. To overcome this, we present GPAvatar, a novel and efficient Gaussian splatting-based method for reconstructing high-fidelity dynamic 3D head avatars from monocular videos. We extend Gaussians from 3D space to a high-dimensional embedding space that encompasses each Gaussian's spatial position and the avatar's expression, enabling the representation of head avatars with arbitrary pose and expression. To enable splatting-based rasterization, a linear transformation is learned to project each high-dimensional Gaussian back to 3D space; this linear map suffices to capture expression variations without resorting to complex neural networks. Furthermore, we propose an adaptive densification strategy that dynamically allocates Gaussians to regions with high expression variance, improving the representation of facial detail. Experimental results on three datasets show that our method outperforms existing state-of-the-art methods in rendering quality and speed while reducing memory usage during both training and rendering.
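The core idea of the abstract, projecting high-dimensional Gaussians back to 3D with a learned linear map, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the embedding dimension `K`, the way the expression code is injected into each Gaussian's embedding, and the random stand-in for the learned projection `P` are all assumptions.

```python
import numpy as np

K = 10  # expression-embedding dimension (assumed for illustration)
N = 4   # number of Gaussians in this toy example

rng = np.random.default_rng(0)
mu_hd = rng.normal(size=(N, 3 + K))  # high-dimensional Gaussian means
expr = rng.normal(size=(K,))         # current expression code (assumed shape)

# Learned linear projection P: (3 + K) -> 3. In the paper this would be
# optimized during training; here it is a random stand-in.
P = rng.normal(size=(3, 3 + K)) * 0.1

# Condition each Gaussian on the current expression by writing the
# expression code into its expression slots, then project linearly to
# obtain a 3D position that a standard splatting rasterizer can consume.
conditioned = mu_hd.copy()
conditioned[:, 3:] = expr        # broadcast expression to all N Gaussians
mu_3d = conditioned @ P.T        # (N, 3) positions in 3D space

print(mu_3d.shape)  # (4, 3)
```

Because the projection is a single matrix multiply per Gaussian, changing the expression code and re-projecting is cheap, which is consistent with the efficiency claim in the abstract.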

Related Material


@InProceedings{Feng_2025_CVPR,
    author    = {Feng, Wei-Qi and Han, Dong and Zhou, Ze-Kang and Li, Shunkai and Liu, Xiaoqiang and Wan, Pengfei and Zhang, Di and Wang, Miao},
    title     = {GPAvatar: High-fidelity Head Avatars by Learning Efficient Gaussian Projections},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {250-259}
}