LUCAS: Layered Universal Codec Avatars

Di Liu, Teng Deng, Giljoo Nam, Yu Rong, Stanislav Pidhorskyi, Junxuan Li, Jason Saragih, Dimitris N. Metaxas, Chen Cao; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 21127-21137

Abstract

Photorealistic 3D head avatar reconstruction faces critical challenges in modeling dynamic face-hair interactions and achieving cross-identity generalization, particularly during expressions and head movements. We present LUCAS, a novel Universal Prior Model (UPM) for codec avatar modeling that disentangles face and hair through a layered representation. Unlike previous UPMs that treat hair as an integral part of the head, our approach separates the modeling of the hairless head and the hair into distinct branches. LUCAS introduces the first mesh-based UPM, enabling real-time on-device rendering. It can also be integrated with Gaussian Splatting to enhance visual fidelity, which is particularly beneficial for rendering complex hairstyles. Experimental results indicate that LUCAS outperforms existing single-mesh and Gaussian-based avatar models in both quantitative and qualitative assessments, including evaluations on held-out subjects in zero-shot driving scenarios. LUCAS demonstrates superior dynamic performance in handling head pose changes, expression transfer, and hairstyle variations, thereby advancing the state of the art in 3D head avatar reconstruction.

Related Material

@InProceedings{Liu_2025_CVPR,
    author    = {Liu, Di and Deng, Teng and Nam, Giljoo and Rong, Yu and Pidhorskyi, Stanislav and Li, Junxuan and Saragih, Jason and Metaxas, Dimitris N. and Cao, Chen},
    title     = {LUCAS: Layered Universal Codec Avatars},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {21127-21137}
}