TexVocab: Texture Vocabulary-conditioned Human Avatars

Yuxiao Liu, Zhe Li, Yebin Liu, Haoqian Wang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 1715-1725

Abstract


To adequately utilize the available image evidence in multi-view video-based avatar modeling, we propose TexVocab, a novel avatar representation that constructs a texture vocabulary and associates body poses with texture maps for animation. Given multi-view RGB videos, our method first back-projects all available images in the training videos onto the posed SMPL surface, producing texture maps in the SMPL UV domain. We then construct pairs of human poses and texture maps to establish a texture vocabulary for encoding dynamic human appearances under various poses. Unlike the commonly used joint-wise manner, we further design a body-part-wise encoding strategy to learn the structural effects of the kinematic chain. Given a driving pose, we query the pose feature hierarchically by decomposing the pose vector into several body parts and interpolating the texture features to synthesize fine-grained human dynamics. Overall, our method is able to create animatable human avatars with detailed and dynamic appearances from RGB videos, and experiments show that it outperforms state-of-the-art approaches.
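To make the vocabulary idea concrete, below is a minimal sketch (not the paper's implementation) of a body-part-wise pose-to-texture-feature query: each training pose is paired with a texture feature map, and at animation time each body part independently retrieves its k nearest training poses and blends their features. The joint grouping, all names (BODY_PARTS, build_vocabulary, query_part_feature, tau), the nearest-neighbor retrieval, and the uniform part fusion are assumptions for illustration; the paper learns the encoding and fusion with networks.

```python
import numpy as np

# Hypothetical sizes: SMPL has 24 joints with 3 axis-angle parameters each.
NUM_JOINTS, POSE_DIM = 24, 3
FEAT_DIM = 32   # per-texel texture feature channels (illustrative)
UV_RES = 64     # toy UV resolution; the paper uses full SMPL UV maps

# Illustrative grouping of SMPL joint indices into body parts
# (assumed, not the paper's exact partition).
BODY_PARTS = {
    "torso":     [0, 3, 6, 9, 13, 14],
    "head":      [12, 15],
    "left_arm":  [16, 18, 20, 22],
    "right_arm": [17, 19, 21, 23],
    "left_leg":  [1, 4, 7, 10],
    "right_leg": [2, 5, 8, 11],
}

def build_vocabulary(poses, texture_feats):
    """Pair each training pose with its texture feature map.

    poses:         (N, 24, 3) axis-angle body poses from the training video.
    texture_feats: (N, UV_RES, UV_RES, FEAT_DIM) features standing in for
                   texture maps back-projected onto the posed SMPL surface.
    """
    return {"poses": poses, "feats": texture_feats}

def query_part_feature(vocab, driving_pose, part_joints, k=4, tau=0.1):
    """Interpolate texture features for one body part.

    Pose distance is computed only over this part's joint angles, so each
    part retrieves the training frames whose local pose is most similar.
    """
    n = len(vocab["poses"])
    keys = vocab["poses"][:, part_joints].reshape(n, -1)
    query = driving_pose[part_joints].reshape(-1)
    dists = np.linalg.norm(keys - query, axis=1)   # (N,)
    idx = np.argsort(dists)[:k]                    # k nearest training poses
    w = np.exp(-dists[idx] / tau)
    w /= w.sum()                                   # softmax-style weights
    # Weighted blend of the k retrieved texture feature maps.
    return np.tensordot(w, vocab["feats"][idx], axes=1)

def query_vocabulary(vocab, driving_pose):
    """Body-part-wise query: blend the per-part results into one feature
    map (a uniform average here purely for illustration)."""
    parts = [query_part_feature(vocab, driving_pose, joints)
             for joints in BODY_PARTS.values()]
    return np.mean(parts, axis=0)

# Toy usage with random stand-ins for real training data.
rng = np.random.default_rng(0)
vocab = build_vocabulary(
    rng.standard_normal((100, NUM_JOINTS, POSE_DIM)),
    rng.standard_normal((100, UV_RES, UV_RES, FEAT_DIM)).astype(np.float32),
)
feat = query_vocabulary(vocab, rng.standard_normal((NUM_JOINTS, POSE_DIM)))
print(feat.shape)  # (64, 64, 32)
```

The per-part distance is the point of the body-part-wise strategy: a raised arm should retrieve frames with a similar arm pose regardless of what the legs were doing, which a single whole-body pose distance cannot guarantee.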

Related Material


[pdf] [supp] [arXiv]
@InProceedings{Liu_2024_CVPR,
    author    = {Liu, Yuxiao and Li, Zhe and Liu, Yebin and Wang, Haoqian},
    title     = {TexVocab: Texture Vocabulary-conditioned Human Avatars},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {1715-1725}
}