Towards Variable and Coordinated Holistic Co-Speech Motion Generation

Yifei Liu, Qiong Cao, Yandong Wen, Huaiguang Jiang, Changxing Ding; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 1566-1576

Abstract


This paper addresses the problem of generating lifelike holistic co-speech motions for 3D avatars, focusing on two key aspects: variability and coordination. Variability allows the avatar to exhibit a wide range of motions even with similar speech content, while coordination ensures a harmonious alignment among facial expressions, hand gestures, and body poses. We aim to achieve both with ProbTalk, a unified probabilistic framework designed to jointly model facial, hand, and body movements in speech. ProbTalk builds on the variational autoencoder (VAE) architecture and incorporates three core designs. First, we introduce product quantization (PQ) to the VAE, which enriches the representation of complex holistic motion. Second, we devise a novel non-autoregressive model that embeds 2D positional encoding into the product-quantized representation, thereby preserving essential structure information of the PQ codes. Last, we employ a secondary stage to refine the preliminary prediction, further sharpening the high-frequency details. Coupling these three designs enables ProbTalk to generate natural and diverse holistic co-speech motions, outperforming several state-of-the-art methods in qualitative and quantitative evaluations, particularly in terms of realism. Our code and model will be released for research purposes at https://feifeifeiliu.github.io/probtalk/.
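To make the product-quantization idea mentioned in the abstract concrete, the sketch below shows how a latent vector can be split into groups, with each group snapped to its nearest code in a separate codebook. This is only a minimal illustration of generic product quantization, not the authors' ProbTalk implementation; the function name `product_quantize`, the codebook sizes, and the toy dimensions are assumptions chosen for clarity.

```python
import torch

def product_quantize(z, codebooks):
    """Quantize latent z by splitting it into G groups and snapping each
    group to its nearest code in the corresponding codebook.

    z:          (batch, G * d) latent vectors
    codebooks:  list of G tensors, each (K, d), one codebook per group
    returns:    quantized latents (batch, G * d) and chosen code indices (batch, G)
    """
    groups = torch.chunk(z, chunks=len(codebooks), dim=-1)  # G tensors of shape (batch, d)
    quantized, indices = [], []
    for g, cb in zip(groups, codebooks):
        dist = torch.cdist(g, cb)          # (batch, K) pairwise distances to all codes
        idx = dist.argmin(dim=-1)          # nearest code per sample
        quantized.append(cb[idx])          # replace each group with its nearest code
        indices.append(idx)
    return torch.cat(quantized, dim=-1), torch.stack(indices, dim=-1)

# Toy usage (illustrative sizes): 2 groups, 8 codes per group, 4-dim sub-vectors.
codebooks = [torch.randn(8, 4) for _ in range(2)]
z = torch.randn(5, 8)
z_q, codes = product_quantize(z, codebooks)
print(z_q.shape, codes.shape)  # torch.Size([5, 8]) torch.Size([5, 2])
```

Because each group is quantized independently, the effective codebook size grows multiplicatively with the number of groups, which is what makes a product-quantized representation richer than a single flat codebook of comparable memory.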

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Liu_2024_CVPR,
  author    = {Liu, Yifei and Cao, Qiong and Wen, Yandong and Jiang, Huaiguang and Ding, Changxing},
  title     = {Towards Variable and Coordinated Holistic Co-Speech Motion Generation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {1566-1576}
}