DR2: Disentangled Recurrent Representation Learning for Data-Efficient Speech Video Synthesis

Chenxu Zhang, Chao Wang, Yifan Zhao, Shuo Cheng, Linjie Luo, Xiaohu Guo; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 6204-6214

Abstract


Although substantial progress has been made in audio-driven talking video synthesis, two major difficulties remain: existing works 1) require a long training video (>1 hour) to synthesize co-speech gestures, which significantly limits their applicability; and 2) often fail to generate long sequences, or can generate them only with limited diversity. To address these challenges, we propose a Disentangled Recurrent Representation Learning framework that synthesizes long, diversified gesture sequences from a short training video of around 2 minutes. In our framework, we first make a disentangled latent space assumption to encourage unpaired audio and pose combinations, which yields diverse "one-to-many" mappings in pose generation. Next, we apply a recurrent inference module that feeds the last generation back as initial guidance for the next phase, enabling long-term video generation with full continuity and diversity. Comprehensive experimental results verify that our model can generate realistic, synchronized full-body talking videos in a data-efficient manner.
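The recurrent inference idea described in the abstract can be illustrated with a minimal sketch: each audio chunk is decoded into poses, and the last generated pose of a chunk is fed back as the initial guidance for the next chunk, while an independently sampled latent code (disentangled from the audio) injects gesture diversity. All module names, shapes, and hyperparameters below (PoseGenerator, audio_dim, latent_dim, etc.) are hypothetical stand-ins for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of chunk-wise recurrent generation with a disentangled latent.
import torch
import torch.nn as nn

class PoseGenerator(nn.Module):
    def __init__(self, audio_dim=128, pose_dim=64, latent_dim=32, hidden=256):
        super().__init__()
        # Audio features, the seed pose, and the latent code are concatenated per frame.
        self.gru = nn.GRU(audio_dim + pose_dim + latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)

    def forward(self, audio_chunk, seed_pose, z):
        # audio_chunk: (B, T, audio_dim); seed_pose: (B, pose_dim); z: (B, latent_dim)
        T = audio_chunk.size(1)
        cond = torch.cat([seed_pose, z], dim=-1).unsqueeze(1).expand(-1, T, -1)
        h, _ = self.gru(torch.cat([audio_chunk, cond], dim=-1))
        return self.head(h)  # (B, T, pose_dim)

def generate_long_sequence(gen, audio_chunks, pose_dim=64, latent_dim=32):
    """Chunk-by-chunk inference: the last pose of each generated chunk seeds
    the next chunk, giving continuity; a fresh latent per chunk gives diversity."""
    B = audio_chunks[0].size(0)
    seed = torch.zeros(B, pose_dim)      # neutral starting pose
    outputs = []
    for audio in audio_chunks:
        z = torch.randn(B, latent_dim)   # latent sampled independently of the audio
        poses = gen(audio, seed, z)
        outputs.append(poses)
        seed = poses[:, -1].detach()     # recurrent feedback into the next phase
    return torch.cat(outputs, dim=1)
```

The detach on the feedback pose reflects one plausible design choice (treating the previous chunk's output purely as guidance at inference time); the actual training and conditioning scheme is detailed in the paper itself.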

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Zhang_2024_WACV,
    author    = {Zhang, Chenxu and Wang, Chao and Zhao, Yifan and Cheng, Shuo and Luo, Linjie and Guo, Xiaohu},
    title     = {DR2: Disentangled Recurrent Representation Learning for Data-Efficient Speech Video Synthesis},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {6204-6214}
}