Style Transfer for 2D Talking Head Generation

Trong Thang Pham, Tuong Do, Nhat Le, Ngan Le, Hung Nguyen, Erman Tjiputra, Quang Tran, Anh Nguyen; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 7500-7509

Abstract


Audio-driven talking head animation is a challenging research topic with many real-world applications. Recent works have focused on creating photo-realistic 2D animation, while learning different talking or singing styles remains an open problem. In this paper, we present a new method to generate talking head animation with learnable style references. Given a set of style reference frames, our framework can reconstruct 2D talking head animation from a single input image and an audio stream. Our method first produces facial landmark motion from the audio stream and constructs intermediate style patterns from the style reference images. We then feed both outputs into a style-aware image generator to produce photo-realistic, high-fidelity 2D animation. In practice, our framework can extract the style information of a specific character and transfer it to any new static image for talking head animation. Extensive experimental results show that our method outperforms recent state-of-the-art approaches both qualitatively and quantitatively. Our source code will be made publicly available.
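
To make the three-stage data flow described in the abstract concrete, below is a minimal, runnable Python sketch. Every module name (AudioToLandmarks, StyleEncoder, StyleAwareGenerator), shape, and placeholder computation here is a hypothetical illustration of the stated pipeline (audio to landmark motion, style references to a style pattern, both into a style-aware generator), not the authors' actual implementation.

# Hypothetical sketch of the pipeline from the abstract; module names,
# shapes, and placeholder computations are assumptions for illustration.
import numpy as np

class AudioToLandmarks:
    """Maps an audio stream to per-frame facial landmark motion."""
    def __call__(self, audio: np.ndarray, fps: int = 25, sr: int = 16000) -> np.ndarray:
        num_frames = int(len(audio) / sr * fps)
        # Placeholder output: 68 2D landmarks per video frame.
        return np.zeros((num_frames, 68, 2), dtype=np.float32)

class StyleEncoder:
    """Builds an intermediate style pattern from style reference frames."""
    def __call__(self, reference_frames: np.ndarray) -> np.ndarray:
        # Placeholder: pool the reference frames into a single style vector.
        return reference_frames.reshape(len(reference_frames), -1).mean(axis=0)

class StyleAwareGenerator:
    """Renders output frames from an identity image, landmarks, and a style pattern."""
    def __call__(self, identity: np.ndarray, landmarks: np.ndarray, style: np.ndarray) -> np.ndarray:
        # Placeholder: emit one copy of the identity image per landmark frame.
        return np.repeat(identity[None], len(landmarks), axis=0)

def animate(identity_image, audio, style_references):
    landmarks = AudioToLandmarks()(audio)      # audio stream -> landmark motion
    style = StyleEncoder()(style_references)   # reference frames -> style pattern
    return StyleAwareGenerator()(identity_image, landmarks, style)

# Example with dummy data: a 256x256 RGB image, 2 s of 16 kHz audio,
# and five 256x256 style reference frames.
frames = animate(
    np.zeros((256, 256, 3), dtype=np.float32),
    np.zeros(32000, dtype=np.float32),
    np.zeros((5, 256, 256, 3), dtype=np.float32),
)
print(frames.shape)  # (50, 256, 256, 3)

The sketch only fixes the interfaces between the three stages; in the paper each stage would be a learned network, and the style pattern is what gets transferred to a new static image.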

Related Material


@InProceedings{Pham_2024_CVPR,
    author    = {Pham, Trong Thang and Do, Tuong and Le, Nhat and Le, Ngan and Nguyen, Hung and Tjiputra, Erman and Tran, Quang and Nguyen, Anh},
    title     = {Style Transfer for 2D Talking Head Generation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {7500-7509}
}