Learning Identity-Invariant Motion Representations for Cross-ID Face Reenactment

Po-Hsiang Huang, Fu-En Yang, Yu-Chiang Frank Wang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 7084-7092

Abstract


Human face reenactment aims at transferring motion patterns from one face (from a source-domain video) to another (in the target domain with the identity of interest). While recent works report impressive results, they are not able to handle multiple identities in a unified model. In this paper, we propose a unique network of CrossID-GAN to perform multi-ID face reenactment. Given a source-domain video with extracted facial landmarks and a target-domain image, our CrossID-GAN learns identity-invariant motion patterns via the extracted landmarks and utilizes such information to produce videos whose ID matches that of the target domain. Both supervised and unsupervised settings are employed to train and guide our model during training. Our qualitative/quantitative results confirm the robustness and effectiveness of our model, with ablation studies confirming our network design.
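The abstract describes a pipeline that decouples motion (carried by facial landmarks from the source video) from identity (carried by the target image), then recombines them in a generator. The following toy sketch illustrates that inference flow only; every function here is a hypothetical stand-in, not the CrossID-GAN implementation, where each component would be a learned neural network.

```python
# Toy sketch of the cross-ID reenactment flow described in the abstract.
# All components are illustrative stand-ins, NOT the authors' model:
# in the paper, the motion/identity encoders and the generator are
# learned networks, and landmarks come from a face-landmark detector.

def extract_landmarks(frame):
    # Stand-in for landmark detection; frames here already carry landmarks.
    return frame["landmarks"]

def encode_motion(landmark_seq):
    # Identity-invariant motion code: modeled here as per-frame landmark
    # displacements from the first frame, so absolute face shape
    # (an identity cue) is factored out.
    base = landmark_seq[0]
    return [tuple(p - b for p, b in zip(pts, base)) for pts in landmark_seq]

def encode_identity(target_image):
    # Stand-in identity code extracted from the target-domain image.
    return target_image["identity"]

def generate(motion_codes, identity_code):
    # The generator combines the target identity with source motion,
    # yielding one output frame per source frame.
    return [{"identity": identity_code, "motion": m} for m in motion_codes]

def reenact(source_video, target_image):
    landmarks = [extract_landmarks(f) for f in source_video]
    return generate(encode_motion(landmarks), encode_identity(target_image))
```

The key property the sketch mimics is that the output sequence inherits its identity entirely from the target image and its motion entirely from the source video.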

Related Material


[bibtex]
@InProceedings{Huang_2020_CVPR,
author = {Huang, Po-Hsiang and Yang, Fu-En and Wang, Yu-Chiang Frank},
title = {Learning Identity-Invariant Motion Representations for Cross-ID Face Reenactment},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}