Realistic Dynamic Facial Textures From a Single Image Using GANs

Kyle Olszewski, Zimo Li, Chao Yang, Yi Zhou, Ronald Yu, Zeng Huang, Sitao Xiang, Shunsuke Saito, Pushmeet Kohli, Hao Li; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5429-5438

Abstract


We present a novel method to realistically puppeteer and animate a face from a single RGB image using a source video sequence. We begin by fitting a multilinear PCA model to obtain the 3D geometry and a single texture of the target face. For the animation to be realistic, however, we need dynamic per-frame textures that capture the subtle wrinkles and deformations corresponding to the animated facial expressions. This problem is highly underconstrained, as dynamic textures cannot be obtained directly from a single image. Furthermore, if the target face has a closed mouth, no actual images of the mouth interior are available. To address these issues, we train a deep generative adversarial network that infers realistic per-frame texture deformations of the target identity, including the mouth interior, from the per-frame source textures and the single target texture. By retargeting the PCA expression geometry from the source and applying the newly inferred textures, we can both animate the face and perform video face replacement on the source video using the target's appearance.
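The conditioning scheme described above can be sketched as follows. This is a minimal illustration, not the authors' actual architecture: the function and shapes are assumptions, and the real system operates on unwrapped UV texture maps fed to a trained generator. The key idea shown is that each per-frame source texture is paired with the single static target texture (e.g., by channel-wise stacking) to form one conditioned generator input per animation frame:

```python
import numpy as np

def make_generator_input(source_texture, target_texture):
    """Stack a per-frame source texture with the single target texture
    along the channel axis (hypothetical conditioning; shapes are
    illustrative, not taken from the paper)."""
    assert source_texture.shape == target_texture.shape
    return np.concatenate([source_texture, target_texture], axis=-1)

# Toy 256x256 RGB arrays standing in for unwrapped facial textures.
H, W = 256, 256
target = np.random.rand(H, W, 3)             # single texture of the target face
source_frames = np.random.rand(10, H, W, 3)  # per-frame source textures

# One conditioned input per animation frame; a generator would map each
# 6-channel input to a per-frame target texture.
batch = np.stack([make_generator_input(f, target) for f in source_frames])
print(batch.shape)  # (10, 256, 256, 6)
```

A generator conditioned this way sees both the expression-dependent deformations (from the source frame) and the target's identity (from the static texture) at every time step.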

Related Material


@InProceedings{Olszewski_2017_ICCV,
author = {Olszewski, Kyle and Li, Zimo and Yang, Chao and Zhou, Yi and Yu, Ronald and Huang, Zeng and Xiang, Sitao and Saito, Shunsuke and Kohli, Pushmeet and Li, Hao},
title = {Realistic Dynamic Facial Textures From a Single Image Using GANs},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017},
pages = {5429-5438}
}