Expression Transfer Using Flow-Based Generative Models

Andrea Valenzuela, Carlos Segura, Ferran Diego, Vicenc Gomez; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 1023-1031

Abstract


Among the different deepfake generation techniques, flow-based methods appear as natural candidates: owing to their invertibility, they eliminate the need for person-specific training and can reconstruct any input image almost perfectly to human perception. We present a method for deepfake generation based on facial expression transfer using flow-based generative models. Our approach relies on simple latent-vector operations, akin to those used for attribute manipulation, but applied to transferring expressions between source-target identity pairs. We demonstrate the feasibility of this approach using a pre-trained Glow model and small sets of source and target images, not necessarily seen during prior training. We also provide an evaluation pipeline for the generated images in terms of similarities between identities and of the Action Units encoding the expression to be transferred. Our results show that efficient expression transfer is feasible with the proposed approach, setting a first precedent for deepfake content creation, and its evaluation, independently of the training identities.
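The latent-vector operations described above can be sketched as attribute-manipulation arithmetic: encode small sets of expressive and neutral source images, take the difference of their mean latents as an expression direction, and add it to the target's latent before decoding. The sketch below is a minimal illustration of that arithmetic, not the paper's implementation; the `encode`/`decode` stubs are hypothetical stand-ins for a pre-trained flow model such as Glow, which in practice provides the invertible mapping.

```python
import numpy as np

def encode(image):
    # Placeholder for the flow's forward pass (image -> latent z).
    # A real Glow model would replace this with its invertible mapping.
    return np.asarray(image, dtype=np.float64)

def decode(z):
    # Placeholder for the flow's inverse pass (latent z -> image).
    return z

def transfer_expression(source_expr, source_neutral, target):
    """Shift the target latent along the source's expression direction.

    The direction is the difference between the mean latent of a small
    set of expressive source images and that of neutral source images,
    mirroring attribute-manipulation vector arithmetic.
    """
    z_expr = np.mean([encode(im) for im in source_expr], axis=0)
    z_neutral = np.mean([encode(im) for im in source_neutral], axis=0)
    direction = z_expr - z_neutral
    return decode(encode(target) + direction)

# Toy 2x2 arrays standing in for images/latents.
src_expr = [np.ones((2, 2)), 3 * np.ones((2, 2))]  # mean latent = 2
src_neutral = [np.zeros((2, 2))]                   # mean latent = 0
target = np.full((2, 2), 5.0)
out = transfer_expression(src_expr, src_neutral, target)
print(out)  # target latent shifted by the expression direction (+2)
```

Because the flow is invertible, decoding the shifted latent yields an image of the target identity wearing the transferred expression; with the identity stubs above, the output is simply the target array shifted by +2 everywhere.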

Related Material


[bibtex]
@InProceedings{Valenzuela_2021_CVPR,
    author    = {Valenzuela, Andrea and Segura, Carlos and Diego, Ferran and Gomez, Vicenc},
    title     = {Expression Transfer Using Flow-Based Generative Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {1023-1031}
}