A Decoupled 3D Facial Shape Model by Adversarial Training

Victoria Fernandez Abrevaya, Adnane Boukhayma, Stefanie Wuhrer, Edmond Boyer; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 9419-9428

Abstract


Data-driven generative 3D face models are used to compactly encode facial shape data into meaningful parametric representations. A desirable property of these models is their ability to effectively decouple natural sources of variation, in particular identity and expression. While factorized representations have been proposed for that purpose, they are still limited in the variability they can capture and may present modeling artifacts when applied to tasks such as expression transfer. In this work, we explore a new direction with Generative Adversarial Networks and show that they contribute to better face modeling performance, especially in decoupling natural factors, while also producing more diverse samples. To train the model, we introduce a novel architecture that combines a 3D generator with a 2D discriminator leveraging conventional CNNs, with the two components bridged by a geometry mapping layer. We further present a training scheme, based on auxiliary classifiers, to explicitly disentangle identity and expression attributes. Through quantitative and qualitative results on standard face datasets, we illustrate the benefits of our model and demonstrate that it outperforms competing state-of-the-art methods in terms of decoupling and diversity.
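
The following is a minimal PyTorch sketch of the architecture described in the abstract: a 3D generator driven by separate identity and expression codes, a geometry mapping layer that turns the generated mesh into a 2D geometry image so that a conventional image CNN can serve as discriminator, and auxiliary identity/expression classification heads on that discriminator. It is not the authors' implementation; the layer sizes, UV-map resolution, vertex count, and the precomputed UV correspondence indices are all illustrative assumptions.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps decoupled identity and expression codes to 3D vertex positions."""
    def __init__(self, z_id=64, z_exp=32, n_vertices=5023):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(z_id + z_exp, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, n_vertices * 3),
        )
        self.n_vertices = n_vertices

    def forward(self, id_code, exp_code):
        v = self.mlp(torch.cat([id_code, exp_code], dim=-1))
        return v.view(-1, self.n_vertices, 3)  # (B, V, 3) mesh vertices

class GeometryMapping(nn.Module):
    """Bridges 3D and 2D: for every pixel of a UV grid, looks up the 3D
    position of the corresponding mesh vertex, yielding an image a standard
    CNN can consume. `uv_index` is an (H, W) long tensor of vertex indices,
    assumed precomputed from a fixed template parameterization."""
    def __init__(self, uv_index):
        super().__init__()
        self.register_buffer("uv_index", uv_index)

    def forward(self, vertices):  # (B, V, 3) -> (B, 3, H, W)
        b = vertices.shape[0]
        h, w = self.uv_index.shape
        flat = vertices[:, self.uv_index.view(-1), :]        # (B, H*W, 3)
        return flat.permute(0, 2, 1).reshape(b, 3, h, w)

class Discriminator(nn.Module):
    """2D CNN with a real/fake head plus auxiliary identity and expression
    classifiers, used to encourage decoupling of the two factors."""
    def __init__(self, n_ids=100, n_exps=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(128, 1)       # real vs. generated
        self.id_head = nn.Linear(128, n_ids)    # auxiliary identity classifier
        self.exp_head = nn.Linear(128, n_exps)  # auxiliary expression classifier

    def forward(self, geometry_image):
        f = self.features(geometry_image)
        return self.adv_head(f), self.id_head(f), self.exp_head(f)

# Example wiring: generate a mesh, map it to a geometry image, discriminate.
uv_index = torch.randint(0, 5023, (64, 64))    # placeholder UV correspondence
gen, mapper, disc = Generator(), GeometryMapping(uv_index), Discriminator()
verts = gen(torch.randn(2, 64), torch.randn(2, 32))
real_fake, id_logits, exp_logits = disc(mapper(verts))

In this sketch, the adversarial loss would be computed on the real/fake head while cross-entropy losses on the identity and expression heads provide the auxiliary supervision that pushes the two latent codes apart; the exact losses and training schedule are not specified here.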

Related Material


@InProceedings{Abrevaya_2019_ICCV,
    author    = {Abrevaya, Victoria Fernandez and Boukhayma, Adnane and Wuhrer, Stefanie and Boyer, Edmond},
    title     = {A Decoupled 3D Facial Shape Model by Adversarial Training},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2019},
    pages     = {9419-9428}
}