From attribute-labels to faces: face generation using a conditional generative adversarial network

Yaohui Wang, Antitza Dantcheva, Francois Bremond; Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018

Abstract


Facial attributes are instrumental in semantically characterizing faces. Automated classification of such attributes (e.g., age, gender, ethnicity) has been a well-studied topic. Here we explore the inverse problem: given attribute labels, generate the associated faces. Interest in this topic is fueled by related applications in law enforcement and entertainment. In this work, we propose two models for attribute-label-based facial image and video generation, incorporating 2D and 3D deep conditional generative adversarial networks (DCGANs). The attribute labels serve as a tool to determine the specific representations of the generated images and videos. While these are early results, our findings indicate the methods’ ability to generate realistic faces from attribute labels.
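The core conditioning mechanism described above can be sketched minimally: in a conditional GAN, the attribute labels are encoded as a condition vector and combined with the latent noise vector to form the generator input. The helper below is an illustrative NumPy sketch under that assumption; the function name, attribute vocabulary, and one-hot scheme are hypothetical and do not reflect the paper's exact architecture.

```python
import numpy as np

def make_generator_input(noise_dim, attributes, attribute_vocab, rng=None):
    """Build a conditional-GAN generator input by concatenating a latent
    noise vector z with a multi-hot encoding c of the desired attribute
    labels (illustrative sketch; the paper's architecture may differ)."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(noise_dim)           # latent noise vector z
    cond = np.zeros(len(attribute_vocab))        # attribute condition vector c
    for a in attributes:
        cond[attribute_vocab.index(a)] = 1.0     # set the bit for each label
    return np.concatenate([z, cond])             # generator input [z | c]

# Example: ask the generator for a "female", "young" face
vocab = ["male", "female", "young", "old"]
x = make_generator_input(100, ["female", "young"], vocab)
print(x.shape)  # (104,)
```

At sampling time, varying only the condition vector while keeping z fixed is the standard way such models steer the generated face toward the requested attributes.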

Related Material


[bibtex]
@InProceedings{Wang_2018_ECCV_Workshops,
author = {Wang, Yaohui and Dantcheva, Antitza and Bremond, Francois},
title = {From attribute-labels to faces: face generation using a conditional generative adversarial network},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}