DyadGAN: Generating Facial Expressions in Dyadic Interactions

Yuchi Huang, Saad M. Khan; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 11-18

Abstract

Generative Adversarial Networks (GANs) have been shown to produce synthetic images of compelling realism. In this work, we present a conditional GAN approach to generating contextually valid facial expressions in dyadic interactions. In contrast to previous work, which conditioned on facial attributes of the generated identities, we focus on dyads to model the influence of one person's facial expressions on the other's reactions. We introduce a two-level GAN model of interviewer-interviewee interactions: in the first stage, dynamic face sketches of the interviewer are generated conditioned on the interviewee's expressions; in the second stage, face images are synthesized from those sketches. We demonstrate that our model is effective at synthesizing visually compelling face images in dyadic interactions. Moreover, we show quantitatively that the facial expressions depicted in the generated interviewer face images reflect valid emotional reactions to the interviewee's behavior.
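
As a rough illustration of the two-stage design described above, the following PyTorch sketch (a minimal reading of the abstract, not the paper's released code) wires a stage-1 sketch generator, conditioned on interviewee expression features, into a stage-2 image generator. The layer sizes, the 16-dimensional expression encoding, and the plain MLP architectures are illustrative assumptions; the adversarial discriminators and training losses are omitted.

import torch
import torch.nn as nn

class SketchGenerator(nn.Module):
    # Stage 1: generate an interviewer face sketch conditioned on
    # interviewee expression features (all dimensions are assumptions).
    def __init__(self, noise_dim=100, cond_dim=16, sketch_dim=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, sketch_dim),
            nn.Tanh(),  # sketch pixels scaled to [-1, 1]
        )

    def forward(self, z, cond):
        # Conditioning is injected by concatenating noise and condition.
        return self.net(torch.cat([z, cond], dim=1))

class ImageGenerator(nn.Module):
    # Stage 2: synthesize an RGB face image from the stage-1 sketch.
    def __init__(self, sketch_dim=64 * 64, image_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sketch_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, image_dim),
            nn.Tanh(),
        )

    def forward(self, sketch):
        return self.net(sketch)

# Usage: sample a batch of interviewer faces from interviewee conditions.
z = torch.randn(4, 100)              # latent noise
cond = torch.randn(4, 16)            # stand-in interviewee expression features
sketch = SketchGenerator()(z, cond)  # shape (4, 64*64)
image = ImageGenerator()(sketch)     # shape (4, 64*64*3)
print(sketch.shape, image.shape)

In a full training setup, each stage would pair its generator with a discriminator; the stage-1 discriminator would also receive the condition vector so that generated sketches are judged against the interviewee's expression.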

Related Material

[bibtex]
@InProceedings{Huang_2017_CVPR_Workshops,
author = {Huang, Yuchi and Khan, Saad M.},
title = {DyadGAN: Generating Facial Expressions in Dyadic Interactions},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}