- [pdf] [supp] [arXiv]
S2FGAN: Semantically Aware Interactive Sketch-To-Face Translation
Interactive facial image manipulation edits single or multiple face attributes using a photo-realistic face and/or a semantic mask as input. When no photo-realistic image is available (only a sketch/mask), previous methods merely reconstruct the original face and ignore the potential for improving controllability and diversity in the translation process. This paper proposes a sketch-to-image generation framework called S2FGAN, which aims to improve the interpretability and flexibility of face attribute editing from a simple sketch. First, to restore a vivid face from a sketch, we propose a semantic-level perceptual loss that increases translation quality. Second, we provide a theoretical analysis of attribute editing and build attribute mapping networks with a latent semantic loss to modify the latent-space semantics of Generative Adversarial Networks (GANs). Users can direct the model to retouch the generated images by incorporating semantic information into the generation process. In this way, our method can manipulate single or multiple face attributes by specifying only the attributes to be changed. Extensive experiments on the CelebAMask-HQ dataset empirically demonstrate our superior performance and effectiveness on this task. Our method outperforms state-of-the-art sketch-to-image generation and attribute manipulation methods by offering finer control over attribute intensity.
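The idea of modifying latent-space semantics to control attribute intensity can be sketched as follows. This is a minimal, illustrative example of the general latent-editing technique (shifting a latent code along a learned attribute direction before decoding), not the paper's actual attribute mapping network; all names and the 512-dimensional latent size are assumptions.

```python
import numpy as np

def edit_latent(z, direction, intensity):
    """Shift a latent code along a unit-norm attribute direction.

    The edited code z' = z + intensity * d would then be decoded by
    the generator; larger |intensity| strengthens or weakens the
    attribute. This is a generic stand-in for the paper's learned
    attribute mapping networks.
    """
    d = direction / np.linalg.norm(direction)
    return z + intensity * d

rng = np.random.default_rng(0)
z = rng.standard_normal(512)          # latent code, e.g. from a sketch encoder
smile_dir = rng.standard_normal(512)  # hypothetical learned attribute direction

z_more = edit_latent(z, smile_dir, 3.0)   # increase attribute intensity
z_less = edit_latent(z, smile_dir, -3.0)  # decrease attribute intensity
```

Because the edit is a simple additive shift, intensity acts as a continuous dial: a zero intensity leaves the latent code, and hence the generated face, unchanged.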