Generative Adversarial Network with Spatial Attention for Face Attribute Editing

Gang Zhang, Meina Kan, Shiguang Shan, Xilin Chen; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 417-432

Abstract


Face attribute editing aims at editing a face image according to a given attribute. Most existing works employ Generative Adversarial Networks (GANs) for face attribute editing. However, these methods inevitably change the attribute-irrelevant regions, as shown in Fig. 1 of the paper. We therefore introduce a spatial attention mechanism into the GAN framework (referred to as SaGAN), so that only the attribute-specific region is altered while the rest is kept unchanged. SaGAN consists of a generator and a discriminator. The generator contains an attribute manipulation network (AMN) that edits the face image, and a spatial attention network (SAN) that localizes the attribute-specific region and restricts the alterations of AMN to that region. The discriminator endeavors to distinguish the generated images from the real ones and to classify the face attribute. Experiments demonstrate that our approach achieves promising visual results while keeping the attribute-irrelevant regions unchanged. Besides, our approach can benefit face recognition through data augmentation.
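
The generator described in the abstract amounts to a masked blend: the SAN mask gates the AMN's edit so that pixels outside the attended region are copied from the input image. Below is a minimal PyTorch sketch of that idea; the class names, layer counts, and channel widths are illustrative assumptions, not the paper's exact architecture.

# Minimal sketch of the SaGAN generator idea (assumed architecture details).
import torch
import torch.nn as nn

class AttributeManipulationNetwork(nn.Module):
    # AMN: edits the whole image conditioned on the target attribute (simplified).
    def __init__(self, in_ch=3, attr_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + attr_dim, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, attr):
        # Broadcast the attribute label over the spatial dimensions and concatenate.
        a = attr.view(attr.size(0), -1, 1, 1).expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, a], dim=1))

class SpatialAttentionNetwork(nn.Module):
    # SAN: predicts a per-pixel mask of the attribute-specific region.
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class SaGANGenerator(nn.Module):
    # Edits are applied only inside the attended region; the rest is copied from the input.
    def __init__(self):
        super().__init__()
        self.amn = AttributeManipulationNetwork()
        self.san = SpatialAttentionNetwork()

    def forward(self, x, attr):
        edited = self.amn(x, attr)
        mask = self.san(x)                      # values in [0, 1], one channel
        return mask * edited + (1 - mask) * x   # attribute-irrelevant pixels stay unchanged

# Usage: a 128x128 face image and a binary target attribute label.
g = SaGANGenerator()
img = torch.rand(1, 3, 128, 128) * 2 - 1
attr = torch.tensor([[1.0]])
out = g(img, attr)
print(out.shape)  # torch.Size([1, 3, 128, 128])

In this sketch the discriminator is omitted; per the abstract it would both judge real versus generated images and classify the face attribute.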

Related Material


[bibtex]
@InProceedings{Zhang_2018_ECCV,
author = {Zhang, Gang and Kan, Meina and Shan, Shiguang and Chen, Xilin},
title = {Generative Adversarial Network with Spatial Attention for Face Attribute Editing},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}