MagGAN: High-Resolution Face Attribute Editing with Mask-Guided Generative Adversarial Network

Yi Wei, Zhe Gan, Wenbo Li, Siwei Lyu, Ming-Ching Chang, Lei Zhang, Jianfeng Gao, Pengchuan Zhang; Proceedings of the Asian Conference on Computer Vision (ACCV), 2020

Abstract


We present Mask-guided Generative Adversarial Network (MagGAN) for high-resolution face attribute editing, in which semantic facial masks from a pre-trained face parser are used to guide the fine-grained image editing process. With the introduction of a mask-guided reconstruction loss, MagGAN learns to edit only the facial parts that are relevant to the desired attribute changes, while preserving the attribute-irrelevant regions (e.g., hat and scarf for the modification 'To Bald'). Further, a novel mask-guided conditioning strategy is introduced to incorporate the influence region of each attribute change into the generator. In addition, a multi-level patch-wise discriminator structure is proposed to scale our model for high-resolution (1024 × 1024) face editing. Experiments on the CelebA benchmark show that the proposed method significantly outperforms prior state-of-the-art approaches in terms of both image quality and editing performance.
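The exact loss formulation is given in the paper; as a rough illustration only, the sketch below shows one way a mask-guided reconstruction loss could be written in PyTorch. The function name, tensor shapes, and the soft mask m_irrelevant (marking attribute-irrelevant regions produced by a face parser) are assumptions for this example, not the authors' released code.

```python
# Hypothetical sketch of a mask-guided reconstruction loss:
# penalize pixel changes only inside attribute-irrelevant regions,
# so the generator is free to edit attribute-relevant facial parts.
import torch


def mask_guided_reconstruction_loss(x_real: torch.Tensor,
                                     x_edited: torch.Tensor,
                                     m_irrelevant: torch.Tensor) -> torch.Tensor:
    """x_real, x_edited: (B, 3, H, W) images in [0, 1].
    m_irrelevant: (B, 1, H, W) soft mask, 1 = region to preserve
    (e.g., hat, scarf for the edit 'To Bald')."""
    diff = torch.abs(x_real - x_edited)   # per-pixel L1 difference
    weighted = m_irrelevant * diff        # keep the penalty only where pixels must stay fixed
    return weighted.mean()


# Usage with random tensors standing in for real data:
x = torch.rand(2, 3, 256, 256)
x_hat = torch.rand(2, 3, 256, 256)
mask = (torch.rand(2, 1, 256, 256) > 0.5).float()
loss = mask_guided_reconstruction_loss(x, x_hat, mask)
```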

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Wei_2020_ACCV,
    author    = {Wei, Yi and Gan, Zhe and Li, Wenbo and Lyu, Siwei and Chang, Ming-Ching and Zhang, Lei and Gao, Jianfeng and Zhang, Pengchuan},
    title     = {MagGAN: High-Resolution Face Attribute Editing with Mask-Guided Generative Adversarial Network},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {November},
    year      = {2020}
}