DHG-GAN: Diverse Image Outpainting via Decoupled High Frequency Semantics

Yiwen Xu, Maurice Pagnucco, Yang Song; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 3977-3993

Abstract
Diverse image outpainting aims to restore large missing regions surrounding a known region while generating multiple plausible results. Although existing outpainting methods have demonstrated promising image reconstruction quality, they are ineffective at providing content that is both diverse and realistic. This paper proposes a Decoupled High-frequency semantic Guidance-based GAN (DHG-GAN) for diverse image outpainting, with the following contributions. 1) We propose a two-stage method in which the first stage generates high-frequency semantic images to guide the structural and textural information in the outpainting region, and the second stage is a semantic completion network that completes the image outpainting based on this semantic guidance. 2) We design spatially varying stylemaps that enable targeted editing of high-frequency semantics in the outpainting region, producing diverse and realistic results. We evaluate the photorealism and quality of the diverse results generated by our model on the CelebA-HQ, Places2 and Oxford 102 Flower datasets. The experimental results demonstrate large improvements over state-of-the-art approaches.

Related Material
[pdf] [supp]
[bibtex]
@InProceedings{Xu_2022_ACCV,
  author    = {Xu, Yiwen and Pagnucco, Maurice and Song, Yang},
  title     = {DHG-GAN: Diverse Image Outpainting via Decoupled High Frequency Semantics},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {December},
  year      = {2022},
  pages     = {3977-3993}
}