Very Long Natural Scenery Image Prediction by Outpainting

Zongxin Yang, Jian Dong, Ping Liu, Yi Yang, Shuicheng Yan; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 10561-10570

Abstract


Compared with image inpainting, image outpainting has received less attention because of two inherent challenges. The first is how to keep spatial and content consistency between the generated regions and the original input. The second is how to maintain high quality in the generated results, especially in multi-step generation, where the generated regions lie spatially far from the initial input. To address these two problems, we devise two novel modules, named Skip Horizontal Connection and Recurrent Content Transfer, and integrate them into our encoder-decoder architecture. With this design, our network generates highly realistic outpainting predictions effectively and efficiently. Moreover, our method can generate very long new images while keeping the same style and semantic content as the given input. To evaluate the proposed architecture, we collect a new scenery dataset with diverse, complicated natural scenes. The experimental results on this dataset demonstrate the efficacy of our proposed network.
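The multi-step generation described above can be pictured as a simple feedback loop: each newly outpainted block is appended to the canvas and then serves as context for predicting the next block, which is how the output can grow to arbitrary horizontal length. The sketch below illustrates only this looping structure, not the paper's actual network; `predict_next_block` is a hypothetical stand-in for the learned generator.

```python
import numpy as np

def predict_next_block(context, width):
    """Hypothetical stand-in for the learned generator: extends the
    image by repeating its rightmost column. The real model would run
    the encoder-decoder with Recurrent Content Transfer here."""
    last_col = context[:, -1:, :]
    return np.repeat(last_col, width, axis=1)

def outpaint(image, steps, block_width):
    """Multi-step outpainting loop: each generated block is appended
    to the canvas and becomes context for the next prediction."""
    canvas = image
    for _ in range(steps):
        context = canvas[:, -block_width:, :]
        block = predict_next_block(context, block_width)
        canvas = np.concatenate([canvas, block], axis=1)
    return canvas

img = np.zeros((64, 64, 3))
out = outpaint(img, steps=4, block_width=32)
print(out.shape)  # (64, 192, 3)
```

Because only the most recent block is passed as context, the loop runs in constant memory per step regardless of how long the output image grows, mirroring why far-from-input regions are the hard case the paper targets.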

Related Material


[pdf]
[bibtex]
@InProceedings{Yang_2019_ICCV,
author = {Yang, Zongxin and Dong, Jian and Liu, Ping and Yang, Yi and Yan, Shuicheng},
title = {Very Long Natural Scenery Image Prediction by Outpainting},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}