Image Multi-Inpainting via Progressive Generative Adversarial Networks
Image inpainting aims to recover missing pixels naturally and realistically. However, previous deep learning approaches require specific designs for different types of masks and do not generalize well to complicated inpainting scenarios. Therefore, in addition to the most common stroke-type masks, we propose in this paper a unified framework that handles multiple types of masks simultaneously (e.g., strokes, object shapes, extrapolation, dense and periodic grids, etc.). We address this problem by applying a progressive learning scheme to a Semantic-Aware Generative Adversarial Network (SA-PatchGAN), yielding a mask-independent network that produces results of high perceptual quality. More specifically, training proceeds in multiple stages so that the model gradually generates the output image from coarse to fine. In our experiments, we show that this strategy yields a large performance gain over single-scale learning methods. We also introduce additional semantic conditioning to the discriminator, which encourages high-quality local style statistics, and show that this approach is effective across a wider range of scenarios and tasks and adapts better to various mask types. Our method produces promising results on various mask types using a single model.
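The coarse-to-fine idea above can be illustrated with a minimal toy sketch: each "stage" fills masked pixels using context at one scale, and stages run from coarse (large receptive field) to fine (small receptive field). All names here are hypothetical, and the neighbour-averaging step is only a stand-in for the learned per-stage generators of the actual SA-PatchGAN model.

```python
# Toy sketch of progressive (coarse-to-fine) inpainting.
# NOTE: neighbour averaging is a hypothetical stand-in for a learned
# generator at each scale; it only illustrates the staged scheme.

def inpaint_stage(image, mask, radius):
    """One stage: fill each missing pixel (mask == False) with the mean
    of known pixels within `radius`, mimicking a generator at one scale."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    new_mask = [row[:] for row in mask]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:  # pixel is missing
                vals = [image[j][i]
                        for j in range(max(0, y - radius), min(h, y + radius + 1))
                        for i in range(max(0, x - radius), min(w, x + radius + 1))
                        if mask[j][i]]
                if vals:  # enough known context at this scale
                    out[y][x] = sum(vals) / len(vals)
                    new_mask[y][x] = True
    return out, new_mask

def progressive_inpaint(image, mask, radii=(4, 2, 1)):
    """Run stages from coarse (large radius) to fine (small radius),
    so each later stage refines what earlier stages produced."""
    for r in radii:
        image, mask = inpaint_stage(image, mask, r)
    return image

# Usage: an 8x8 constant image with a 2x2 hole in the centre.
img = [[1.0] * 8 for _ in range(8)]
hole = [[not (3 <= y <= 4 and 3 <= x <= 4) for x in range(8)] for y in range(8)]
filled = progressive_inpaint(img, hole)
```

Because the toy image is constant, every recovered pixel equals 1.0; in the learned setting, each stage instead refines structure and texture left by the previous, coarser stage.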