High-Fidelity Pluralistic Image Completion With Transformers

Ziyu Wan, Jingbo Zhang, Dongdong Chen, Jing Liao; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 4692-4701

Abstract


Image completion has made tremendous progress with convolutional neural networks (CNNs), owing to their powerful texture modeling capacity. However, due to some inherent properties (e.g., local inductive priors, spatially invariant kernels), CNNs do not perform well at understanding global structures, nor do they naturally support pluralistic completion. Recently, transformers have demonstrated their power in modeling long-range relationships and generating diverse results, but their computational complexity is quadratic in the input length, which hampers their application to high-resolution images. This paper brings the best of both worlds to pluralistic image completion: appearance prior reconstruction with a transformer and texture replenishment with a CNN. The transformer recovers pluralistic, coherent structures together with some coarse textures, while the CNN enhances the local texture details of the coarse priors, guided by the high-resolution masked images. The proposed method vastly outperforms state-of-the-art methods in three aspects: 1) a large performance boost in image fidelity, even compared with deterministic completion methods; 2) better diversity and higher fidelity for pluralistic completion; 3) exceptional generalization ability to large masks and generic datasets such as ImageNet. Code and pre-trained models have been publicly released at https://github.com/raywzy/ICT.
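
To make the two-stage design described above concrete, the following is a minimal, hypothetical PyTorch sketch of such a pipeline: a transformer fills in the missing tokens of a low-resolution, discretized appearance prior by sampling (which is what makes diverse outputs possible), and a CNN refines that coarse prior into a high-resolution result, guided by the masked image. All class names, layer sizes, and token/mask conventions here are illustrative assumptions, not the authors' implementation; the official repository linked above contains the actual code.

# Hypothetical sketch of a transformer-prior + guided-CNN completion pipeline.
# Names and hyperparameters are illustrative only (not the ICT code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AppearancePriorTransformer(nn.Module):
    """Transformer over a low-resolution token grid.

    Masked positions are filled by sampling from the predicted token
    distribution, which yields pluralistic (diverse) coarse priors.
    """

    def __init__(self, vocab_size=512, seq_len=32 * 32, dim=256, depth=4, heads=8):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)
        self.pos_emb = nn.Parameter(torch.zeros(1, seq_len, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        x = self.tok_emb(tokens) + self.pos_emb[:, : tokens.size(1)]
        return self.head(self.blocks(x))  # (B, L, vocab_size) logits

    @torch.no_grad()
    def sample_missing(self, tokens, missing_idx, temperature=1.0):
        # Fill each missing position by sampling (not argmax), one at a time.
        for i in missing_idx:
            logits = self.forward(tokens)[:, i] / temperature
            probs = F.softmax(logits, dim=-1)
            tokens[:, i] = torch.multinomial(probs, 1).squeeze(-1)
        return tokens


class GuidedUpsamplingCNN(nn.Module):
    """Refines the coarse prior into a high-resolution result, guided by the
    original masked image (simplified encoder-decoder, no skip connections)."""

    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3 + 1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, coarse_prior, masked_image, mask):
        # Upsample the low-resolution prior to the target resolution.
        prior_up = F.interpolate(coarse_prior, size=masked_image.shape[-2:],
                                 mode="bilinear", align_corners=False)
        x = torch.cat([prior_up, masked_image, mask], dim=1)
        out = self.net(x)
        # Keep known pixels (mask == 1), replace only the masked region.
        return masked_image * mask + out * (1.0 - mask)


if __name__ == "__main__":
    # Shapes only: a 32x32 token grid and a 256x256 masked image.
    B = 1
    tokens = torch.randint(0, 512, (B, 32 * 32))
    missing = list(range(100, 110))               # indices of masked tokens
    prior_tokens = AppearancePriorTransformer().sample_missing(tokens, missing)
    # Decoding tokens back to colors (codebook lookup) is omitted; placeholder:
    coarse = torch.rand(B, 3, 32, 32) * 2 - 1
    masked = torch.rand(B, 3, 256, 256) * 2 - 1
    mask = torch.ones(B, 1, 256, 256)
    result = GuidedUpsamplingCNN()(coarse, masked, mask)
    print(result.shape)                           # torch.Size([1, 3, 256, 256])

Sampling the missing tokens, rather than taking the most likely one, is what allows multiple plausible completions from the same masked input: running the sampling step several times produces distinct coarse priors, each of which the CNN then refines at full resolution.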

Related Material


@InProceedings{Wan_2021_ICCV,
    author    = {Wan, Ziyu and Zhang, Jingbo and Chen, Dongdong and Liao, Jing},
    title     = {High-Fidelity Pluralistic Image Completion With Transformers},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {4692-4701}
}