VIP: Versatile Image Outpainting Empowered by Multimodal Large Language Model
Abstract
In this paper, we focus on the problem of image outpainting, which aims to extrapolate the surrounding regions of an image given its center content. Although recent works have achieved promising performance, their lack of versatility and customization hinders practical application in broader scenarios. This work therefore presents a novel image outpainting framework that can customize its results according to user requirements. First, we take advantage of a Multimodal Large Language Model (MLLM) that automatically extracts and organizes textual descriptions of the masked and unmasked parts of a given image. The obtained text prompts endow our model with the capacity to customize the outpainting results. In addition, a Center-Total-Surrounding (C-T-S) decoupled control mechanism is carefully designed to strengthen text-driven generation by enhancing the interaction between specific spatial regions of the image and the corresponding parts of the text prompts. Note that, unlike most existing methods, our approach is very resource-efficient: it is only slightly fine-tuned from an off-the-shelf Stable Diffusion (SD) model rather than being trained from scratch. Finally, experimental results on three commonly used datasets, i.e., Scenery, Building, and WikiArt, demonstrate that our model significantly surpasses state-of-the-art (SoTA) methods. Moreover, versatile outpainting results are presented to show its customization ability.
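To make the general setup described in the abstract concrete, below is a minimal, illustrative sketch of text-conditioned outpainting with an off-the-shelf Stable Diffusion inpainting pipeline: the known center content is pasted onto a larger canvas, the border is masked for generation, and the pipeline is conditioned on a prompt of the kind an MLLM might produce. The checkpoint name, canvas sizes, and prompt string are assumptions for illustration only and are not details taken from the paper; the C-T-S decoupled control mechanism is not reproduced here.

# Illustrative sketch only: prompt-conditioned outpainting with an off-the-shelf
# Stable Diffusion inpainting pipeline. Checkpoint, sizes, and prompt are assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed off-the-shelf SD inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Place the known center content on a larger canvas; the border is the region
# to be extrapolated (mask value 255 = "generate here", 0 = "keep").
center = Image.open("center.png").convert("RGB").resize((256, 256))
canvas = Image.new("RGB", (512, 512), (127, 127, 127))
canvas.paste(center, (128, 128))
mask = Image.new("L", (512, 512), 255)
mask.paste(Image.new("L", (256, 256), 0), (128, 128))

# Hypothetical MLLM-style description of the total scene used as the text prompt.
prompt = "a mountain lake at sunset, surrounded by pine forest and distant peaks"

result = pipe(
    prompt=prompt,
    image=canvas,
    mask_image=mask,
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
result.save("outpainted.png")

In this sketch the whole surrounding region is driven by a single prompt; the paper's C-T-S mechanism instead decouples the control so that the center, total, and surrounding descriptions interact with their corresponding spatial regions.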
Related Material
[pdf] [arXiv]
BibTeX:
@InProceedings{Yang_2024_ACCV,
    author    = {Yang, Jinze and Wang, Haoran and Zhu, Zining and Liu, Chenglong and Wu, Meng and Sun, Mingming},
    title     = {VIP: Versatile Image Outpainting Empowered by Multimodal Large Language Model},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2024},
    pages     = {1082-1099}
}