Amodal Completion via Progressive Mixed Context Diffusion
Abstract
Our brain can effortlessly recognize objects even when they are partially hidden from view. Perceiving the hidden parts of a partially visible object is called amodal completion; however, this task remains a challenge for generative AI despite rapid progress. We propose to sidestep many of the difficulties of existing approaches, which typically involve a two-step process of predicting amodal masks and then generating pixels. Our method involves thinking outside the box, literally! We go outside the object bounding box and use its surrounding context to guide a pre-trained diffusion inpainting model, then progressively grow the occluded object and trim away the extra background. We overcome two technical challenges: 1) how to avoid unwanted co-occurrence bias, which tends to regenerate similar occluders, and 2) how to judge whether an amodal completion has succeeded. Our amodal completion method produces more photorealistic completions than existing approaches across numerous cases. And the best part? It does not require any special training or fine-tuning of models. Project page and code: https://k8xu.github.io/amodal/
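The abstract's core idea is to let scene context outside the object's bounding box guide a pre-trained diffusion inpainting model when regenerating the occluded region. Below is a minimal sketch of one such inpainting step, assuming Hugging Face diffusers and the stabilityai/stable-diffusion-2-inpainting checkpoint; the crop margin, prompt template, and helper names are illustrative assumptions, and the paper's progressive object growing, mixed-context strategy, and completion-success check are not implemented here.

```python
# Sketch only (not the authors' released pipeline): a single round of
# context-aware inpainting of an occluded object with a pre-trained
# diffusion inpainting model.
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline


def expand_box(mask: np.ndarray, margin: float = 0.5):
    """Bounding box of the visible object mask, grown by `margin` per side
    so the model sees scene context outside the box (margin is an assumption)."""
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    h, w = y1 - y0, x1 - x0
    H, W = mask.shape
    return (max(0, int(y0 - margin * h)), min(H, int(y1 + margin * h)),
            max(0, int(x0 - margin * w)), min(W, int(x1 + margin * w)))


def amodal_inpaint_once(image: Image.Image, visible_mask: np.ndarray,
                        occluder_mask: np.ndarray, category: str) -> Image.Image:
    """Regenerate the occluder pixels inside an expanded crop, conditioned on
    the object's category name; the surrounding context stays fixed, which
    steers the model toward completing the object rather than a new scene."""
    y0, y1, x0, x1 = expand_box(visible_mask)
    crop = image.crop((x0, y0, x1, y1)).resize((512, 512))
    hole = (occluder_mask[y0:y1, x0:x1] > 0).astype(np.uint8) * 255
    hole_img = Image.fromarray(hole).resize((512, 512))

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
    ).to("cuda")
    out = pipe(prompt=f"a complete {category}", image=crop,
               mask_image=hole_img).images[0]
    return out
```

In the full method described by the abstract, this step would be repeated to progressively grow the occluded object, the extra background would be trimmed, and a separate judgment would decide whether the completion succeeded; those components are omitted from the sketch above.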
Related Material

[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Xu_2024_CVPR,
    author    = {Xu, Katherine and Zhang, Lingzhi and Shi, Jianbo},
    title     = {Amodal Completion via Progressive Mixed Context Diffusion},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {9099-9109}
}