In-Context Matting
Abstract
We introduce in-context matting, a novel task setting of image matting. Given a reference image of a certain foreground and guided priors such as points, scribbles, and masks, in-context matting enables automatic alpha estimation on a batch of target images of the same foreground category, without additional auxiliary input. This setting marries the good performance of auxiliary input-based matting with the ease of use of automatic matting, striking a good trade-off between customization and automation. To overcome the key challenge of accurate foreground matching, we introduce IconMatting, an in-context matting model built upon a pre-trained text-to-image diffusion model. Conditioned on inter- and intra-similarity matching, IconMatting can make full use of the reference context to generate accurate target alpha mattes. To benchmark the task, we also introduce a novel testing dataset, ICM-57, covering 57 groups of real-world images. Quantitative and qualitative results on the ICM-57 testing set show that IconMatting rivals the accuracy of trimap-based matting while retaining the automation level akin to automatic matting. Code is available at https://github.com/tiny-smart/in-context-matting.
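The inter-/intra-similarity matching mentioned above can be illustrated with a minimal sketch. Assuming per-pixel feature maps already extracted from the diffusion backbone and a self-attention map of the target image, the code below computes a coarse inter-image similarity map from a reference foreground prototype, then refines it with intra-image self-attention. All function names, tensor shapes, and steps here are illustrative assumptions, not the authors' implementation; see the linked repository for the actual model.

```python
import torch
import torch.nn.functional as F

def inter_similarity_map(ref_feats, ref_mask, tgt_feats):
    """Match a reference foreground prototype against target features.

    ref_feats: (C, H, W) backbone features of the reference image
    ref_mask:  (H, W)    binary foreground mask of the reference
    tgt_feats: (C, H, W) backbone features of the target image
    Returns an (H, W) cosine-similarity map, clamped to [0, 1].
    """
    C, H, W = tgt_feats.shape
    # Pool reference features over the masked foreground into one prototype.
    fg = ref_feats.flatten(1)[:, ref_mask.flatten() > 0]  # (C, N_fg)
    proto = F.normalize(fg.mean(dim=1), dim=0)            # (C,)
    # Cosine similarity between the prototype and every target location.
    tgt = F.normalize(tgt_feats.flatten(1), dim=0)        # (C, H*W)
    return (proto @ tgt).view(H, W).clamp(0, 1)

def intra_similarity_refine(sim, self_attn):
    """Propagate the coarse map with the target's own self-attention,
    so pixels attending to confident foreground regions get boosted.

    sim:       (H, W)     inter-image similarity map
    self_attn: (H*W, H*W) row-normalized self-attention of the target
    """
    H, W = sim.shape
    return (self_attn @ sim.flatten()).view(H, W)

# Toy usage with random tensors (stand-ins for real diffusion features).
C, H, W = 64, 32, 32
ref, tgt = torch.randn(C, H, W), torch.randn(C, H, W)
mask = torch.zeros(H, W); mask[8:24, 8:24] = 1
attn = torch.softmax(torch.randn(H * W, H * W), dim=-1)
guidance = intra_similarity_refine(inter_similarity_map(ref, mask, tgt), attn)
print(guidance.shape)  # torch.Size([32, 32])
```

In this sketch, the resulting guidance map would stand in for the auxiliary input (trimap, scribbles) that conventional matting models require, which is what lets the batch of target images be processed without further user input.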
Related Material
[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Guo_2024_CVPR,
    author    = {Guo, He and Ye, Zixuan and Cao, Zhiguo and Lu, Hao},
    title     = {In-Context Matting},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {3711-3720}
}