[pdf]
[arXiv]
[bibtex]
@InProceedings{Li_2024_CVPR,
    author    = {Li, Jiachen and Jain, Jitesh and Shi, Humphrey},
    title     = {Matting Anything},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {1775-1785}
}
Matting Anything
Abstract
In this paper, we propose the Matting Anything Model (MAM), an efficient and versatile framework for estimating the alpha matte of any instance in an image with flexible and interactive visual or linguistic user-prompt guidance. MAM offers several significant advantages over previous specialized image matting networks: (i) MAM is capable of dealing with various types of image matting, including semantic, instance, and referring image matting, with only a single model; (ii) MAM leverages the feature maps from the Segment Anything Model (SAM) and adopts a lightweight Mask-to-Matte (M2M) module, with only 2.7 million trainable parameters, to predict the alpha matte through iterative refinement; (iii) by incorporating SAM, MAM simplifies the user intervention required for interactive image matting from the trimap to a box, point, or text prompt. We evaluate the performance of MAM on various image matting benchmarks, and the experimental results demonstrate that MAM achieves performance comparable to state-of-the-art specialized image matting models under different metrics on each benchmark. Overall, MAM shows superior generalization ability and can effectively handle various image matting tasks with fewer parameters, making it a practical solution for unified image matting.
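To make the task concrete: an alpha matte is defined by the compositing identity I = αF + (1 − α)B, where α is a per-pixel opacity in [0, 1] blending a foreground F over a background B; models such as MAM estimate α from the observed image I. The sketch below (illustrative only, not code from the paper) shows this compositing identity with NumPy:

```python
import numpy as np

def composite(alpha, foreground, background):
    """Blend foreground over background with a per-pixel alpha matte.

    alpha has shape (H, W); foreground/background have shape (H, W, 3).
    Implements I = alpha * F + (1 - alpha) * B per pixel and channel.
    """
    alpha = alpha[..., None]  # broadcast the H x W matte over RGB channels
    return alpha * foreground + (1.0 - alpha) * background

# Toy 2 x 2 example: alpha = 1 keeps the foreground, alpha = 0 the background,
# and fractional values (e.g. hair, fur edges) blend the two.
alpha = np.array([[1.0, 0.0], [0.5, 0.25]])
fg = np.ones((2, 2, 3))   # white foreground
bg = np.zeros((2, 2, 3))  # black background
img = composite(alpha, fg, bg)
print(img[:, :, 0])  # with white-over-black, each channel equals the matte
```

Matting methods invert this relation, recovering the fractional α values; MAM does so from SAM feature maps and a user prompt rather than from a trimap.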