OMG-Seg: Is One Model Good Enough For All Segmentation?

Xiangtai Li, Haobo Yuan, Wei Li, Henghui Ding, Size Wu, Wenwei Zhang, Yining Li, Kai Chen, Chen Change Loy; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 27948-27959

Abstract


In this work, we address various segmentation tasks, each traditionally tackled by distinct or partially unified models. We propose OMG-Seg, One Model that is Good enough to efficiently and effectively handle all the segmentation tasks, including image semantic, instance, and panoptic segmentation, as well as their video counterparts, open-vocabulary settings, prompt-driven interactive segmentation like SAM, and video object segmentation. To our knowledge, this is the first model to handle all these tasks in one model and achieve satisfactory performance. We show that OMG-Seg, a transformer-based encoder-decoder architecture with task-specific queries and outputs, can support over ten distinct segmentation tasks while significantly reducing computational and parameter overhead across various tasks and datasets. We rigorously evaluate the inter-task influences and correlations during co-training. Code and models are available at https://github.com/lxtGH/OMG-Seg.
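To make the shared-decoder idea in the abstract concrete, below is a minimal PyTorch-style sketch of a query-based encoder-decoder segmentation head. It is illustrative only and not the authors' OMG-Seg implementation: the class name, dimensions, and the forward signature (including the optional prompt_queries argument used to mimic SAM-like interactive mode) are assumptions. The core pattern it shows is the one the abstract describes: a single transformer decoder consumes a set of queries, and each query produces a class prediction plus a mask embedding that is dotted against per-pixel features, so the same head can serve image, video, and prompt-driven tasks.

# Minimal sketch of a unified query-based segmentation decoder.
# Illustrative only; names and shapes are assumptions, not the
# authors' actual OMG-Seg code.
import torch
import torch.nn as nn

class UnifiedSegDecoder(nn.Module):
    def __init__(self, dim=256, num_queries=100, num_classes=133, depth=6):
        super().__init__()
        # Learned object queries shared across all segmentation tasks.
        self.object_queries = nn.Embedding(num_queries, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=depth)
        self.class_head = nn.Linear(dim, num_classes + 1)  # +1 for "no object"
        self.mask_head = nn.Linear(dim, dim)               # mask embedding

    def forward(self, feats, pixel_embed, prompt_queries=None):
        # feats: (B, HW, dim) flattened encoder features
        # pixel_embed: (B, dim, H, W) per-pixel embeddings for the mask dot-product
        B = feats.size(0)
        queries = self.object_queries.weight.unsqueeze(0).expand(B, -1, -1)
        if prompt_queries is not None:
            # Interactive (SAM-like) mode: append prompt-derived queries,
            # e.g. embeddings of user clicks or boxes.
            queries = torch.cat([queries, prompt_queries], dim=1)
        q = self.decoder(queries, feats)
        logits = self.class_head(q)                         # (B, Q, C+1)
        mask_embed = self.mask_head(q)                      # (B, Q, dim)
        masks = torch.einsum("bqd,bdhw->bqhw", mask_embed, pixel_embed)
        return logits, masks

Under this sketch, task-specific behavior comes only from which queries are fed in and how the outputs are decoded (per-frame for images, tracked across frames for video, prompt-conditioned for interactive use), which is what lets one set of weights cover many tasks.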

Related Material


BibTeX
@InProceedings{Li_2024_CVPR,
    author    = {Li, Xiangtai and Yuan, Haobo and Li, Wei and Ding, Henghui and Wu, Size and Zhang, Wenwei and Li, Yining and Chen, Kai and Loy, Chen Change},
    title     = {OMG-Seg: Is One Model Good Enough For All Segmentation?},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {27948-27959}
}