Shatter and Gather: Learning Referring Image Segmentation with Text Supervision

Dongwon Kim, Namyup Kim, Cuiling Lan, Suha Kwak; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 15547-15557

Abstract


Referring image segmentation, the task of segmenting arbitrary entities described in free-form text, opens up a variety of vision applications. However, manually labeling training data for this task is prohibitively costly, leading to a lack of labeled data. We address this issue with a weakly supervised learning approach that uses the text descriptions of training images as the only source of supervision. To this end, we first present a new model that discovers semantic entities in an input image and then combines those entities relevant to the text query to predict the mask of the referent. We also present a new loss function that allows the model to be trained without any further supervision. Our method was evaluated on four public benchmarks for referring image segmentation, where it clearly outperformed the existing method for the same task, as well as recent open-vocabulary segmentation models, on all benchmarks.
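To make the two-stage idea in the abstract concrete, below is a minimal, hypothetical sketch of the "shatter and gather" pipeline: an entity-discovery step that softly partitions image features into K candidate entities ("shatter"), followed by a text-conditioned step that scores each entity against the query and merges their spatial assignments into a referent mask ("gather"). The module names, dimensions, and the use of slot-style attention and a dot-product relevance score here are illustrative assumptions, not the authors' exact architecture or loss.

```python
# Illustrative sketch only; not the paper's actual implementation.
import torch
import torch.nn as nn


class ShatterGatherSketch(nn.Module):
    def __init__(self, dim=256, num_entities=8):
        super().__init__()
        # Learned entity queries ("slots") that carve the image into entities.
        self.entity_queries = nn.Parameter(torch.randn(num_entities, dim))
        self.to_kv = nn.Linear(dim, dim * 2)
        self.text_proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, img_feats, text_feat):
        # img_feats: (B, HW, D) flattened visual features from a backbone.
        # text_feat: (B, D) sentence embedding of the referring expression.
        B, HW, D = img_feats.shape
        k, v = self.to_kv(img_feats).chunk(2, dim=-1)

        # "Shatter": entity queries attend over spatial locations; softmax over
        # the entity axis makes pixels compete for entities, yielding a soft
        # assignment of each pixel to one of the K entities.
        q = self.entity_queries.unsqueeze(0).expand(B, -1, -1)            # (B, K, D)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=1)   # (B, K, HW)
        entity_feats = attn @ v                                           # (B, K, D)

        # "Gather": score each entity descriptor against the text query, then
        # combine the entities' spatial assignments, weighted by relevance,
        # into a single soft mask for the referent.
        t = self.text_proj(text_feat).unsqueeze(1)                        # (B, 1, D)
        relevance = torch.softmax((entity_feats * t).sum(-1) * self.scale, dim=-1)  # (B, K)
        mask = (relevance.unsqueeze(-1) * attn).sum(1)                    # (B, HW)
        return mask, attn


# Usage: a 16x16 feature map (HW=256) and one text embedding per image.
model = ShatterGatherSketch()
mask, entity_masks = model(torch.randn(2, 256, 256), torch.randn(2, 256))
print(mask.shape, entity_masks.shape)  # torch.Size([2, 256]) torch.Size([2, 8, 256])
```

Because only the text description supervises training, a setup like this must learn the entity decomposition and the text-entity matching jointly; the paper's proposed loss is what enables that without mask labels.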

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Kim_2023_ICCV,
    author    = {Kim, Dongwon and Kim, Namyup and Lan, Cuiling and Kwak, Suha},
    title     = {Shatter and Gather: Learning Referring Image Segmentation with Text Supervision},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {15547-15557}
}