Image-to-Image Matching via Foundation Models: A New Perspective for Open-Vocabulary Semantic Segmentation

Yuan Wang, Rui Sun, Naisong Luo, Yuwen Pan, Tianzhu Zhang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 3952-3963

Abstract


Open-vocabulary semantic segmentation (OVS) aims to segment images of arbitrary categories specified by class labels or captions. However, most previous best-performing methods, whether pixel grouping methods or region recognition methods, suffer from false matches between image features and category labels. We attribute this to the natural gap between textual features and visual features. In this work, we rethink how to mitigate false matches from the perspective of image-to-image matching and propose a novel relation-aware intra-modal matching (RIM) framework for OVS based on visual foundation models. RIM achieves robust region classification by first constructing diverse image-modal reference features and then matching them with region features based on relation-aware ranking distribution. The proposed RIM enjoys several merits. First, the intra-modal reference features are better aligned, circumventing potential ambiguities that may arise in cross-modal matching. Second, the ranking-based matching process harnesses the structure information implicit in the inter-class relationships, making it more robust than comparing individually. Extensive experiments on three benchmarks demonstrate that RIM outperforms previous state-of-the-art methods by large margins, obtaining a lead of more than 10% in mIoU on the PASCAL VOC benchmark.
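To make the matching idea concrete, the sketch below classifies region features against image-modal reference features by comparing relation-aware ranking distributions (softmax over similarities to all class prototypes) instead of individual cosine scores. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the names (region_feats, ref_feats), the temperature tau, and the KL-based comparison are all illustrative choices.

    # Minimal sketch of relation-aware intra-modal matching (illustrative only).
    import numpy as np

    def l2_normalize(x, axis=-1, eps=1e-8):
        return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

    def rank_distribution(feats, prototypes, tau=0.05):
        # Softmax over cosine similarities to all class prototypes: encodes each
        # feature by its *relation* to every class, not a single best-match score.
        sims = l2_normalize(feats) @ l2_normalize(prototypes).T  # (N, C)
        logits = sims / tau
        logits -= logits.max(axis=-1, keepdims=True)
        p = np.exp(logits)
        return p / p.sum(axis=-1, keepdims=True)

    def classify_regions(region_feats, ref_feats):
        # Assign each region the class whose image-modal references have the
        # closest ranking distribution (smallest KL divergence), an assumed
        # stand-in for the paper's ranking-based matching.
        C, K, D = ref_feats.shape
        protos = ref_feats.mean(axis=1)                           # (C, D)
        p_region = rank_distribution(region_feats, protos)        # (R, C)
        p_ref = rank_distribution(ref_feats.reshape(-1, D), protos).reshape(C, K, C)
        # KL(region || reference), averaged over the K references of each class.
        kl = (p_region[:, None, None, :]
              * (np.log(p_region[:, None, None, :] + 1e-8)
                 - np.log(p_ref[None] + 1e-8))).sum(-1).mean(-1)  # (R, C)
        return kl.argmin(axis=-1)

    # Toy usage: 3 classes, 5 references each, 16-dim features, 4 regions.
    rng = np.random.default_rng(0)
    refs = rng.normal(size=(3, 5, 16))
    regions = refs[:, 0][np.array([0, 1, 2, 1])] + 0.1 * rng.normal(size=(4, 16))
    print(classify_regions(regions, refs))  # likely [0 1 2 1]

Because every feature is summarized by its full similarity profile over the classes, two regions are matched only when they relate to *all* classes in a similar way, which is what makes the comparison more robust than matching each pair of features individually.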

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Wang_2024_CVPR,
    author    = {Wang, Yuan and Sun, Rui and Luo, Naisong and Pan, Yuwen and Zhang, Tianzhu},
    title     = {Image-to-Image Matching via Foundation Models: A New Perspective for Open-Vocabulary Semantic Segmentation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {3952-3963}
}