Grounding Everything: Emerging Localization Properties in Vision-Language Transformers

Walid Bousselham, Felix Petersen, Vittorio Ferrari, Hilde Kuehne; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 3828-3837

Abstract


Vision-language foundation models have shown remarkable performance in various zero-shot settings such as image retrieval, classification, or captioning. But so far, those models seem to fall behind when it comes to zero-shot localization of referential expressions and objects in images. As a result, they need to be fine-tuned for this task. In this paper, we show that pretrained vision-language (VL) models allow for zero-shot open-vocabulary object localization without any fine-tuning. To leverage those capabilities, we propose a Grounding Everything Module (GEM) that generalizes the idea of value-value attention introduced by CLIPSurgery to a self-self attention path. We show that the concept of self-self attention corresponds to clustering, thus enforcing groups of tokens arising from the same object to be similar while preserving the alignment with the language space. To further guide the group formation, we propose a set of regularizations that allows the model to generalize across datasets and backbones. We evaluate the proposed GEM framework on various benchmark tasks and datasets for semantic segmentation. GEM not only outperforms other training-free open-vocabulary localization methods, but also achieves state-of-the-art results on the recently proposed OpenImagesV7 large-scale segmentation benchmark. Code is available at https://github.com/WalBouss/GEM.
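To give a rough sense of the self-self attention idea described above, the following is a minimal, illustrative sketch, not the authors' released implementation. The projection matrices, the temperature `tau`, the L2 normalization, and the simple averaging of the q-q, k-k, and v-v paths are assumptions made here for clarity; see the linked repository for the actual GEM code.

```python
# Minimal sketch of a self-self attention step over a CLIP-like ViT's tokens.
# NOT the authors' implementation; shapes, tau, and the ensemble are assumptions.
import torch
import torch.nn.functional as F

def self_self_attention(x, w_q, w_k, w_v, tau=5.0):
    """x: (batch, tokens, dim) token features; w_q, w_k, w_v: (dim, dim) projections."""
    v = x @ w_v                                  # value projection, reused as the content to aggregate
    outputs = []
    for w in (w_q, w_k, w_v):                    # q-q, k-k, and v-v paths
        p = F.normalize(x @ w, dim=-1)           # project and L2-normalize tokens
        attn = torch.softmax(tau * p @ p.transpose(-2, -1), dim=-1)  # self-self similarity
        outputs.append(attn @ v)                 # aggregate values with self-self weights
    return torch.stack(outputs).mean(dim=0)      # average the three paths (an assumption)

# Usage sketch with ViT-B/16-like token shapes (CLS + 196 patch tokens).
x = torch.randn(1, 197, 768)
w_q, w_k, w_v = (torch.randn(768, 768) * 0.02 for _ in range(3))
out = self_self_attention(x, w_q, w_k, w_v)
print(out.shape)  # torch.Size([1, 197, 768])
```

Because each token attends to tokens that are similar to itself under every projection, repeated application pulls tokens from the same object toward a common representative, which is the clustering behavior the abstract refers to.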

Related Material


@InProceedings{Bousselham_2024_CVPR,
    author    = {Bousselham, Walid and Petersen, Felix and Ferrari, Vittorio and Kuehne, Hilde},
    title     = {Grounding Everything: Emerging Localization Properties in Vision-Language Transformers},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {3828-3837}
}