Compositional Image-Text Matching and Retrieval by Grounding Entities

Madhukar Reddy Vongala, Saurabh Srivastava, Jana Kosecka; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2025, pp. 241-250

Abstract


Vision-language pretraining on large datasets of image-text pairs is one of the main building blocks of current Vision-Language Models. With additional training, these models excel in various downstream tasks, including visual question answering, image captioning, and visual commonsense reasoning. However, a notable weakness of pretrained models like CLIP is their inability to perform entity grounding and compositional image-text matching. In this work we propose a novel learning-free, zero-shot augmentation of CLIP embeddings that has favorable compositional properties. We compute separate embeddings of sub-images of object entities and relations that are localized by state-of-the-art open-vocabulary detectors, and dynamically adjust the baseline global image embedding. The resulting embedding is then used for similarity computation with the text embedding, yielding an average 1.5% improvement in image-text matching accuracy on the Visual Genome and SVO Probes datasets. Notably, the enhanced embeddings demonstrate superior retrieval performance, achieving significant gains on the Flickr30K and MS-COCO retrieval benchmarks and improving the state-of-the-art Recall@1 by 12% and 0.4%, respectively.
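
As a rough illustration of the idea described in the abstract, the sketch below scores an image-caption pair by combining CLIP's global image embedding with embeddings of entity crops. It is a minimal sketch, not the authors' released code: it assumes the Hugging Face transformers CLIP API, assumes detector boxes are supplied externally (e.g., by an open-vocabulary detector such as OWL-ViT run on entities parsed from the caption), and uses a fixed mixing weight alpha and mean pooling of crop embeddings as illustrative stand-ins for the paper's dynamic adjustment of the global embedding.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(images):
    """L2-normalized CLIP embeddings for a list of PIL images."""
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def embed_text(caption):
    """L2-normalized CLIP embedding for a single caption."""
    inputs = processor(text=[caption], return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def compositional_score(image, boxes, caption, alpha=0.5):
    """Score a caption against a global image embedding adjusted by entity crops.

    boxes: (left, upper, right, lower) pixel boxes for grounded entities/relations,
           assumed to come from an open-vocabulary detector.
    alpha: assumed fixed global-vs-local weight (the paper adjusts dynamically).
    """
    global_emb = embed_images([image])            # 1 x d baseline embedding
    crops = [image.crop(b) for b in boxes]        # sub-images of grounded entities
    local_emb = embed_images(crops).mean(dim=0, keepdim=True)  # pooled crop embeddings
    combined = alpha * global_emb + (1 - alpha) * local_emb
    combined = combined / combined.norm(dim=-1, keepdim=True)
    return (combined @ embed_text(caption).T).item()  # cosine similarity

# Hypothetical usage with one detector box:
# image = Image.open("example.jpg")
# score = compositional_score(image, [(10, 20, 120, 180)], "a dog catching a frisbee")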

Related Material


BibTeX

@InProceedings{Vongala_2025_CVPR,
  author    = {Vongala, Madhukar Reddy and Srivastava, Saurabh and Kosecka, Jana},
  title     = {Compositional Image-Text Matching and Retrieval by Grounding Entities},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2025},
  pages     = {241-250}
}