Investigating Mechanisms for In-Context Vision Language Binding

Darshana Saravanan, Makarand Tapaswi, Vineet Gandhi; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops, 2025, pp. 4852-4856

Abstract


To understand a prompt, Vision-Language models (VLMs) must perceive the image, comprehend the text, and build associations within and across both modalities. For instance, given an 'image of a red toy car', the model should associate this image with phrases like 'car', 'red toy', 'red object', etc. Feng and Steinhardt propose the Binding ID mechanism in LLMs, suggesting that an entity and its corresponding attribute tokens share a Binding ID in the model activations. We investigate this for image-text binding in VLMs using a synthetic dataset and task that requires models to associate 3D objects in an image with their descriptions in the text. Our experiments demonstrate that VLMs assign a distinct Binding ID to an object's image tokens and its textual references, enabling in-context association.
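
To make the Binding ID picture concrete, below is a minimal, purely illustrative NumPy sketch of the additive binding-vector view and the kind of swap intervention used to test it causally. All names, shapes, and vectors here are hypothetical stand-ins (e.g. `resid` for cached residual-stream activations at image and text token positions); this is not the paper's actual code or any specific model's API.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

# Hypothetical "content" activations for two objects, each appearing as
# image tokens (img_*) and as a textual description (txt_*).
content = {k: rng.normal(size=d) for k in ["img_A", "img_B", "txt_A", "txt_B"]}

# Under the Binding ID hypothesis, bound positions share a binding vector:
# pair 0 binds object A's image and text tokens, pair 1 binds object B's.
binding = {"img_A": 0, "txt_A": 0, "img_B": 1, "txt_B": 1}
b = [rng.normal(size=d), rng.normal(size=d)]

# Residual-stream activation = content + shared binding vector (toy model).
resid = {k: content[k] + b[i] for k, i in binding.items()}

def swap_image_bindings(resid, b):
    """Subtract each image token's binding vector and add the other one.

    If Binding IDs are causal, a model patched this way should answer
    downstream queries as if object A's image were described by object B's
    text, and vice versa.
    """
    patched = dict(resid)
    patched["img_A"] = resid["img_A"] - b[0] + b[1]
    patched["img_B"] = resid["img_B"] - b[1] + b[0]
    return patched

patched = swap_image_bindings(resid, b)

# Sanity check: after the intervention, img_A carries binding vector b[1],
# i.e. the same Binding ID as txt_B.
assert np.allclose(patched["img_A"] - content["img_A"], b[1])
print("img_A now shares its Binding ID with txt_B")
```

In the actual experiments such swaps would be performed on a VLM's internal activations rather than on synthetic vectors; the sketch only shows the bookkeeping of the intervention.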

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Saravanan_2025_CVPR,
    author    = {Saravanan, Darshana and Tapaswi, Makarand and Gandhi, Vineet},
    title     = {Investigating Mechanisms for In-Context Vision Language Binding},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {4852-4856}
}