good4cir: Generating Detailed Synthetic Captions for Composed Image Retrieval

Pranavi Kolouju, Eric Xing, Robert Pless, Nathan Jacobs, Abby Stylianou; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops, 2025, pp. 3148-3157

Abstract

Composed image retrieval (CIR) enables users to search for images using a reference image combined with textual modifications. Recent advances in vision-language models have improved CIR, but dataset limitations remain a barrier. Existing datasets often rely on simplistic, ambiguous, or insufficient manual annotations, hindering fine-grained retrieval. We introduce good4cir, a structured pipeline that leverages vision-language models to generate high-quality synthetic annotations. Our method involves three stages: (1) extracting fine-grained object descriptions from query images, (2) generating comparable descriptions for target images, and (3) synthesizing textual instructions that capture meaningful transformations between images. This reduces hallucination, enhances modification diversity, and ensures object-level consistency. Applying our method improves existing datasets and enables the creation of new datasets across diverse domains. Results demonstrate improved retrieval accuracy for CIR models trained on our pipeline-generated datasets. We release our dataset construction framework to support further research in CIR and multi-modal retrieval.
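To make the three-stage pipeline concrete, the sketch below shows one minimal way it could be wired together. This is an illustration under stated assumptions, not the authors' released implementation: it assumes the vision-language model is reached through the OpenAI chat-completions API, and the model name ("gpt-4o"), the prompts, the file names (query.jpg, target.jpg), and the helpers encode_image and ask_vlm are all hypothetical placeholders.

```python
# Minimal sketch of the three-stage captioning pipeline described in the
# abstract. Assumptions (not from the paper): the VLM is the OpenAI
# chat-completions API; model name, prompts, and file names are placeholders.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def encode_image(path: str) -> str:
    """Read an image file and return it as a base64 data URL."""
    with open(path, "rb") as f:
        data = base64.b64encode(f.read()).decode("utf-8")
    return f"data:image/jpeg;base64,{data}"

def ask_vlm(prompt: str, image_paths: list[str]) -> str:
    """Send a text prompt plus zero or more images to the VLM."""
    content = [{"type": "text", "text": prompt}]
    for path in image_paths:
        content.append({"type": "image_url",
                        "image_url": {"url": encode_image(path)}})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content

# Stage 1: fine-grained object descriptions for the query image.
query_objects = ask_vlm(
    "List each distinct object in this image with a short, "
    "fine-grained description (one per line).",
    ["query.jpg"],
)

# Stage 2: comparable descriptions for the target image, conditioned on
# the query-side object list so the two descriptions stay aligned.
target_objects = ask_vlm(
    "Describe the objects in this image at the same level of detail, "
    f"reusing the object vocabulary below where possible:\n{query_objects}",
    ["target.jpg"],
)

# Stage 3: synthesize a modification instruction capturing the
# object-level differences between the two descriptions.
instruction = ask_vlm(
    "Given these two object lists, write a concise instruction that "
    "transforms the first scene into the second:\n"
    f"QUERY:\n{query_objects}\nTARGET:\n{target_objects}",
    [],
)
print(instruction)
```

The design point mirrored here is that stage 2 is conditioned on the stage-1 object list, which keeps the two descriptions object-aligned and gives stage 3 a grounded basis for the modification text rather than a free-form (and more hallucination-prone) image-to-image comparison.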

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Kolouju_2025_CVPR,
    author    = {Kolouju, Pranavi and Xing, Eric and Pless, Robert and Jacobs, Nathan and Stylianou, Abby},
    title     = {good4cir: Generating Detailed Synthetic Captions for Composed Image Retrieval},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {3148-3157}
}