Exposing and Mitigating Spurious Correlations for Cross-Modal Retrieval

Jae Myung Kim, A. Sophia Koepke, Cordelia Schmid, Zeynep Akata; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 2585-2595

Abstract


Cross-modal retrieval methods are the preferred tool to search databases for the text that best matches a query image and vice versa. However, image-text retrieval models commonly learn to memorize spurious correlations in the training data, such as frequent object co-occurrences, instead of grounding their predictions in the actual image content. For image-text retrieval, this manifests in retrieved sentences that mention objects that are not present in the query image. In this work, we introduce ODmAP@k, an object decorrelation metric that measures a model's robustness to spurious correlations in the training data. We use automatic image and text manipulations to control the presence of such object correlations in designated test data. Additionally, our data synthesis technique is used to tackle model biases caused by spurious correlations between semantically unrelated objects in the training data. We apply our proposed pipeline, which involves finetuning image-text retrieval frameworks on carefully designed synthetic data, to three state-of-the-art image-text retrieval models. This results in significant improvements for all three models, both in terms of standard retrieval performance and in terms of our object decorrelation metric. The code is available at https://github.com/ExplainableML/Spurious_CM_Retrieval.
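The abstract does not spell out how ODmAP@k is computed; the paper and repository define it precisely. As a purely illustrative sketch under the assumption that the metric rewards top-k retrieved captions whose mentioned objects are all actually present in the query image, one might score retrieval results roughly as follows (all function and variable names here are hypothetical, not the authors' implementation):

```python
# Illustrative sketch only: a simplified, hypothetical variant of an
# object-decorrelation score. The official ODmAP@k is defined in the paper
# and its code release; this is not that implementation.
from typing import Dict, List, Set


def caption_mentions(caption: str, vocabulary: Set[str]) -> Set[str]:
    """Naively extract object words from a caption by vocabulary lookup."""
    tokens = {tok.strip(".,").lower() for tok in caption.split()}
    return tokens & vocabulary


def odmap_at_k(
    retrieved: Dict[str, List[str]],   # query image id -> ranked retrieved captions
    gt_objects: Dict[str, Set[str]],   # query image id -> objects present in the image
    vocabulary: Set[str],              # object vocabulary (e.g. COCO categories)
    k: int = 5,
) -> float:
    """Average-precision-style score over the top-k captions: a caption counts
    as correct only if every object it mentions is present in the query image."""
    scores = []
    for img_id, captions in retrieved.items():
        present = gt_objects.get(img_id, set())
        hits, precision_sum = 0, 0.0
        for rank, cap in enumerate(captions[:k], start=1):
            mentioned = caption_mentions(cap, vocabulary)
            if mentioned and mentioned <= present:
                hits += 1
                precision_sum += hits / rank
        scores.append(precision_sum / max(hits, 1))
    return sum(scores) / max(len(scores), 1)


if __name__ == "__main__":
    vocab = {"dog", "frisbee", "cat", "sofa"}
    retrieved = {"img1": ["A dog catches a frisbee.", "A cat sleeps on a sofa."]}
    gt = {"img1": {"dog", "frisbee"}}
    print(f"Illustrative ODmAP@2: {odmap_at_k(retrieved, gt, vocab, k=2):.2f}")
```

In this toy example, the second retrieved caption mentions objects absent from the query image, so it does not contribute to the score; a model that retrieves captions based on co-occurrence biases rather than image content would be penalized accordingly.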

Related Material


@InProceedings{Kim_2023_CVPR,
    author    = {Kim, Jae Myung and Koepke, A. Sophia and Schmid, Cordelia and Akata, Zeynep},
    title     = {Exposing and Mitigating Spurious Correlations for Cross-Modal Retrieval},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {2585-2595}
}