SuS-X: Training-Free Name-Only Transfer of Vision-Language Models

Vishaal Udandarao, Ankush Gupta, Samuel Albanie; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 2725-2736

Abstract


Contrastive Language-Image Pre-training (CLIP) has emerged as a simple yet effective way to train large-scale vision-language models. CLIP demonstrates impressive zero-shot classification and retrieval performance on diverse downstream tasks. However, to leverage its full potential, fine-tuning still appears to be necessary. Fine-tuning the entire CLIP model can be resource-intensive and unstable. Moreover, recent methods that aim to circumvent this need for fine-tuning still require access to images from the target distribution. In this paper, we pursue a different approach and explore the regime of training-free "name-only transfer", in which the only knowledge we possess about the downstream task is the names of the target categories. We propose SuS-X, a novel method consisting of two key building blocks, "SuS" and "TIP-X", that requires neither intensive fine-tuning nor costly labelled data. SuS-X achieves state-of-the-art (SoTA) zero-shot classification results on 19 benchmark datasets. We further show the utility of TIP-X in the training-free few-shot setting, where we again achieve SoTA results over strong training-free baselines.
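For context, the name-only setting starts from CLIP's standard zero-shot classifier, which needs nothing beyond the category names: each name is embedded as a text prompt and the image is assigned to the most similar one. The sketch below illustrates that baseline only, not the SuS-X method itself; the `clip` package, the ViT-B/32 backbone, the prompt template, the image path, and the class list are illustrative assumptions rather than the paper's exact configuration.

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical class names; in the name-only regime these are all we know.
class_names = ["golden retriever", "tabby cat", "sports car"]
prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

# Query image path is a placeholder.
image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    text_features = model.encode_text(prompts)
    image_features = model.encode_image(image)

# Cosine similarity: L2-normalise both embeddings, then a dot product
# yields one score per class name.
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
logits = 100.0 * image_features @ text_features.T

print(class_names[logits.argmax(dim=-1).item()])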

Related Material


@InProceedings{Udandarao_2023_ICCV,
    author    = {Udandarao, Vishaal and Gupta, Ankush and Albanie, Samuel},
    title     = {SuS-X: Training-Free Name-Only Transfer of Vision-Language Models},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {2725-2736}
}