Anchor-based Robust Finetuning of Vision-Language Models
Abstract
We aim to finetune a vision-language model without hurting its out-of-distribution (OOD) generalization. We address two types of OOD generalization, i.e., i) domain shift, such as natural to sketch images, and ii) zero-shot capability to recognize categories that were not contained in the finetuning data. Arguably, the diminished OOD generalization after finetuning stems from the excessively simplified finetuning target, which provides only class information such as "a photo of a [CLASS]". This is distinct from the pretraining process of CLIP, where the text supervision is abundant and carries rich semantic information. We therefore propose to compensate the finetuning process with auxiliary supervision of rich semantics, which acts as anchors to preserve the OOD generalization. Specifically, two types of anchors are elaborated in our method: i) a text-compensated anchor, which uses the images from the finetuning set but enriches their text supervision with a pretrained captioner; and ii) an image-text-pair anchor, which is retrieved, according to the downstream task, from a dataset similar to CLIP's pretraining data and thus comes with original CLIP-style text of rich semantics. These anchors serve as auxiliary semantic information to maintain the original feature space of CLIP, thereby preserving its OOD generalization capability. Comprehensive experiments demonstrate that our method achieves in-distribution performance on par with conventional finetuning while attaining new state-of-the-art results on domain shift and zero-shot learning benchmarks.
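To make the abstract's idea concrete, the Python sketch below shows one way the anchor terms could be wired into a CLIP-style finetuning objective. It is a minimal illustration under assumptions (PyTorch, precomputed L2-normalized embeddings, a symmetric InfoNCE loss, illustrative loss weights), not the authors' released implementation; all function and argument names here are hypothetical.

import torch
import torch.nn.functional as F

def contrastive_loss(image_feat, text_feat, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings."""
    logits = image_feat @ text_feat.t() / temperature
    targets = torch.arange(image_feat.size(0), device=image_feat.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def class_ce_loss(img_feat, class_prompt_feat, labels, temperature=0.07):
    """Conventional finetuning signal: classify images against the
    "a photo of a [CLASS]" prompt embeddings of all finetune classes."""
    logits = img_feat @ class_prompt_feat.t() / temperature
    return F.cross_entropy(logits, labels)

def anchor_finetune_loss(
    img_feat,            # finetune images encoded by the (trainable) image encoder
    class_prompt_feat,   # prompt embeddings for all finetune classes
    labels,              # ground-truth class indices of img_feat
    caption_text_feat,   # captioner-generated descriptions of the same images
    anchor_img_feat,     # retrieved image-text-pair anchors: image side
    anchor_text_feat,    # retrieved image-text-pair anchors: text side
    lambda_cap=1.0,      # weights are illustrative, not the paper's values
    lambda_ret=1.0,
):
    # i) standard finetuning term on class-name prompts
    ce = class_ce_loss(img_feat, class_prompt_feat, labels)
    # ii) text-compensated anchor: richer text supervision for the same images
    cap = contrastive_loss(img_feat, caption_text_feat)
    # iii) image-text-pair anchor: retrieved pretraining-like pairs that keep
    #      the finetuned encoders close to CLIP's original feature space
    ret = contrastive_loss(anchor_img_feat, anchor_text_feat)
    return ce + lambda_cap * cap + lambda_ret * ret

In this reading, the caption term restores richer text supervision for the downstream images, while the retrieved-anchor term keeps the finetuned encoders aligned with CLIP's original image-text feature space, which is what the abstract credits for the preserved OOD generalization.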
Related Material
[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Han_2024_CVPR,
  author    = {Han, Jinwei and Lin, Zhiwen and Sun, Zhongyisun and Gao, Yingguo and Yan, Ke and Ding, Shouhong and Gao, Yuan and Xia, Gui-Song},
  title     = {Anchor-based Robust Finetuning of Vision-Language Models},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {26919-26928}
}