PromptSync: Bridging Domain Gaps in Vision-Language Models through Class-Aware Prototype Alignment and Discrimination
Abstract
The potential for zero-shot generalization in vision-language (V-L) models such as CLIP has spurred their widespread adoption for numerous downstream tasks. Previous methods have employed test-time prompt tuning to adapt the model to unseen domains, but they overlooked the issue of imbalanced class distributions. In this study, we explicitly address this problem through class-aware prototype alignment, weighted by the mean class probabilities obtained for the test sample and its filtered augmented views. Additionally, we ensure that these class probabilities are as accurate as possible by performing prototype discrimination using contrastive learning. The combination of the alignment and discriminative losses serves as a geometric regularizer, preventing the prompt representation from collapsing onto a single class and effectively bridging the distribution gap between the source and test domains. Our method, named PromptSync, synchronizes the prompts for each test sample on both the text and vision branches of the V-L model. In empirical evaluations on the domain generalization benchmark, our method outperforms the previous best methods by 2.33% in overall performance, by 1% in base-to-novel generalization, and by 2.84% in cross-dataset transfer tasks.
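To make the abstract's objective concrete, below is a minimal PyTorch sketch of a class-aware alignment term weighted by mean class probabilities, combined with a contrastive discrimination term. The function name, the InfoNCE-style formulation with pseudo-labels, and the temperature `tau` are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def prototype_losses(feats, probs, prototypes, tau=0.07):
    """Sketch of a class-aware alignment + discrimination objective.

    feats:      (N, D) L2-normalized features of the test sample and its
                confidence-filtered augmented views.
    probs:      (N, C) per-view class probabilities from the V-L model.
    prototypes: (C, D) L2-normalized class prototypes (e.g., text embeddings).
    """
    # Mean class probabilities over the test sample and its filtered views
    # act as per-class weights for the alignment term, counteracting
    # imbalanced class distributions.
    w = probs.mean(dim=0)                         # (C,)

    # Alignment: pull the view features toward each class prototype,
    # weighted by the mean class probability of that class.
    sim = feats @ prototypes.t()                  # (N, C) cosine similarities
    align_loss = -(w * sim.mean(dim=0)).sum()

    # Discrimination: a contrastive (InfoNCE-style) loss that sharpens the
    # class probabilities by treating each view's most likely prototype as
    # its positive and all other prototypes as negatives.
    logits = sim / tau
    targets = probs.argmax(dim=1)                 # pseudo-labels per view
    disc_loss = F.cross_entropy(logits, targets)

    # Their sum acts as the geometric regularizer described in the abstract:
    # the alignment term bridges the domain gap while the discrimination
    # term prevents collapse onto a single class.
    return align_loss + disc_loss
```

At test time, such a loss would be backpropagated only into the learnable prompt parameters on the text and vision branches for each test sample, in the spirit of test-time prompt tuning.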
Related Material
[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Khandelwal_2024_CVPR,
    author    = {Khandelwal, Anant},
    title     = {PromptSync: Bridging Domain Gaps in Vision-Language Models through Class-Aware Prototype Alignment and Discrimination},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {7819-7828}
}