OT-VP: Optimal Transport-Guided Visual Prompting for Test-Time Adaptation

Yunbei Zhang, Akshay Mehra, Jihun Hamm; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 1122-1132

Abstract


Vision Transformers (ViTs) have demonstrated remarkable capabilities in learning representations, but their performance is compromised when they are applied to unseen domains. Previous methods either engage in prompt learning during the training phase or modify model parameters at test time through entropy minimization. The former often overlooks unlabeled target data, while the latter does not fully address domain shifts. In this work, our approach, Optimal Transport-guided Test-Time Visual Prompting (OT-VP), handles these problems by leveraging prompt learning at test time to align the target and source domains without accessing the training process or altering pre-trained model parameters. This method involves learning a universal visual prompt for the target domain by optimizing the Optimal Transport distance. With only four learned prompt tokens, OT-VP exceeds state-of-the-art performance across three stylistic datasets (PACS, VLCS, and OfficeHome) and one corrupted dataset (ImageNet-C). Additionally, OT-VP operates efficiently in both memory and computation and is readily extensible to online settings. The code is available at https://github.com/zybeich/OT-VP.
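The abstract's core recipe admits a short illustrative sketch: freeze a pre-trained ViT, prepend a few learnable prompt tokens to each target image's token sequence, and update only those tokens to minimize an OT distance between prompted target features and stored source features. The code below is a minimal, hedged approximation rather than the paper's exact method: it assumes a timm-style VisionTransformer (attributes such as patch_embed, cls_token, pos_embed, blocks, norm), uses the geomloss Sinkhorn divergence as a differentiable stand-in for the OT objective, and assumes source_feats are CLS features precomputed on source data.

# Minimal sketch of test-time visual prompt learning with an OT objective.
# Assumptions (not from the paper): a frozen timm-style ViT, precomputed
# source CLS features, and geomloss's Sinkhorn divergence as the OT distance.
import torch
from geomloss import SamplesLoss  # pip install geomloss

def learn_target_prompts(vit, target_loader, source_feats,
                         num_prompts=4, steps=100, lr=1e-3, device="cuda"):
    vit.eval().to(device)
    for p in vit.parameters():                     # keep the pre-trained ViT frozen
        p.requires_grad_(False)

    dim = vit.embed_dim
    # Only these prompt tokens are optimized at test time.
    prompts = (0.02 * torch.randn(1, num_prompts, dim, device=device)).requires_grad_(True)

    ot_loss = SamplesLoss(loss="sinkhorn", p=2, blur=0.05)   # entropic OT distance
    opt = torch.optim.Adam([prompts], lr=lr)
    src = source_feats.to(device)                  # (N_s, dim) source CLS features

    for step, (x, _) in zip(range(steps), target_loader):    # target labels are unused
        x = x.to(device)
        tokens = vit.patch_embed(x)                           # (B, N, dim) patch tokens
        cls = vit.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + vit.pos_embed
        # Insert the learnable prompts after the CLS token (VPT-style).
        tokens = torch.cat([tokens[:, :1],
                            prompts.expand(x.size(0), -1, -1),
                            tokens[:, 1:]], dim=1)
        for blk in vit.blocks:
            tokens = blk(tokens)
        tgt = vit.norm(tokens)[:, 0]                          # prompted CLS features (B, dim)

        loss = ot_loss(tgt, src)                              # align target to source under OT
        opt.zero_grad()
        loss.backward()
        opt.step()

    return prompts.detach()

After adaptation, inference simply reuses the learned prompts: prepend them to every target image's tokens and read the frozen classification head's prediction; no model parameter is ever updated.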

Related Material


[bibtex]
@InProceedings{Zhang_2025_WACV,
    author    = {Zhang, Yunbei and Mehra, Akshay and Hamm, Jihun},
    title     = {OT-VP: Optimal Transport-Guided Visual Prompting for Test-Time Adaptation},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {1122-1132}
}