Disentangle Source and Target Knowledge for Continual Test-Time Adaptation

Tianyi Ma, Maoying Qiao; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 8013-8023

Abstract


The Continual Test-Time Adaptation (CoTTA) task is proposed to tackle the challenge of constant domain shifts during testing. Its goals are twofold: 1) to preserve knowledge from the source domain without access to source data, and 2) to effectively extract target knowledge using unlabeled target-domain data. Existing works primarily focus on either source or target knowledge, attempting to learn both in a mixed manner. We argue that this may harm both source knowledge preservation and target knowledge extraction. To this end, this paper proposes a Source and Target knowledge Disentangle Transformer (SoTa-DiT) built on the prompting mechanism. Specifically, in a vision transformer (ViT), we incorporate source and target prompts supervised by two groups of deliberately designed loss functions to learn source and target knowledge separately. The source prompt focuses on anti-source-forgetting by extracting and preserving knowledge from the source model, while the target prompt focuses on pro-target-extracting via contrastive learning on target data. With comprehensive evaluations across various datasets and different ViT backbones, we demonstrate that the dual-prompt architecture of SoTa-DiT is effective and that disentangling knowledge with the prompts benefits CoTTA. As a result, SoTa-DiT significantly improves image classification accuracy under the CoTTA setting.
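The dual-prompt idea can be illustrated with a minimal sketch: learnable source and target prompt tokens are prepended to the ViT token sequence so that each group can be supervised by its own losses. This is not the authors' implementation; all names, prompt lengths, and dimensions below are hypothetical, and the real method's loss functions and training details are omitted.

```python
import numpy as np

# Hypothetical sizes for illustration only.
embed_dim = 8        # ViT embedding width (assumed)
n_patches = 4        # number of image-patch tokens (assumed)
n_src, n_tgt = 2, 2  # lengths of the source / target prompts (assumed)

rng = np.random.default_rng(0)
patch_tokens = rng.standard_normal((n_patches, embed_dim))

# Two separate learnable prompt groups: in SoTa-DiT the source prompt is
# supervised toward preserving source-model knowledge, while the target
# prompt is trained with contrastive losses on target data.
source_prompt = rng.standard_normal((n_src, embed_dim))
target_prompt = rng.standard_normal((n_tgt, embed_dim))

# The transformer then consumes [source prompts; target prompts; patches],
# keeping the two knowledge streams in disjoint, separately supervised tokens.
tokens = np.concatenate([source_prompt, target_prompt, patch_tokens], axis=0)
print(tokens.shape)  # (n_src + n_tgt + n_patches, embed_dim)
```

The key design choice the abstract highlights is this separation: because the two prompt groups are distinct parameter sets with distinct loss groups, source-knowledge preservation and target-knowledge extraction do not interfere through shared parameters.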

Related Material


@InProceedings{Ma_2025_WACV,
  author    = {Ma, Tianyi and Qiao, Maoying},
  title     = {Disentangle Source and Target Knowledge for Continual Test-Time Adaptation},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {8013-8023}
}