Confidence-based Visual Dispersal for Few-shot Unsupervised Domain Adaptation

Yizhe Xiong, Hui Chen, Zijia Lin, Sicheng Zhao, Guiguang Ding; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 11621-11631


Unsupervised domain adaptation aims to transfer knowledge from a fully labeled source domain to an unlabeled target domain. However, in real-world scenarios, providing abundant labeled data even in the source domain can be infeasible due to the difficulty and high expense of annotation. To address this issue, recent works consider the Few-shot Unsupervised Domain Adaptation (FUDA) setting, where only a few source samples are labeled, and conduct knowledge transfer via self-supervised learning methods. Yet existing methods generally overlook that the sparse-label setting hinders learning reliable source knowledge for transfer. Additionally, the difference in learning difficulty among target samples is ignored, leaving hard target samples poorly classified. To tackle both deficiencies, in this paper, we propose a novel Confidence-based Visual Dispersal Transfer learning method (C-VisDiT) for FUDA. Specifically, C-VisDiT consists of a cross-domain visual dispersal strategy that transfers only high-confidence source knowledge for model adaptation, and an intra-domain visual dispersal strategy that guides the learning of hard target samples with easy ones. We conduct extensive experiments on the Office-31, Office-Home, VisDA-C, and DomainNet benchmark datasets, and the results demonstrate that the proposed C-VisDiT significantly outperforms state-of-the-art FUDA methods. Our code is available at
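The abstract does not specify how confidence is measured, but a common choice for this kind of sample selection is the maximum softmax probability of the model's prediction. The sketch below is a minimal, hypothetical illustration of confidence-based sample splitting (the threshold value and the `split_by_confidence` helper are assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def split_by_confidence(logits, threshold=0.9):
    """Split sample indices into high- and low-confidence sets, where
    confidence is taken to be the max softmax probability (an assumed
    criterion; the paper's actual measure may differ).
    High-confidence samples could then drive knowledge transfer, while
    low-confidence (hard) samples are guided by easy ones."""
    probs = softmax(logits)
    conf = probs.max(axis=1)
    high = np.where(conf >= threshold)[0]
    low = np.where(conf < threshold)[0]
    return high, low
```

For example, a sample whose logits strongly favor one class would land in the high-confidence set, while a sample with near-uniform logits would be treated as hard.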

Related Material

@InProceedings{Xiong_2023_ICCV,
    author    = {Xiong, Yizhe and Chen, Hui and Lin, Zijia and Zhao, Sicheng and Ding, Guiguang},
    title     = {Confidence-based Visual Dispersal for Few-shot Unsupervised Domain Adaptation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {11621-11631}
}