Adaptive Adversarial Network for Source-Free Domain Adaptation

Haifeng Xia, Handong Zhao, Zhengming Ding; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 9010-9019

Abstract


Unsupervised Domain Adaptation addresses knowledge transfer from a well-annotated source domain to unlabeled target instances. In many practical applications, however, the source domain is not accessible due to data privacy concerns or the limited storage of small devices. This scenario, defined as Source-Free Domain Adaptation, allows access only to the well-trained source model for target learning. To address the challenge of source data unavailability, we develop an Adaptive Adversarial Network (A2Net) with three components. Specifically, the first, Adaptive Adversarial Inference, seeks a target-specific classifier to improve the recognition of samples that the provided source-specific classifier struggles to identify. Then, the Contrastive Category-wise Matching module exploits the positive relation between pairs of target images to enforce the compactness of the subspace for each category. Thirdly, Self-Supervised Rotation enables the model to learn additional semantics from the target images themselves. Extensive experiments on popular cross-domain benchmarks verify the effectiveness of our proposed model in solving the adaptation task without any source data.
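
The abstract describes three modules built on top of a source-pretrained model. The snippet below is a minimal, hypothetical sketch of how such a model could be organized: a backbone initialized from the source model, the frozen source-specific classifier, an additional target-specific classifier, and a rotation-prediction head for the self-supervised task. All layer sizes, class names, and helper functions (`A2NetSketch`, `rotation_pretext`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class A2NetSketch(nn.Module):
    """Hypothetical sketch of the three A2Net components from the abstract.
    Layer shapes and names are placeholders, not the paper's architecture."""

    def __init__(self, feature_dim=256, num_classes=31):
        super().__init__()
        # Feature extractor, initialized from the well-trained source model
        # in the source-free setting; stubbed here as a tiny CNN.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feature_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Source-specific classifier kept from the provided source model.
        self.source_classifier = nn.Linear(feature_dim, num_classes)
        # Target-specific classifier intended to handle samples the source
        # classifier struggles to identify.
        self.target_classifier = nn.Linear(feature_dim, num_classes)
        # Self-supervised head predicting one of four rotations
        # (0, 90, 180, 270 degrees) to mine extra semantics from target images.
        self.rotation_head = nn.Linear(feature_dim, 4)

    def forward(self, x):
        f = self.backbone(x)
        return self.source_classifier(f), self.target_classifier(f), self.rotation_head(f)


def rotation_pretext(images):
    """Build a rotation pretext batch: each image is rotated by a random
    multiple of 90 degrees, labeled with that multiple (0-3)."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, k=int(k), dims=(-2, -1)) for img, k in zip(images, labels)]
    )
    return rotated, labels
```

In this sketch, target-domain training would combine the outputs of the two classifiers with a contrastive objective over pairs of target features and a cross-entropy loss on the rotation labels; the exact weighting and adversarial formulation are specified in the paper itself.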

Related Material


[pdf]
[bibtex]
@InProceedings{Xia_2021_ICCV,
    author    = {Xia, Haifeng and Zhao, Handong and Ding, Zhengming},
    title     = {Adaptive Adversarial Network for Source-Free Domain Adaptation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {9010-9019}
}