Adapting Models to Scarce Target Data without Source Samples
Abstract
When significant discrepancies exist between the source and target data distributions, source-trained models often perform poorly in the target domain. Unsupervised domain adaptation (UDA) effectively addresses this issue without requiring labels for the target data, and more recent source-free UDA methods handle situations where the source data is inaccessible. However, UDA performance degrades substantially when target domain data is scarce. Despite the practical challenges of obtaining and storing large amounts of target data, this aspect of UDA has not been extensively investigated. Our study introduces a new method to alleviate the performance degradation of source-free UDA under target data scarcity. The proposed method retains the architecture and pretrained parameters of the source model, thereby reducing the risk of overfitting. Instead, it adds fewer than 3.3% trainable parameters, comprising a set of convolution layers with non-linearity and a spatial attention network. Empirical assessments show that our approach achieves up to a 5.4% performance improvement over existing UDA methods with limited target data on the VisDA benchmark. Similar trends are evident on the Office-31 benchmark and in multi-source UDA experiments on the Office-Home benchmark across different target domains. Our method also shows promising gains in the adapted model's generalization. These findings highlight the efficacy of our method across diverse domain adaptation scenarios.
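To make the parameter-efficient idea in the abstract concrete, the following PyTorch sketch shows a frozen source model combined with a small trainable adapter of convolution layers with non-linearity and a spatial attention module. It is only an illustration under stated assumptions: the module names (ConvAdapter, SpatialAttention, AdaptedModel), the bottleneck width, and the ResNet-50 backbone are hypothetical choices, not the authors' released implementation, and the exact trainable-parameter fraction depends on these choices.

# Minimal sketch (not the authors' code): freeze a source-trained model and train
# only a small adapter (conv layers + non-linearity) and a spatial attention module.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights


class SpatialAttention(nn.Module):
    """Simple spatial attention: reweights each spatial location of a feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.attn(x)


class ConvAdapter(nn.Module):
    """Lightweight bottleneck of conv layers with ReLU, refining frozen features."""

    def __init__(self, channels: int, bottleneck: int = 64):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, bottleneck, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen source features intact.
        return x + self.block(x)


class AdaptedModel(nn.Module):
    """Frozen backbone and classifier head; only adapter and attention are trained."""

    def __init__(self, num_classes: int = 12):
        super().__init__()
        # ImageNet weights stand in for a source-trained model in this sketch.
        src = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
        self.backbone = nn.Sequential(*list(src.children())[:-2])  # conv feature extractor
        self.adapter = ConvAdapter(channels=2048)
        self.attention = SpatialAttention(channels=2048)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(2048, num_classes)  # stands in for the source classifier head

        # Freeze all source-model parameters; only adapter/attention remain trainable.
        for p in self.backbone.parameters():
            p.requires_grad = False
        for p in self.classifier.parameters():
            p.requires_grad = False

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.backbone(x)
        feats = self.attention(self.adapter(feats))
        return self.classifier(self.pool(feats).flatten(1))


if __name__ == "__main__":
    model = AdaptedModel()
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    # With this particular bottleneck size, the trainable share is a small fraction
    # of the total; the paper's reported figure is below 3.3%.
    print(f"trainable fraction: {trainable / total:.2%}")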
Related Material
[pdf] [supp] [bibtex]
@InProceedings{Lee_2024_ACCV,
    author    = {Lee, JoonHo and Lee, Gyemin},
    title     = {Adapting Models to Scarce Target Data without Source Samples},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2024},
    pages     = {1618-1633}
}