SC-UDA: Style and Content Gaps Aware Unsupervised Domain Adaptation for Object Detection

Fuxun Yu, Di Wang, Yinpeng Chen, Nikolaos Karianakis, Tong Shen, Pei Yu, Dimitrios Lymberopoulos, Sidi Lu, Weisong Shi, Xiang Chen; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022, pp. 382-391

Abstract


Current state-of-the-art object detectors can suffer a significant performance drop when deployed in the wild due to domain gaps with the training data. Unsupervised Domain Adaptation (UDA) is a promising approach for adapting detectors to new domains/environments without expensive labeling costs. Previous mainstream UDA works for object detection have usually focused on image-level and/or feature-level adaptation using adversarial learning methods. In this work, we show that such adversarial-based methods can only reduce the domain style gap, but cannot address the domain content gap, which is also important for object detectors. To overcome this limitation, we propose the SC-UDA framework to reduce both gaps concurrently: we propose fine-grained domain style transfer to reduce the style gap while preserving finer image details for detecting small objects; we then leverage pseudo-label-based self-training to reduce the content gap; and to address pseudo-label error accumulation during self-training, we propose novel optimizations, including uncertainty-based pseudo labeling and an imbalanced mini-batch sampling strategy. Experimental results show that our approach consistently outperforms prior state-of-the-art methods (by up to 8.6%, 2.7%, and 2.5% mAP on three UDA benchmarks).
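The abstract does not include implementation details, so the following is only a minimal, hypothetical Python sketch of the two self-training safeguards it names: uncertainty-based pseudo labeling and imbalanced mini-batch sampling. The score threshold, the per-detection uncertainty field, and the 3:1 source/target ratio are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch (not the authors' implementation) of the two
# self-training ideas named in the abstract.
import random

def filter_pseudo_labels(detections, score_thresh=0.8, max_uncertainty=0.2):
    """Keep only detections that are both confident and low-uncertainty.
    Each detection is assumed to be a dict with 'box', 'label', 'score',
    and 'uncertainty' keys; thresholds here are assumptions."""
    kept = []
    for det in detections:
        if det["score"] >= score_thresh and det["uncertainty"] <= max_uncertainty:
            kept.append({"box": det["box"], "label": det["label"]})
    return kept

def imbalanced_minibatch(source_imgs, target_imgs, batch_size=8, source_ratio=0.75):
    """Sample labeled source images and pseudo-labeled target images at an
    unequal ratio so that noisy pseudo labels dominate less of each batch.
    The 0.75 ratio is an assumed value for illustration."""
    n_source = int(round(batch_size * source_ratio))
    n_target = batch_size - n_source
    batch = random.sample(source_imgs, n_source) + random.sample(target_imgs, n_target)
    random.shuffle(batch)
    return batch
```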

Related Material


[bibtex]
@InProceedings{Yu_2022_WACV,
  author    = {Yu, Fuxun and Wang, Di and Chen, Yinpeng and Karianakis, Nikolaos and Shen, Tong and Yu, Pei and Lymberopoulos, Dimitrios and Lu, Sidi and Shi, Weisong and Chen, Xiang},
  title     = {SC-UDA: Style and Content Gaps Aware Unsupervised Domain Adaptation for Object Detection},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2022},
  pages     = {382-391}
}