Multi-Source Domain Adaptation for Object Detection

Xingxu Yao, Sicheng Zhao, Pengfei Xu, Jufeng Yang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 3273-3282

Abstract


To reduce the annotation labor associated with object detection, an increasing number of studies focus on transferring knowledge learned from a labeled source domain to another unlabeled target domain. However, existing methods assume that the labeled data are sampled from a single source domain, which ignores a more general scenario in which labeled data come from multiple source domains. For this more challenging task, we propose a unified Faster R-CNN based framework, termed Divide-and-Merge Spindle Network (DMSN), which can simultaneously enhance domain invariance and preserve discriminative power. Specifically, the framework contains multiple source subnets and a pseudo target subnet. First, we propose a hierarchical feature alignment strategy that conducts strong and weak alignments for low- and high-level features, respectively, considering their different effects on object detection. Second, we develop a novel pseudo subnet learning algorithm that approximates the optimal parameters of the pseudo target subnet by a weighted combination of the parameters of the different source subnets. Finally, a consistency regularization for the region proposal network is proposed to encourage each subnet to learn more abstract invariances. Extensive experiments on different adaptation scenarios demonstrate the effectiveness of the proposed model.
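
To make the pseudo subnet learning step concrete, the following is a minimal sketch (not the authors' released code) of how a pseudo target subnet's parameters could be moved toward a weighted combination of the source subnets' parameters. The function name, the momentum value, and the assumption that the per-source weights are supplied externally (in the paper they would be derived from the adaptation procedure) are all illustrative assumptions.

import torch
import torch.nn as nn

def update_pseudo_subnet(pseudo_subnet: nn.Module,
                         source_subnets: list,
                         weights: list,
                         momentum: float = 0.99):
    # Hypothetical sketch: the pseudo target subnet tracks a weighted
    # combination of the source subnets' parameters via an EMA-style update.
    source_params = [dict(s.named_parameters()) for s in source_subnets]
    with torch.no_grad():
        for name, p_tgt in pseudo_subnet.named_parameters():
            # Weighted combination of the corresponding source parameters;
            # `weights` is assumed to be non-negative and to sum to one.
            combined = sum(w * sp[name] for w, sp in zip(weights, source_params))
            # Move the pseudo subnet parameter toward the combination.
            p_tgt.mul_(momentum).add_(combined, alpha=1.0 - momentum)

All subnets are assumed to share the same architecture (hence identical parameter names); only the update rule for the pseudo target subnet is illustrated here, not the hierarchical feature alignment or the RPN consistency regularization.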

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Yao_2021_ICCV,
    author    = {Yao, Xingxu and Zhao, Sicheng and Xu, Pengfei and Yang, Jufeng},
    title     = {Multi-Source Domain Adaptation for Object Detection},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {3273-3282}
}