Dual Prototype-driven Objectness Decoupling for Cross-Domain Object Detection in Urban Scene
Abstract
Unsupervised domain adaptation aims to mitigate the domain gap between the source and target domains. Despite domain shifts, we observe intrinsic knowledge that spans domains for object detection in urban driving scenes. First, objects within the same category exhibit consistent characteristics across extracted ROIs. Second, the extracted ROIs share similar patterns in where foreground and background appear during object detection. To exploit this knowledge, we present DuPDA, a method that effectively adapts object detectors to target domains by applying domain-invariant knowledge to separable objectness during training. Specifically, we construct categorical and regional prototypes, each updated through its own specialized moving alignment. These prototypes serve as valuable references for training unlabeled target objects via feature similarity. Leveraging these prototypes, we determine a boundary that separates the foreground and background regions within the target ROIs and train each region separately, thereby transferring knowledge that focuses on each respective region. Our DuPDA surpasses previous state-of-the-art methods under various evaluation protocols on six benchmarks.
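To make the abstract's pipeline concrete, below is a minimal PyTorch sketch of how dual prototype banks of this kind might be maintained and queried. It is not the authors' code: the class name `DualPrototypes`, the exponential-moving-average update rule, the `momentum` value, and cosine similarity as the matching measure are all illustrative assumptions.

```python
# Hypothetical sketch of dual prototype banks: per-category ("categorical")
# prototypes and foreground/background ("regional") prototypes, each
# refreshed by an exponential moving average over labeled source ROI
# features and queried with cosine similarity for unlabeled target ROIs.
import torch
import torch.nn.functional as F


class DualPrototypes:
    def __init__(self, num_classes: int, feat_dim: int, momentum: float = 0.99):
        self.momentum = momentum
        # One prototype per object category.
        self.categorical = torch.zeros(num_classes, feat_dim)
        # Two regional prototypes: index 0 = foreground, 1 = background.
        self.regional = torch.zeros(2, feat_dim)

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor, is_fg: torch.Tensor):
        """EMA-update both banks with labeled source ROI features."""
        feats = F.normalize(feats, dim=1)
        for c in labels.unique():
            mean_c = feats[labels == c].mean(dim=0)
            self.categorical[c] = (
                self.momentum * self.categorical[c] + (1 - self.momentum) * mean_c
            )
        for r, mask in enumerate((is_fg, ~is_fg)):
            if mask.any():
                mean_r = feats[mask].mean(dim=0)
                self.regional[r] = (
                    self.momentum * self.regional[r] + (1 - self.momentum) * mean_r
                )

    def similarity_targets(self, target_feats: torch.Tensor):
        """Cosine similarity of unlabeled target ROI features to both banks.

        Categorical similarities act as soft class references; regional
        similarities give a foreground/background score that could define
        the decoupling boundary inside each target ROI.
        """
        target_feats = F.normalize(target_feats, dim=1)
        cat_sim = target_feats @ F.normalize(self.categorical, dim=1).T
        reg_sim = target_feats @ F.normalize(self.regional, dim=1).T
        return cat_sim, reg_sim
```

A training loop would call `update` on source ROIs each iteration and use `similarity_targets` to derive soft references and a foreground/background split for target ROIs; how those signals enter the detection losses is specific to the paper and not reproduced here.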
Related Material
[pdf] [supp] [bibtex]
@InProceedings{Kim_2024_ACCV,
    author    = {Kim, Taehoon and Na, Jaemin and Hwang, Joong-won and Chang, Hyung Jin and Hwang, Wonjun},
    title     = {Dual Prototype-driven Objectness Decoupling for Cross-Domain Object Detection in Urban Scene},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2024},
    pages     = {1148-1165}
}