- [pdf] [supp] [arXiv]
Adaptive Self-Training for Object Detection
Deep learning has emerged as an effective solution for object detection in images, but at the cost of requiring large labeled datasets. To mitigate this cost, semi-supervised object detection methods, which leverage abundant unlabeled data, have been proposed and have already shown impressive results. However, these methods often rely on a thresholding mechanism to allocate pseudo-labels. This threshold value is usually determined empirically for each dataset, which is time consuming and requires a new, costly parameter search whenever the domain changes. In this work, we introduce a new teacher-student method, named Adaptive Self-Training for Object Detection (ASTOD), which is simple and effective. ASTOD selects pseudo-labels adaptively by examining the score histogram. In addition, we introduce the idea of systematically refining the student, after training, with the labeled data only, to improve its performance. While the teacher and the student of ASTOD are trained separately, in the end the refined student replaces the teacher in an iterative fashion. Our experiments on the MS-COCO dataset show that our method consistently outperforms other adaptive state-of-the-art methods and performs on par with methods that require a manual parameter sweep, which are therefore of limited use in practice. Additional experiments against a supervised baseline on the DIOR dataset of satellite images lead to similar conclusions and demonstrate that the score threshold can be adapted automatically in self-training, regardless of the data distribution. The code is available at https://github.com/rvandeghen/ASTOD.
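The abstract states that ASTOD picks its pseudo-label threshold from the score histogram rather than from a fixed, hand-tuned value. As a minimal sketch of that idea, the snippet below chooses a threshold at a valley of the confidence-score histogram; the exact selection rule, the function name `adaptive_threshold`, and the bin count are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def adaptive_threshold(scores, bins=20):
    """Pick a cutoff for pseudo-labels from the score histogram.

    Hypothetical rule (for illustration only): place the threshold at
    the least-populated interior bin, i.e. the "valley" separating a
    low-confidence mode (likely noise) from a high-confidence mode.
    """
    counts, edges = np.histogram(scores, bins=bins, range=(0.0, 1.0))
    # Skip the outermost bins so the valley lies strictly inside (0, 1).
    valley = int(np.argmin(counts[1:-1])) + 1
    return float(edges[valley])

# Example: a bimodal score distribution, mixing noisy and confident
# detections, as a teacher model might produce on unlabeled images.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(2, 8, 500), rng.beta(8, 2, 500)])

t = adaptive_threshold(scores)
keep = scores >= t  # detections kept as pseudo-labels for the student
```

Because the threshold is derived from the data itself, no per-dataset parameter sweep is needed when the score distribution shifts, which is the practical advantage the abstract highlights.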