Towards Few-Annotation Learning for Object Detection: Are Transformer-Based Models More Efficient?

Quentin Bouniot, Angélique Loesch, Romaric Audigier, Amaury Habrard; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 75-84

Abstract


For specialized and dense downstream tasks such as object detection, labeling data requires expertise and can be very expensive, making few-shot and semi-supervised models much more attractive alternatives. While in the few-shot setup we observe that transformer-based object detectors perform better than convolution-based two-stage models for a similar number of parameters, they are not as effective when used with recent approaches in the semi-supervised setting. In this paper, we propose a semi-supervised method tailored for the current state-of-the-art object detector Deformable DETR in the few-annotation learning setup, using a student-teacher architecture that avoids relying on a sensitive post-processing of the pseudo-labels generated by the teacher model. We evaluate our method on the semi-supervised object detection benchmarks COCO and Pascal VOC, where it outperforms previous methods, especially when annotations are scarce. We believe that our contributions open new possibilities for adapting similar object detection methods to this setup as well.
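The student-teacher architecture mentioned in the abstract typically keeps the teacher as an exponential moving average (EMA) of the student's weights, so the teacher produces stable pseudo-labels without separate training. The following is a minimal illustrative sketch of such a generic EMA update (as used in Mean-Teacher-style semi-supervised methods), not the paper's exact implementation; parameter names and the momentum value are assumptions.

```python
# Minimal sketch of the EMA teacher update used in generic
# student-teacher semi-supervised training (Mean-Teacher style).
# This is an illustration, not the paper's exact implementation.

def ema_update(teacher_params, student_params, momentum=0.999):
    """Return new teacher parameters as an EMA of the student's.

    teacher_params / student_params: dicts mapping parameter names
    to values (scalars here for simplicity; tensors in practice).
    """
    return {
        name: momentum * t + (1.0 - momentum) * student_params[name]
        for name, t in teacher_params.items()
    }

# Toy example: one scalar "parameter", student held fixed.
teacher = {"w": 0.0}
student = {"w": 1.0}
for _ in range(3):  # a few training steps
    teacher = ema_update(teacher, student, momentum=0.9)
print(round(teacher["w"], 3))  # the teacher slowly drifts toward the student
```

In practice the teacher then runs inference on unlabeled images, and its predictions supervise the student; the contribution described in the abstract is to avoid the sensitive post-processing (e.g. confidence thresholding) usually applied to those pseudo-labels.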

Related Material


[bibtex]
@InProceedings{Bouniot_2023_WACV,
    author    = {Bouniot, Quentin and Loesch, Ang\'elique and Audigier, Romaric and Habrard, Amaury},
    title     = {Towards Few-Annotation Learning for Object Detection: Are Transformer-Based Models More Efficient?},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {75-84}
}