Self-Supervised Object Detection and Retrieval Using Unlabeled Videos

Elad Amrani, Rami Ben-Ari, Inbar Shapira, Tal Hakim, Alex Bronstein; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 954-955

Abstract


Learning an object detection or retrieval system requires a large dataset with manual annotations. Such data are expensive and time-consuming to create and therefore difficult to obtain at scale. In this work, we propose using the natural correlation between narrations and the visual presence of objects in video to learn an object detector and retriever without any manual labeling. We pose the problem as weakly supervised learning with noisy labels and propose a novel object detection and retrieval paradigm under these constraints. We handle background rejection using contrastive samples and confront the high level of label noise with a new clustering score. Our evaluation is based on a set of ten objects with manual ground-truth annotations in almost 5,000 frames extracted from instructional web videos. We demonstrate superior results compared to state-of-the-art weakly supervised approaches and also report a strongly labeled upper bound. While the focus of the paper is object detection and retrieval, the proposed methodology can be applied to a broader range of noisy weakly supervised problems.
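
To illustrate the kind of objective the abstract alludes to, the following is a minimal sketch, not the paper's actual formulation: an InfoNCE-style contrastive loss in which region proposals from frames whose narration mentions the target object are scored against a class prototype, while proposals from frames that do not mention it serve as contrastive (background) samples. All function names, tensor shapes, and the temperature value are illustrative assumptions.

import torch
import torch.nn.functional as F


def contrastive_background_loss(pos_regions, neg_regions, prototype, temperature=0.1):
    """Hypothetical background-rejection loss (assumed, not from the paper).

    pos_regions: (P, D) proposal embeddings from frames whose narration mentions the object.
    neg_regions: (N, D) proposal embeddings from frames that do not mention it (contrastive samples).
    prototype:   (D,)   class embedding, e.g. derived from the narration.
    """
    pos_sim = F.cosine_similarity(pos_regions, prototype.unsqueeze(0), dim=1) / temperature  # (P,)
    neg_sim = F.cosine_similarity(neg_regions, prototype.unsqueeze(0), dim=1) / temperature  # (N,)
    # Contrast each positive proposal against every contrastive (background) proposal.
    logits = torch.cat(
        [pos_sim.unsqueeze(1), neg_sim.unsqueeze(0).expand(pos_sim.size(0), -1)], dim=1
    )  # (P, 1 + N)
    targets = torch.zeros(pos_sim.size(0), dtype=torch.long)  # the positive sits at index 0
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    torch.manual_seed(0)
    loss = contrastive_background_loss(torch.randn(8, 128), torch.randn(32, 128), torch.randn(128))
    print(float(loss))

Under this sketch, minimizing the loss pulls proposals that co-occur with the narrated object toward the class prototype and pushes background proposals away, which is one plausible way to realize "background rejection by contrastive samples"; the paper's clustering score for handling label noise is not modeled here.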

Related Material


[pdf]
[bibtex]
@InProceedings{Amrani_2020_CVPR_Workshops,
author = {Amrani, Elad and Ben-Ari, Rami and Shapira, Inbar and Hakim, Tal and Bronstein, Alex},
title = {Self-Supervised Object Detection and Retrieval Using Unlabeled Videos},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}