Unsupervised Hard Example Mining from Videos for Improved Object Detection
SouYoung Jin, Aruni RoyChowdhury, Huaizu Jiang, Ashish Singh, Aditya Prasad, Deep Chakraborty, Erik Learned-Miller; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 307-324
Abstract
Important gains have recently been obtained in object detection by using training objectives that focus on hard negative examples, i.e., negative examples that are currently rated as positive or ambiguous by the detector. These examples can strongly influence parameters when the network is trained to correct them. Unfortunately, they are often sparse in the training data, and are expensive to obtain. In this work, we show how large numbers of hard negatives can be obtained automatically by analyzing the output of a trained detector on video sequences. In particular, detections that are isolated in time, i.e., that have no associated preceding or following detections, are likely to be hard negatives. We describe simple procedures for mining large numbers of such hard negatives (and also hard positives) from unlabeled video data. Our experiments show that retraining detectors on these automatically obtained examples often significantly improves performance. We present experiments on multiple architectures and multiple data sets, including face detection, pedestrian detection and other object categories.
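As a rough illustration of the temporal-isolation idea described above, the sketch below flags confident detections that have no overlapping detection in the immediately preceding or following frame as candidate hard negatives. The function and parameter names (mine_isolated_detections, iou_thresh, score_thresh) and the per-frame data layout are illustrative assumptions, not the authors' code; the paper's actual mining procedure may associate detections across frames differently.

# Minimal sketch of the temporal-isolation heuristic (assumed names and thresholds).
# frames: list over time, where frames[t] is a list of (box, score) pairs and
# box = (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def mine_isolated_detections(frames, iou_thresh=0.3, score_thresh=0.8):
    """Return (frame_index, box) pairs for confident detections with no
    overlapping detection in the adjacent frames -- candidate hard negatives
    under the temporal-isolation assumption."""
    hard_negatives = []
    for t, dets in enumerate(frames):
        prev_dets = frames[t - 1] if t > 0 else []
        next_dets = frames[t + 1] if t + 1 < len(frames) else []
        for box, score in dets:
            if score < score_thresh:
                continue
            linked = any(iou(box, b) >= iou_thresh
                         for b, _ in prev_dets + next_dets)
            if not linked:
                hard_negatives.append((t, box))
    return hard_negatives

The mined boxes would then be added back to the training set as negatives when the detector is retrained, per the procedure summarized in the abstract.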
Related Material
[pdf]  [arXiv]  [bibtex]
@InProceedings{Jin_2018_ECCV,
author = {Jin, SouYoung and RoyChowdhury, Aruni and Jiang, Huaizu and Singh, Ashish and Prasad, Aditya and Chakraborty, Deep and Learned-Miller, Erik},
title = {Unsupervised Hard Example Mining from Videos for Improved Object Detection},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}