Improving Occlusion and Hard Negative Handling for Single-Stage Pedestrian Detectors

Junhyug Noh, Soochan Lee, Beomsu Kim, Gunhee Kim; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 966-974

Abstract


We propose methods of addressing two critical issues of pedestrian detection: (i) occlusion of target objects, which causes false negatives, and (ii) confusion with hard negative examples such as vertical structures, which causes false positives. Our solutions to these two problems are general and flexible enough to be applicable to any single-stage detection model. We implement our methods into four state-of-the-art single-stage models: SqueezeDet+, YOLOv2, SSD, and DSSD. We empirically validate that our approach improves the performance of all four models on the Caltech Pedestrian and CityPersons datasets. Moreover, in some heavy-occlusion settings, our approach achieves the best reported performance. Specifically, our two solutions are as follows. For better occlusion handling, we update the output tensors of single-stage models so that they include the prediction of part confidence scores, from which we compute a final occlusion-aware detection score. For reducing confusion with hard negative examples, we introduce average grid classifiers as post-refinement classifiers, trainable in an end-to-end fashion with little memory and time overhead (e.g. an increase of 1--5 MB in memory and 1--2 ms in inference time).
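The occlusion-handling idea above can be sketched as follows: the detector outputs, per candidate box, both a full-body confidence and a set of part confidence scores, which are then fused into a single occlusion-aware detection score. The function name, the grid of parts, and the simple averaging scheme below are illustrative assumptions, not the paper's exact aggregation.

```python
# Hedged sketch: fuse a full-body confidence with per-part confidence
# scores into an occlusion-aware detection score. The 50/50 averaging
# here is an assumption for illustration; the paper's actual fusion of
# part scores may differ.

def occlusion_aware_score(body_score, part_scores):
    """Combine the full-body score with the mean of part scores.

    body_score  -- detector confidence for the whole pedestrian box (0..1)
    part_scores -- confidences for body parts (e.g., a grid over the box)
    """
    if not part_scores:
        return body_score
    part_mean = sum(part_scores) / len(part_scores)
    return 0.5 * (body_score + part_mean)

# A partially occluded pedestrian: lower-body parts score low, but the
# visible upper-body parts still contribute to the final score.
score = occlusion_aware_score(0.6, [0.9, 0.8, 0.1, 0.05])
print(round(score, 5))  # → 0.53125
```

The key point is that a box whose visible parts are confidently detected is not penalized as heavily as its raw full-body score alone would suggest, which reduces false negatives under occlusion.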

Related Material


@InProceedings{Noh_2018_CVPR,
author = {Noh, Junhyug and Lee, Soochan and Kim, Beomsu and Kim, Gunhee},
title = {Improving Occlusion and Hard Negative Handling for Single-Stage Pedestrian Detectors},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}