Locating Objects Without Bounding Boxes

Javier Ribera, David Güera, Yuhao Chen, Edward J. Delp; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 6479-6489


Recent advances in convolutional neural networks (CNNs) have achieved remarkable results in locating objects in images. In these networks, the training procedure usually requires providing bounding boxes or the maximum number of expected objects. In this paper, we address the task of estimating object locations without annotated bounding boxes, which are typically hand-drawn and time-consuming to label. We propose a loss function that can be used in any fully convolutional network (FCN) to estimate object locations. This loss function is a modification of the average Hausdorff distance between two unordered sets of points. The proposed method has no notion of bounding boxes, region proposals, or sliding windows. We evaluate our method on three datasets designed to locate people's heads, pupil centers, and plant centers. We outperform state-of-the-art generic object detectors and methods fine-tuned for pupil tracking.
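For intuition, the (unmodified) average Hausdorff distance between two unordered point sets X and Y sums, symmetrically, each point's distance to its nearest neighbor in the other set, averaged over each set. The sketch below is a minimal NumPy illustration of that base metric only; it is not the paper's modified, differentiable loss (the function name and shapes are our own choices for illustration):

```python
import numpy as np

def average_hausdorff(X, Y):
    """Average Hausdorff distance between two unordered point sets.

    X: (m, d) array of points, Y: (n, d) array of points.
    Symmetric in X and Y; zero iff the sets coincide.
    """
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    # Pairwise Euclidean distances, shape (m, n).
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    # Mean distance from each x to its nearest y, plus the reverse direction.
    return D.min(axis=1).mean() + D.min(axis=0).mean()

pts = [[0.0, 0.0], [1.0, 1.0]]
print(average_hausdorff(pts, pts))             # 0.0 for identical sets
print(average_hausdorff(pts, [[0.0, 1.0]]))    # 2.0: each point is 1 away
```

Because the minimum over nearest neighbors is piecewise smooth, a term-by-term variant of this quantity can serve as a training loss for an FCN's predicted locations, which is the direction the paper develops.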

Related Material

@InProceedings{Ribera_2019_CVPR,
  author    = {Ribera, Javier and Guera, David and Chen, Yuhao and Delp, Edward J.},
  title     = {Locating Objects Without Bounding Boxes},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}