Toward Scale-Invariance and Position-Sensitive Region Proposal Networks

Hsueh-Fu Lu, Xiaofei Du, Ping-Lin Chang; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 168-183

Abstract


Accurately localising object proposals is an important precondition for achieving a high detection rate in state-of-the-art object detection frameworks. The accuracy of an object detection method has been shown to be highly related to the average recall (AR) of its proposals. In this work, we propose an advanced object proposal network that favours translation-invariance for objectness classification, translation-variance for bounding box regression, large effective receptive fields for capturing global context, and scale-invariance for dealing with a range of object sizes from extremely small to large. The network architecture is designed to be simple yet effective, with real-time performance. Without bells and whistles, the proposed object proposal network significantly improves AR at 1,000 proposals by 35% and 45% on the PASCAL VOC and COCO datasets respectively, and has a fast inference time of 44.8 ms for an input image size of 640x640. Empirical studies have also shown that the proposed method is class-agnostic and generalises well to general object proposal.
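The abstract reports gains in AR at 1,000 proposals but does not define the metric. As background, average recall is commonly computed by averaging proposal recall over a range of IoU thresholds (0.5 to 0.95 in steps of 0.05 in the COCO-style evaluation). The following is a minimal, single-image sketch of that computation, assuming boxes in [x1, y1, x2, y2] format; the function names and the per-image aggregation are illustrative and not the authors' evaluation code, which aggregates over all ground-truth boxes in a dataset.

```python
import numpy as np

def iou_matrix(proposals, gt_boxes):
    """Pairwise IoU between proposals (N, 4) and ground-truth boxes (M, 4),
    with boxes given as [x1, y1, x2, y2]."""
    ious = np.zeros((len(proposals), len(gt_boxes)))
    for i, p in enumerate(proposals):
        for j, g in enumerate(gt_boxes):
            ix1, iy1 = max(p[0], g[0]), max(p[1], g[1])
            ix2, iy2 = min(p[2], g[2]), min(p[3], g[3])
            inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
            area_p = (p[2] - p[0]) * (p[3] - p[1])
            area_g = (g[2] - g[0]) * (g[3] - g[1])
            ious[i, j] = inter / (area_p + area_g - inter)
    return ious

def average_recall(proposals, gt_boxes, top_k=1000,
                   thresholds=np.arange(0.5, 1.0, 0.05)):
    """Recall of the top_k proposals averaged over IoU thresholds.
    A ground-truth box counts as recalled at threshold t if its best
    overlapping proposal has IoU >= t (single-image simplification)."""
    ious = iou_matrix(proposals[:top_k], gt_boxes)
    best_iou_per_gt = ious.max(axis=0)  # best proposal IoU for each GT box
    recalls = [(best_iou_per_gt >= t).mean() for t in thresholds]
    return float(np.mean(recalls))
```

Proposals are assumed to be sorted by objectness score before truncation to the top 1,000, matching the AR@1,000 setting quoted in the abstract.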

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Lu_2018_ECCV,
author = {Lu, Hsueh-Fu and Du, Xiaofei and Chang, Ping-Lin},
title = {Toward Scale-Invariance and Position-Sensitive Region Proposal Networks},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}