NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection

Golnaz Ghiasi, Tsung-Yi Lin, Quoc V. Le; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 7036-7045

Abstract


Current state-of-the-art convolutional architectures for object detection are manually designed. Here we aim to learn a better feature pyramid network architecture for object detection. We adopt Neural Architecture Search and discover a new feature pyramid architecture in a novel scalable search space covering all cross-scale connections. The discovered architecture, named NAS-FPN, consists of a combination of top-down and bottom-up connections to fuse features across scales. NAS-FPN, combined with various backbone models in the RetinaNet framework, achieves a better accuracy and latency tradeoff than state-of-the-art object detection models. NAS-FPN improves mobile detection accuracy by 2 AP over the state-of-the-art SSDLite with MobileNetV2 model in [32], and achieves 48.3 AP, surpassing Mask R-CNN [10] detection accuracy with less computation time.
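The cross-scale fusion the abstract describes is built from "merging cells" that combine two feature levels at a chosen output resolution. Below is a minimal sketch of one such sum-merging cell, assuming nearest-neighbor resampling for both up- and down-scaling; the paper's actual implementation differs (e.g. it uses max pooling for downsampling and also searches over a global-pooling attention combination), and the function names here are illustrative, not from the paper's code.

```python
import numpy as np

def resample(feat, target_hw):
    """Nearest-neighbor resample an (H, W, C) feature map to target_hw.

    Illustrative helper: works for both upsampling and downsampling by
    repeating / skipping rows and columns.
    """
    H, W, _ = feat.shape
    th, tw = target_hw
    rows = np.arange(th) * H // th   # source row index for each output row
    cols = np.arange(tw) * W // tw   # source col index for each output col
    return feat[rows][:, cols]

def merge_cell_sum(feat_a, feat_b, out_hw):
    """One NAS-FPN-style 'sum' merging cell: bring both input feature
    levels to the output resolution, then add them element-wise."""
    return resample(feat_a, out_hw) + resample(feat_b, out_hw)

# Example: fuse a high-resolution level (e.g. P3, 32x32) with a
# low-resolution level (e.g. P5, 8x8) at an intermediate scale (P4, 16x16).
p3 = np.ones((32, 32, 4))
p5 = 2 * np.ones((8, 8, 4))
p4 = merge_cell_sum(p3, p5, (16, 16))
```

The search space in the paper chooses, for each merging cell, which two input levels to combine, the output resolution, and the combination op; stacking many such cells yields the discovered top-down plus bottom-up wiring.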

Related Material


BibTeX:
@InProceedings{Ghiasi_2019_CVPR,
author = {Ghiasi, Golnaz and Lin, Tsung-Yi and Le, Quoc V.},
title = {NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}