Mobile Video Object Detection With Temporally-Aware Feature Maps

Mason Liu, Menglong Zhu; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 5686-5695

Abstract


This paper introduces an online model for object detection in videos with real-time performance on mobile and embedded devices. Our approach combines fast single-image object detection with convolutional long short-term memory (LSTM) layers to create an interwoven recurrent-convolutional architecture. Additionally, we propose an efficient Bottleneck-LSTM layer that significantly reduces computational cost compared to regular LSTMs. Our network achieves temporal awareness by using Bottleneck-LSTMs to refine and propagate feature maps across frames. This approach is substantially faster than existing detection methods in video, outperforming the fastest single-frame models in model size and computational cost while attaining accuracy comparable to much more expensive single-frame models on the ImageNet VID 2015 dataset. Our model reaches a real-time inference speed of up to 15 FPS on a mobile CPU.
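To make the architecture concrete, below is a minimal sketch (not the authors' code) of a Bottleneck-LSTM-style convolutional LSTM cell in PyTorch. It illustrates the ideas described in the abstract: input features and the previous hidden state are first fused through a channel-reducing bottleneck convolution, the gates are then computed with a depthwise-separable convolution, and ReLU activations are assumed in place of tanh. Class and parameter names are illustrative, and the exact gate formulation and hyperparameters are assumptions rather than a faithful reimplementation.

import torch
import torch.nn as nn

class BottleneckLSTMCell(nn.Module):
    """Sketch of a convolutional LSTM cell with a channel bottleneck."""

    def __init__(self, in_channels, hidden_channels):
        super().__init__()
        self.hidden_channels = hidden_channels
        # Bottleneck conv: fuse input features and previous hidden state
        # into a reduced-channel representation before computing gates.
        self.bottleneck = nn.Conv2d(in_channels + hidden_channels,
                                    hidden_channels, kernel_size=3, padding=1)
        # Depthwise-separable convolution producing all four gates at once.
        self.gates_dw = nn.Conv2d(hidden_channels, hidden_channels,
                                  kernel_size=3, padding=1,
                                  groups=hidden_channels)
        self.gates_pw = nn.Conv2d(hidden_channels, 4 * hidden_channels,
                                  kernel_size=1)
        self.act = nn.ReLU()  # assumed ReLU in place of tanh

    def forward(self, x, state):
        h_prev, c_prev = state
        b = self.act(self.bottleneck(torch.cat([x, h_prev], dim=1)))
        gates = self.gates_pw(self.gates_dw(b))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        c = torch.sigmoid(f) * c_prev + torch.sigmoid(i) * self.act(g)
        h = torch.sigmoid(o) * self.act(c)
        return h, (h, c)

    def init_state(self, batch, height, width, device=None):
        zeros = torch.zeros(batch, self.hidden_channels, height, width,
                            device=device)
        return (zeros, zeros.clone())

In use, per-frame feature maps from the single-image detector would be passed through such a cell, with the recurrent state carried across frames so that the refined features feeding the detection head are temporally aware.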

Related Material


@InProceedings{Liu_2018_CVPR,
author = {Liu, Mason and Zhu, Menglong},
title = {Mobile Video Object Detection With Temporally-Aware Feature Maps},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}