Leveraging Long-Range Temporal Relationships Between Proposals for Video Object Detection

Mykhailo Shvets, Wei Liu, Alexander C. Berg; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 9756-9764

Abstract


Single-frame object detectors sometimes perform well on videos, even without temporal context. However, challenges such as occlusion, motion blur, and rare object poses are hard to resolve without temporal awareness. Thus, there is a strong need to improve video object detection by considering long-range temporal dependencies. In this paper, we present a light-weight modification to a single-frame detector that accounts for arbitrarily long dependencies in a video. It significantly improves the accuracy of a single-frame detector with negligible compute overhead. The key component of our approach is a novel temporal relation module, operating on object proposals, that learns the similarities between proposals from different frames and selects proposals from the past and/or future to support the current proposals. Our final "causal" model, without any offline post-processing steps, runs at a speed similar to that of a single-frame detector and achieves state-of-the-art video object detection on the ImageNet VID dataset.
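
The abstract describes the temporal relation module only at a high level. Below is a minimal, illustrative sketch of one plausible realization in PyTorch, assuming that proposal features are fixed-size RoI-pooled vectors and that the learned similarity is a scaled dot-product between current-frame proposals (queries) and proposals gathered from past and/or future frames (keys and values). Names such as TemporalRelationModule, feat_dim, and support_feats are hypothetical and do not reflect the authors' actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalRelationModule(nn.Module):
    """Sketch: enhance current-frame proposal features with a weighted sum of
    supporting-frame proposals, where weights come from learned similarities."""

    def __init__(self, feat_dim: int = 1024, key_dim: int = 256):
        super().__init__()
        # Projections producing the embeddings used to score proposal pairs.
        self.query_proj = nn.Linear(feat_dim, key_dim)
        self.key_proj = nn.Linear(feat_dim, key_dim)
        # Value transform applied to supporting proposals before aggregation.
        self.value_proj = nn.Linear(feat_dim, feat_dim)
        self.scale = key_dim ** -0.5

    def forward(self, cur_feats: torch.Tensor, support_feats: torch.Tensor) -> torch.Tensor:
        # cur_feats:     (N_cur, feat_dim)  proposals from the current frame
        # support_feats: (N_sup, feat_dim)  proposals pooled from past and/or future frames
        q = self.query_proj(cur_feats)        # (N_cur, key_dim)
        k = self.key_proj(support_feats)      # (N_sup, key_dim)
        v = self.value_proj(support_feats)    # (N_sup, feat_dim)

        # Learned similarity between every (current, supporting) proposal pair.
        sim = (q @ k.t()) * self.scale        # (N_cur, N_sup)
        weights = F.softmax(sim, dim=-1)

        # Each current proposal is supported by a weighted sum of other-frame
        # proposals; a residual keeps the original single-frame feature.
        return cur_feats + weights @ v


# Usage: support 300 current-frame proposals with 900 proposals from other frames.
if __name__ == "__main__":
    module = TemporalRelationModule(feat_dim=1024)
    cur = torch.randn(300, 1024)
    support = torch.randn(900, 1024)
    enhanced = module(cur, support)   # (300, 1024), then fed to the detection head

In a "causal" setting, support_feats would be drawn only from past frames, which is what allows the model to run online at near single-frame speed.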

Related Material

[bibtex]
@InProceedings{Shvets_2019_ICCV,
author = {Shvets, Mykhailo and Liu, Wei and Berg, Alexander C.},
title = {Leveraging Long-Range Temporal Relationships Between Proposals for Video Object Detection},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}