Streamlined Dense Video Captioning
Jonghwan Mun, Linjie Yang, Zhou Ren, Ning Xu, Bohyung Han; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 6588-6597
Abstract
Dense video captioning is an extremely challenging task, since an accurate and coherent description of events in a video requires holistic understanding of the video contents as well as contextual reasoning about individual events. Most existing approaches handle this problem by first detecting event proposals from a video and then captioning a subset of the proposals. As a result, the generated sentences are prone to being redundant or inconsistent since they fail to consider temporal dependencies between events. To tackle this challenge, we propose a novel dense video captioning framework, which explicitly models temporal dependency across events in a video and leverages visual and linguistic context from prior events for coherent storytelling. This objective is achieved by 1) integrating an event sequence generation network that adaptively selects a sequence of event proposals, and 2) feeding the sequence of event proposals to our sequential video captioning network, which is trained by reinforcement learning with two-level rewards---at both event and episode levels---for better context modeling. The proposed technique achieves outstanding performance on the ActivityNet Captions dataset in most metrics.
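
The sketch below is a minimal PyTorch-style illustration of the pipeline outlined in the abstract: an event sequence generation network selects an ordered subset of event proposals, a sequential captioning network generates one sentence per selected event while carrying hidden state across events, and training combines an event-level and an episode-level reward via REINFORCE. All module names, dimensions, and reward definitions here are illustrative assumptions for exposition, not the authors' exact implementation.

import torch
import torch.nn as nn


class EventSequenceSelector(nn.Module):
    """Pointer-style network that picks the next event proposal at each step.
    (Illustrative sketch; the paper's event sequence generation network may differ.)"""
    def __init__(self, feat_dim=512, hidden_dim=512):
        super().__init__()
        self.rnn = nn.GRUCell(feat_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim + feat_dim, 1)

    def forward(self, proposal_feats, max_events=3):
        # proposal_feats: (num_proposals, feat_dim)
        h = proposal_feats.new_zeros(self.rnn.hidden_size)
        selected, log_probs = [], []
        inp = proposal_feats.mean(dim=0)               # start from a global video summary
        for _ in range(max_events):
            h = self.rnn(inp.unsqueeze(0), h.unsqueeze(0)).squeeze(0)
            context = h.unsqueeze(0).expand(proposal_feats.size(0), -1)
            scores = self.score(torch.cat([context, proposal_feats], dim=-1)).squeeze(-1)
            dist = torch.distributions.Categorical(logits=scores)
            idx = dist.sample()                        # stochastic choice enables REINFORCE
            selected.append(idx.item())
            log_probs.append(dist.log_prob(idx))
            inp = proposal_feats[idx]                  # condition the next pick on the chosen event
        return selected, torch.stack(log_probs)


class SequentialCaptioner(nn.Module):
    """Generates one sentence per event; the hidden state carries context to later events."""
    def __init__(self, feat_dim=512, hidden_dim=512, vocab_size=1000):
        super().__init__()
        self.rnn = nn.GRUCell(feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, event_feats, max_len=10):
        # event_feats: (num_events, feat_dim), ordered by the selector
        h = event_feats.new_zeros(self.rnn.hidden_size)
        captions = []
        for feat in event_feats:
            words = []
            for _ in range(max_len):
                h = self.rnn(feat.unsqueeze(0), h.unsqueeze(0)).squeeze(0)
                words.append(self.out(h).argmax().item())
            captions.append(words)                     # h is NOT reset, so context flows to the next event
        return captions


def reinforce_loss(log_probs, event_rewards, episode_reward):
    # Two-level reward: a per-event caption score plus a whole-story (episode) score
    # shared by all events; a learned or moving-average baseline is omitted for brevity.
    total = event_rewards + episode_reward
    return -(log_probs * total).sum()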
Related Material
[pdf]
[supp]
[video]
[bibtex]
@InProceedings{Mun_2019_CVPR,
author = {Mun, Jonghwan and Yang, Linjie and Ren, Zhou and Xu, Ning and Han, Bohyung},
title = {Streamlined Dense Video Captioning},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}