@InProceedings{Zhang_2021_CVPR,
  author    = {Zhang, Ziqi and Qi, Zhongang and Yuan, Chunfeng and Shan, Ying and Li, Bing and Deng, Ying and Hu, Weiming},
  title     = {Open-Book Video Captioning With Retrieve-Copy-Generate Network},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021},
  pages     = {9837-9846}
}
Open-Book Video Captioning With Retrieve-Copy-Generate Network
Abstract
In this paper, we convert the traditional video captioning task into a new paradigm, i.e., Open-book Video Captioning, which generates natural language under the prompts of video-content-relevant sentences and is not limited to the video itself. To address the open-book video captioning problem, we propose a novel Retrieve-Copy-Generate network, in which a pluggable video-to-text retriever effectively retrieves sentences from the training corpus as hints, and a copy-mechanism generator dynamically extracts expressions from the multiple retrieved sentences. The two modules can be trained end-to-end or separately, making the framework flexible and extensible. Our framework coordinates conventional retrieval-based methods with orthodox encoder-decoder methods, so it can not only draw on the diverse expressions in the retrieved sentences but also generate natural and accurate descriptions of the video content. Extensive experiments on several benchmark datasets show that our proposed approach outperforms state-of-the-art approaches, indicating the effectiveness and promise of the proposed paradigm for video captioning.
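The copy mechanism described above can be illustrated with a minimal sketch of a single decoding step: the decoder's vocabulary distribution is mixed with an attention-derived copy distribution over tokens from the retrieved sentences, weighted by a gate. All function names, shapes, and the gate variable `p_gen` here are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def copy_generate_step(vocab_logits, copy_scores, retrieved_token_ids,
                       p_gen, vocab_size):
    """One decoding step of a copy-mechanism generator (illustrative sketch).

    Mixes the decoder's generation distribution with a copy distribution
    over tokens drawn from retrieved sentences, weighted by the gate
    p_gen in [0, 1]. Names and shapes are assumptions, not the paper's API.
    """
    # Softmax over the decoder's vocabulary logits (generation path).
    vocab_dist = np.exp(vocab_logits - vocab_logits.max())
    vocab_dist /= vocab_dist.sum()

    # Softmax over attention scores for each retrieved token (copy path).
    copy_attn = np.exp(copy_scores - copy_scores.max())
    copy_attn /= copy_attn.sum()

    # Scatter copy probabilities onto vocabulary ids; repeated tokens
    # in the retrievals accumulate probability mass.
    copy_dist = np.zeros(vocab_size)
    for tok_id, p in zip(retrieved_token_ids, copy_attn):
        copy_dist[tok_id] += p

    # Final output: generate with probability p_gen, copy with 1 - p_gen.
    return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist
```

Because both component distributions sum to one, the mixture is itself a valid probability distribution over the vocabulary, which lets the generator favor copied expressions from the retrievals when the gate is low and free generation when it is high.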