StyleNet: Generating Attractive Visual Captions With Styles

Chuang Gan, Zhe Gan, Xiaodong He, Jianfeng Gao, Li Deng; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 3137-3146

Abstract

We propose a novel framework named StyleNet to address the task of generating attractive captions for images and videos with different styles. To this end, we devise a novel model component, named factored LSTM, which automatically distills the style factors in the monolingual text corpus. Then, at runtime, we can explicitly control the style in the caption generation process so as to produce attractive visual captions with the desired style. Our approach achieves this goal by leveraging two sets of data: 1) factual image/video-caption paired data, and 2) stylized monolingual text data (e.g., romantic and humorous sentences). We show experimentally that StyleNet outperforms existing approaches for generating visual captions with different styles, as measured by both automatic metrics and human evaluation, on the newly collected FlickrStyle10K image caption dataset, which contains 10K Flickr images with corresponding humorous and romantic captions.
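
As a concrete illustration of the factored-LSTM idea described above, below is a minimal sketch in PyTorch. It assumes the input-to-hidden weights are factorized as W_x = U S_k V, where only the style-specific matrix S_k is swapped per style while the recurrent weights stay shared; the per-gate stacking, dimension names, and initialization are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class FactoredLSTMCell(nn.Module):
    """One step of a factored LSTM: only S (one matrix per style) is
    style-specific; U, V, the recurrent weights, and the bias are shared."""

    def __init__(self, input_size, hidden_size, factor_size, num_styles):
        super().__init__()
        self.hidden_size = hidden_size
        # Shared factors U, V for the four gates (i, f, o, g), stacked.
        self.U = nn.Parameter(0.01 * torch.randn(4 * hidden_size, factor_size))
        self.V = nn.Parameter(0.01 * torch.randn(factor_size, input_size))
        # One style-specific factor per style (e.g., factual, humorous,
        # romantic); on stylized text, only this matrix would be updated.
        self.S = nn.Parameter(0.01 * torch.randn(num_styles, factor_size, factor_size))
        # Hidden-to-hidden weights and bias are shared across all styles.
        self.W_h = nn.Parameter(0.01 * torch.randn(4 * hidden_size, hidden_size))
        self.bias = nn.Parameter(torch.zeros(4 * hidden_size))

    def forward(self, x, state, style_id):
        h, c = state
        # Recompose the style-dependent input-to-hidden weights: W_x = U @ S_k @ V.
        W_x = self.U @ self.S[style_id] @ self.V
        gates = x @ W_x.t() + h @ self.W_h.t() + self.bias
        i, f, o, g = gates.chunk(4, dim=-1)
        c_new = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h_new = torch.sigmoid(o) * torch.tanh(c_new)
        return h_new, c_new

# Usage: one decoding step over a batch of 8 word embeddings.
cell = FactoredLSTMCell(input_size=300, hidden_size=512, factor_size=256, num_styles=3)
h = c = torch.zeros(8, 512)
h, c = cell(torch.randn(8, 300), (h, c), style_id=1)

Under this sketch, factual image/video-caption pairs would train all parameters with the factual style index, while each stylized monolingual corpus would update only its own slice of S under a language-modeling objective; at generation time, switching style_id selects the caption style.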

Related Material

[bibtex]
@InProceedings{Gan_2017_CVPR,
author = {Gan, Chuang and Gan, Zhe and He, Xiaodong and Gao, Jianfeng and Deng, Li},
title = {StyleNet: Generating Attractive Visual Captions With Styles},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}