Context-Aware Group Captioning via Self-Attention and Contrastive Features

Zhuowan Li, Quan Tran, Long Mai, Zhe Lin, Alan L. Yuille; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 3440-3450

Abstract

While image captioning has progressed rapidly, existing works focus mainly on describing single images. In this paper, we introduce a new task, context-aware group captioning, which aims to describe a group of target images in the context of another group of related reference images. Context-aware group captioning requires not only summarizing information from both the target and reference image groups but also contrasting the two. To solve this problem, we propose a framework combining a self-attention mechanism with contrastive feature construction to effectively summarize the common information within each image group while capturing the discriminative information between them. To build datasets for this task, we propose to group images and generate group captions from their single-image captions using scene-graph matching. Our datasets are constructed on top of the public Conceptual Captions dataset and our new Stock Captions dataset. Experiments on the two datasets show the effectiveness of our method on this new task.
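
The framework sentence above names two ingredients: self-attention to summarize each image group, and contrastive features to capture what separates the target group from the references. The PyTorch sketch below is a minimal illustration of those two ideas, not the authors' released code; the feature dimension, the single attention layer, and the subtraction-plus-concatenation contrast are illustrative assumptions.

import torch
import torch.nn as nn

class GroupSummarizer(nn.Module):
    """Summarize a variable-size set of per-image features with self-attention."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats):                  # feats: (batch, n_images, dim)
        attended, _ = self.attn(feats, feats, feats)
        return attended.mean(dim=1)            # (batch, dim) group summary

def contrastive_feature(target, reference):
    # Keep both group summaries plus their difference; a caption decoder
    # (omitted here) would condition on this concatenation.
    return torch.cat([target, reference, target - reference], dim=-1)

summarize = GroupSummarizer()
target_group = summarize(torch.randn(1, 5, 512))      # 5 target images
reference_group = summarize(torch.randn(1, 20, 512))  # 20 reference images
print(contrastive_feature(target_group, reference_group).shape)  # (1, 1536)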
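The dataset construction can likewise be pictured with a toy example. Below, each single-image caption is assumed to be pre-parsed into (subject, relation, object) triples; images are grouped when their scene graphs share triples, and the shared triples seed the group caption. The parsing step and the grouping criterion here are stand-in assumptions, far simpler than the paper's pipeline.

from itertools import combinations

# Hypothetical pre-parsed scene-graph triples for three captions.
graphs = {
    "img1": {("woman", "holding", "umbrella"), ("umbrella", "color", "red")},
    "img2": {("woman", "holding", "umbrella"), ("woman", "wearing", "coat")},
    "img3": {("man", "riding", "bike"), ("bike", "color", "blue")},
}

def shared_triples(image_ids):
    """Triples common to every listed image's scene graph."""
    return set.intersection(*(graphs[i] for i in image_ids))

# Pair up images that share at least one triple; the shared triples
# summarize what the group has in common ("woman holding umbrella").
for a, b in combinations(graphs, 2):
    common = shared_triples([a, b])
    if common:
        print((a, b), "->", common)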

Related Material

[bibtex]
@InProceedings{Li_2020_CVPR,
author = {Li, Zhuowan and Tran, Quan and Mai, Long and Lin, Zhe and Yuille, Alan L.},
title = {Context-Aware Group Captioning via Self-Attention and Contrastive Features},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}