Sequential Latent Spaces for Modeling the Intention During Diverse Image Captioning

Jyoti Aneja, Harsh Agrawal, Dhruv Batra, Alexander Schwing; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 4261-4270

Abstract


Diverse and accurate vision+language modeling is an important goal for retaining creative freedom and maintaining user engagement. However, adequately capturing the intricacies of diversity in language models is challenging. Recent work commonly resorts to latent variable models augmented with varying degrees of supervision from object detectors or part-of-speech tags. Common to all those methods is that the latent variable either only initializes the sentence generation process or remains identical across generation steps; neither approach offers fine-grained control. To address this concern, we propose Seq-CVAE, which learns a latent space for every word. We encourage this temporal latent space to capture the 'intention' of how to complete the sentence by mimicking a representation that summarizes the future. We illustrate the efficacy of the proposed approach on the challenging MSCOCO dataset, significantly improving diversity metrics compared to baselines while performing on par w.r.t. sentence quality.
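The abstract's core distinction, a single caption-level latent versus a fresh latent at every word step, can be sketched in a few lines. The snippet below is a toy illustration under assumed dimensions and random linear maps (`W_h`, `W_z`, `W_out` are hypothetical, not the paper's architecture): greedy decoding where either one latent `z` is reused for all steps, or a new `z_t` is drawn per word as in Seq-CVAE.

```python
import numpy as np

# Toy dimensions (assumed for illustration only).
VOCAB, HID, ZDIM, T = 8, 6, 4, 5

rng = np.random.default_rng(0)
W_h = rng.normal(size=(HID, HID))      # hypothetical recurrence weights
W_z = rng.normal(size=(ZDIM, HID))     # hypothetical latent-to-hidden map
W_out = rng.normal(size=(HID, VOCAB))  # hypothetical output projection

def generate(per_word_latent, seed=1):
    """Greedy-decode T word ids; if per_word_latent, draw a fresh z_t each step."""
    g = np.random.default_rng(seed)
    h = np.zeros(HID)
    z = g.normal(size=ZDIM)            # z_1 (or the single caption-level z)
    words = []
    for t in range(T):
        if per_word_latent and t > 0:
            z = g.normal(size=ZDIM)    # Seq-CVAE idea: new latent at every word
        h = np.tanh(h @ W_h + z @ W_z)  # toy state update conditioned on z
        words.append(int(np.argmax(h @ W_out)))
    return words
```

Re-running `generate` with different seeds plays the role of re-sampling latents for the same image: with a per-word latent, the sampling noise can redirect the sentence at any step, which is the fine-grained control the abstract contrasts with a single initial latent.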

Related Material


@InProceedings{Aneja_2019_ICCV,
author = {Aneja, Jyoti and Agrawal, Harsh and Batra, Dhruv and Schwing, Alexander},
title = {Sequential Latent Spaces for Modeling the Intention During Diverse Image Captioning},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}