Unsupervised Textual Grounding: Linking Words to Image Concepts
Raymond A. Yeh, Minh N. Do, Alexander G. Schwing; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 6125-6134
Abstract
Textual grounding, i.e., linking words to objects in images, is a challenging but important task for robotics and human-computer interaction. Existing techniques benefit from recent progress in deep learning and generally formulate the task as a supervised learning problem, selecting a bounding box from a set of possible options. Training these deep-net-based approaches requires access to large-scale datasets, which are time-consuming and expensive to construct. Therefore, we develop a completely unsupervised mechanism for textual grounding that uses hypothesis testing to link words to detected image concepts. We demonstrate our approach on the ReferIt Game dataset and the Flickr30k dataset, outperforming baselines by 7.98% and 6.96%, respectively.
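For intuition only, the sketch below illustrates how a hypothesis-testing flavor of word-to-concept linking could look: each candidate box carries a detected concept, and a simple test statistic measures how surprising the agreement between the query words and that concept is relative to a background (null) match rate. The detector interface, the word-concept similarity function, and the background rate are all illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: rank candidate boxes by how strongly their detected
# concept is supported by the query words, via a z-style test statistic.
# Nothing here is taken from the paper's implementation.

from dataclasses import dataclass
from math import sqrt
from typing import List, Tuple


@dataclass
class Detection:
    concept: str                      # e.g. "person", "car" (detector output)
    box: Tuple[int, int, int, int]    # (x1, y1, x2, y2)
    score: float                      # detector confidence in [0, 1]


def word_concept_match(word: str, concept: str) -> float:
    """Toy similarity: exact string match. A real system might instead use
    word embeddings or a lexical resource (assumption)."""
    return 1.0 if word.lower() == concept.lower() else 0.0


def grounding_score(query_words: List[str],
                    det: Detection,
                    background_rate: float = 0.1) -> float:
    """One-sided z-style statistic: how far does the observed word/concept
    agreement exceed a background match rate? `background_rate` is a
    made-up null-hypothesis parameter."""
    n = len(query_words)
    p_hat = sum(word_concept_match(w, det.concept) for w in query_words)
    p_hat = p_hat * det.score / n
    se = sqrt(background_rate * (1.0 - background_rate) / n)
    return (p_hat - background_rate) / se


def ground(query: str, detections: List[Detection]) -> Detection:
    """Pick the candidate box whose detected concept best explains the query."""
    words = query.split()
    return max(detections, key=lambda d: grounding_score(words, d))


if __name__ == "__main__":
    dets = [Detection("person", (10, 10, 50, 120), 0.9),
            Detection("car", (60, 40, 200, 150), 0.8)]
    best = ground("the person on the left", dets)
    print(best.concept, best.box)   # -> person (10, 10, 50, 120)
```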
Related Material
[pdf]
[arXiv]
[video]
[bibtex]
@InProceedings{Yeh_2018_CVPR,
author = {Yeh, Raymond A. and Do, Minh N. and Schwing, Alexander G.},
title = {Unsupervised Textual Grounding: Linking Words to Image Concepts},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}