Love Thy Neighbors: Image Annotation by Exploiting Image Metadata

Justin Johnson, Lamberto Ballan, Li Fei-Fei; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 4624-4632

Abstract
Some images that are difficult to recognize on their own may become clearer in the context of a neighborhood of related images with similar social-network metadata. We build on this intuition to improve multilabel image annotation. Our model uses image metadata nonparametrically to generate neighborhoods of related images using Jaccard similarities, then uses a deep neural network to blend visual information from the image and its neighbors. Prior work typically models image metadata parametrically; in contrast, our nonparametric treatment allows our model to perform well even when the vocabulary of metadata changes between training and testing. We perform comprehensive experiments on the NUS-WIDE dataset, where we show that our model outperforms state-of-the-art methods for multilabel image annotation even when forced to generalize to new types of metadata.
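The nonparametric neighborhood step described above can be sketched in a few lines: rank candidate images by the Jaccard similarity of their metadata (e.g. tag sets) against the query's metadata, and keep the top-k as the neighborhood. This is a minimal illustrative sketch, not the authors' implementation; the function names and toy corpus are assumptions.

```python
def jaccard(a, b):
    # Jaccard similarity |a ∩ b| / |a ∪ b| between two tag sets.
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def top_k_neighbors(query_tags, corpus, k=3):
    # Rank corpus images by tag-set Jaccard similarity to the query
    # and return the identifiers of the k most similar ones.
    scored = sorted(corpus.items(),
                    key=lambda item: jaccard(query_tags, item[1]),
                    reverse=True)
    return [img_id for img_id, _ in scored[:k]]

# Toy metadata corpus (hypothetical image IDs and tags).
corpus = {
    "img1": {"beach", "sunset", "ocean"},
    "img2": {"dog", "park"},
    "img3": {"sunset", "sky", "clouds"},
}
print(top_k_neighbors({"sunset", "ocean", "sky"}, corpus, k=2))
```

In the full model, the visual features of these retrieved neighbors would then be pooled with the query image's features by the deep network described in the abstract.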

Related Material


[pdf]
[bibtex]
@InProceedings{Johnson_2015_ICCV,
author = {Johnson, Justin and Ballan, Lamberto and Fei-Fei, Li},
title = {Love Thy Neighbors: Image Annotation by Exploiting Image Metadata},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}