Spatio-Temporal Person Retrieval via Natural Language Queries
Masataka Yamaguchi, Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1453-1462
Abstract
In this paper, we address the problem of spatio-temporal person retrieval from videos using a natural language query, in which the goal is to output a tube (i.e., a sequence of bounding boxes) that encloses the person described by the query. For this problem, we introduce a novel dataset consisting of videos containing people, annotated with bounding boxes at one-second intervals and with five natural language descriptions per person. To retrieve the tube of the person described by a given natural language query, we design a model that combines methods for spatio-temporal human detection and multimodal retrieval. We conduct comprehensive experiments comparing a variety of tube and text representations and multimodal retrieval methods, present a strong baseline for this task, and demonstrate the efficacy of our tube representation and multimodal feature embedding technique. Finally, we demonstrate the versatility of our model by applying it to two other important tasks.
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Yamaguchi_2017_ICCV,
author = {Yamaguchi, Masataka and Saito, Kuniaki and Ushiku, Yoshitaka and Harada, Tatsuya},
title = {Spatio-Temporal Person Retrieval via Natural Language Queries},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}