Spatial-Aware Object Embeddings for Zero-Shot Localization and Classification of Actions
Pascal Mettes, Cees G. M. Snoek; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 4443-4452
Abstract
We aim for zero-shot localization and classification of human actions in video. Where traditional approaches rely on global attribute or object classification scores for their zero-shot knowledge transfer, our main contribution is a spatial-aware object embedding. To arrive at spatial awareness, we build our embedding on top of freely available actor and object detectors. Relevance of objects is determined in a word embedding space and further enforced with estimated spatial preferences. Besides local object awareness, we also embed global object awareness into our embedding to maximize actor and object interaction. Finally, we exploit the object positions and sizes in the spatial-aware embedding to demonstrate a new spatio-temporal action retrieval scenario with composite queries. Action localization and classification experiments on four contemporary action video datasets support our proposal. Apart from state-of-the-art results in the zero-shot localization and classification settings, our spatial-aware embedding is even competitive with recent supervised action localization alternatives.
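Below is a minimal sketch, not the authors' released code, of the core zero-shot idea described in the abstract: an actor box is scored by the objects detected around it, where each object contributes according to the word-embedding similarity between the (unseen) action name and the object class name, modulated by a simple spatial preference for objects near the actor. All names (word_vectors, object_detections, the Gaussian spatial prior) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def box_center(box):
    """Center (x, y) of a box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

def spatial_preference(actor_box, object_box, sigma=100.0):
    """Soft preference for objects close to the actor (Gaussian on center distance).
    The Gaussian form is an assumption for illustration."""
    d = np.linalg.norm(box_center(actor_box) - box_center(object_box))
    return float(np.exp(-(d ** 2) / (2.0 * sigma ** 2)))

def actor_score(action_name, actor_box, object_detections, word_vectors):
    """Zero-shot score for one actor box in one frame.

    object_detections: list of (class_name, confidence, box) from an off-the-shelf detector.
    word_vectors: dict mapping words to embedding vectors (e.g. pre-trained word2vec).
    """
    score = 0.0
    action_vec = word_vectors[action_name]
    for cls, conf, box in object_detections:
        if cls not in word_vectors:
            continue
        # Semantic relevance of the object class to the action, weighted by
        # detection confidence and by how close the object is to the actor.
        relevance = cosine(action_vec, word_vectors[cls])
        score += conf * relevance * spatial_preference(actor_box, box)
    return score
```

Summing such per-frame scores along an actor tube would give a zero-shot localization score without any action-labeled training video, which is the spirit of the knowledge transfer the abstract describes.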
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Mettes_2017_ICCV,
author = {Mettes, Pascal and Snoek, Cees G. M.},
title = {Spatial-Aware Object Embeddings for Zero-Shot Localization and Classification of Actions},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}