Seeing What You're Told: Sentence-Guided Activity Recognition In Video

Narayanaswamy Siddharth, Andrei Barbu, Jeffrey Mark Siskind; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 732-739

Abstract

We present a system that demonstrates how the compositional structure of events, in concert with the compositional structure of language, can interplay with the underlying focusing mechanisms in video action recognition, providing a medium for top-down and bottom-up integration as well as multi-modal integration between vision and language. We show how the roles played by participants (nouns), their characteristics (adjectives), the actions performed (verbs), the manner of such actions (adverbs), and changing spatial relations between participants (prepositions), in the form of whole-sentence descriptions mediated by a grammar, guide the activity-recognition process. Further, the utility and expressiveness of our framework are demonstrated by performing three separate tasks in the domain of multi-activity video, simply by leveraging the framework in different ways: sentence-guided focus of attention, generation of sentential description, and query-based search.

Related Material


[pdf]
[bibtex]
@InProceedings{Siddharth_2014_CVPR,
author = {Siddharth, Narayanaswamy and Barbu, Andrei and Siskind, Jeffrey Mark},
title = {Seeing What You're Told: Sentence-Guided Activity Recognition In Video},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2014}
}