Layout-Induced Video Representation for Recognizing Agent-in-Place Actions

Ruichi Yu, Hongcheng Wang, Ang Li, Jingxiao Zheng, Vlad I. Morariu, Larry S. Davis; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 1262-1272

Abstract


We address scene layout modeling for recognizing agent-in-place actions, i.e., actions associated with both the agents who perform them and the places where they occur, in the context of outdoor home surveillance. We introduce a novel representation that models the geometry and topology of scene layouts so that a network can generalize from the layouts observed in the training scenes to unseen scenes in the test set. This Layout-Induced Video Representation (LIVR) abstracts away low-level appearance variance and encodes geometric and topological relationships of places to explicitly model scene layout. LIVR partitions the semantic features of a scene into different places to force the network to learn generic place-based feature descriptions that are independent of specific scene layouts; it then dynamically aggregates features based on the connectivity of places in each specific scene to model its layout. We introduce a new Agent-in-Place Action (APA) dataset (pending legal review; to be released upon acceptance of this paper) to show that our method allows neural network models to generalize significantly better to unseen scenes.
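The sketch below illustrates, at a high level, the two mechanisms the abstract describes: pooling semantic features inside per-place masks to obtain layout-independent place descriptors, and aggregating those descriptors with a scene-specific place-connectivity matrix. It is a minimal PyTorch sketch under assumed inputs (the names place_masks and connectivity are illustrative), not the authors' implementation, which involves additional components not detailed in the abstract.

    # Minimal sketch (not the authors' released code) of place-based feature
    # partitioning and connectivity-driven aggregation as described in the abstract.
    import torch

    def place_pool(features, place_masks, eps=1e-6):
        """Average-pool feature maps inside each place mask.

        features:    (C, H, W) semantic feature maps for one frame/clip.
        place_masks: (P, H, W) binary masks, one per place (street, lawn, porch, ...).
        returns:     (P, C) one descriptor per place, independent of scene layout.
        """
        masks = place_masks.unsqueeze(1).float()        # (P, 1, H, W)
        feats = features.unsqueeze(0)                   # (1, C, H, W)
        summed = (feats * masks).sum(dim=(2, 3))        # (P, C)
        area = masks.sum(dim=(2, 3)).clamp_min(eps)     # (P, 1)
        return summed / area

    def aggregate_by_connectivity(place_descriptors, connectivity):
        """Mix each place's descriptor with those of its connected places.

        place_descriptors: (P, C)
        connectivity:      (P, P) binary adjacency of places in this scene
                           (1 if two places are connected; diagonal = 1).
        returns:           (P, C) layout-aware descriptors.
        """
        weights = connectivity.float()
        weights = weights / weights.sum(dim=1, keepdim=True).clamp_min(1e-6)
        return weights @ place_descriptors

    if __name__ == "__main__":
        C, H, W, P = 64, 32, 32, 3
        features = torch.randn(C, H, W)
        place_masks = torch.rand(P, H, W) > 0.5
        # Toy layout: place 0 connects to place 1; place 2 is isolated.
        connectivity = torch.tensor([[1, 1, 0],
                                     [1, 1, 0],
                                     [0, 0, 1]])
        desc = place_pool(features, place_masks)
        layout_aware = aggregate_by_connectivity(desc, connectivity)
        print(layout_aware.shape)  # torch.Size([3, 64])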

Related Material


[bibtex]
@InProceedings{Yu_2019_ICCV,
author = {Yu, Ruichi and Wang, Hongcheng and Li, Ang and Zheng, Jingxiao and Morariu, Vlad I. and Davis, Larry S.},
title = {Layout-Induced Video Representation for Recognizing Agent-in-Place Actions},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}