Recurrent Models for Situation Recognition

Arun Mallya, Svetlana Lazebnik; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 455-463

Abstract


This work proposes Recurrent Neural Network (RNN) models to predict structured 'image situations': actions together with the noun entities that fill the semantic roles associated with each action. In contrast to prior work that relies on Conditional Random Fields (CRFs), we use a specialized action prediction network followed by an RNN for noun prediction. Our system obtains state-of-the-art accuracy on the recent, challenging imSitu dataset, beating CRF-based models, including ones trained with additional data. Further, we show that specialized features learned from situation prediction can be transferred to the task of image captioning to more accurately describe human-object interactions.
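
To make the two-stage pipeline described above concrete, here is a minimal PyTorch-style sketch: a small action classifier over pooled CNN features, followed by an RNN that emits one noun prediction per semantic role of the predicted action. The class names, layer sizes, vocabulary sizes, and the choice of an LSTM are illustrative assumptions for exposition, not the exact architecture or hyperparameters of the paper.

import torch
import torch.nn as nn


class ActionClassifier(nn.Module):
    """Predicts the action (verb) from pooled CNN image features.
    Sizes are placeholders, not the paper's settings."""

    def __init__(self, feat_dim=2048, num_actions=504):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_actions)

    def forward(self, img_feat):
        # img_feat: (batch, feat_dim) global features from a CNN backbone
        return self.fc(img_feat)


class RoleNounRNN(nn.Module):
    """Predicts the noun filling each semantic role of the chosen action.
    Illustrative sketch only; the LSTM and dimensions are assumptions."""

    def __init__(self, feat_dim=2048, num_actions=504, act_dim=128,
                 hidden=512, num_nouns=11000):
        super().__init__()
        self.act_emb = nn.Embedding(num_actions, act_dim)
        self.rnn = nn.LSTM(feat_dim + act_dim, hidden, batch_first=True)
        self.noun_out = nn.Linear(hidden, num_nouns)

    def forward(self, img_feat, action_ids, num_roles):
        # Condition every step on the image features and the predicted action,
        # unrolling one RNN step per semantic role of that action.
        act = self.act_emb(action_ids)                       # (batch, act_dim)
        step = torch.cat([img_feat, act], dim=1)             # (batch, feat_dim + act_dim)
        steps = step.unsqueeze(1).repeat(1, num_roles, 1)    # (batch, num_roles, ...)
        out, _ = self.rnn(steps)
        return self.noun_out(out)                            # (batch, num_roles, num_nouns)


# Usage: classify the action, then fill its roles sequentially.
img_feat = torch.randn(4, 2048)                  # stand-in for CNN features
action_ids = ActionClassifier()(img_feat).argmax(dim=1)
noun_logits = RoleNounRNN()(img_feat, action_ids, num_roles=3)

Conditioning every role prediction on the image and the predicted action while sharing a recurrent state across roles is one plausible way to realize the action-then-nouns factorization described in the abstract; it is shown here only as a sketch, not as the authors' exact design.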

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Mallya_2017_ICCV,
  author    = {Mallya, Arun and Lazebnik, Svetlana},
  title     = {Recurrent Models for Situation Recognition},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  month     = {Oct},
  year      = {2017}
}