What Would You Expect? Anticipating Egocentric Actions With Rolling-Unrolling LSTMs and Modality Attention

Antonino Furnari, Giovanni Maria Farinella; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 6252-6261

Abstract


Egocentric action anticipation consists of understanding which objects the camera wearer will interact with in the near future and which actions they will perform. We tackle the problem by proposing an architecture able to anticipate actions at multiple temporal scales using two LSTMs to 1) summarize the past, and 2) formulate predictions about the future. The input video is processed considering three complementary modalities: appearance (RGB), motion (optical flow) and objects (object-based features). Modality-specific predictions are fused using a novel Modality ATTention (MATT) mechanism which learns to weigh modalities in an adaptive fashion. Extensive evaluations on two large-scale benchmark datasets show that our method outperforms prior art by up to +7% on the challenging EPIC-Kitchens dataset, including more than 2500 actions, and generalizes to EGTEA Gaze+. Our approach is also shown to generalize to the tasks of early action recognition and action recognition. Our method is ranked first on the public leaderboard of the EPIC-Kitchens egocentric action anticipation challenge 2019. Please see the project web page for code and additional details: http://iplab.dmi.unict.it/rulstm.
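The fusion step described above can be sketched as follows. This is a minimal, hypothetical simplification: in the actual MATT mechanism the attention logits are produced by a learned network from the LSTM hidden states, whereas here they are simply passed in as a vector, and the function names (`softmax`, `matt_fuse`) are illustrative, not from the authors' code.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(x - x.max())
    return e / e.sum()

def matt_fuse(modality_scores, attention_logits):
    """Fuse per-modality action scores with adaptive attention weights.

    modality_scores: list of arrays, one score vector over actions per
        modality (e.g. RGB, optical flow, objects).
    attention_logits: one logit per modality; in the paper these would be
        predicted from the video, here they are given directly.
    """
    weights = softmax(attention_logits)
    return sum(w * s for w, s in zip(weights, modality_scores))

# Example: three modalities, two candidate actions, equal attention logits
# yield the plain average of the modality-specific score vectors.
rgb = np.array([1.0, 2.0])
flow = np.array([3.0, 4.0])
obj = np.array([5.0, 6.0])
fused = matt_fuse([rgb, flow, obj], np.array([0.0, 0.0, 0.0]))
```

With equal logits the weights are uniform and `fused` is the mean of the three score vectors; skewing the logits shifts the prediction toward the more reliable modality, which is the adaptive weighting the abstract refers to.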

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Furnari_2019_ICCV,
author = {Furnari, Antonino and Farinella, Giovanni Maria},
title = {What Would You Expect? Anticipating Egocentric Actions With Rolling-Unrolling LSTMs and Modality Attention},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}