Anticipative Video Transformer

Rohit Girdhar, Kristen Grauman; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 13505-13515

Abstract

We propose Anticipative Video Transformer (AVT), an end-to-end attention-based video modeling architecture that attends to the previously observed video in order to anticipate future actions. We train the model jointly to predict the next action in a video sequence, while also learning frame feature encoders that are predictive of successive future frames' features. Compared to existing temporal aggregation strategies, AVT has the advantage of maintaining the sequential progression of observed actions while still capturing long-range dependencies, both critical for the anticipation task. Through extensive experiments, we show that AVT obtains the best reported performance on four popular action anticipation benchmarks: EpicKitchens-55, EpicKitchens-100, EGTEA Gaze+, and 50-Salads; and it wins first place in the EpicKitchens-100 CVPR'21 challenge.
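The joint objective described above, next-action classification combined with prediction of future frame features from causally attended past frames, can be sketched as follows. This is a toy NumPy illustration under stated assumptions (single-head attention, random weights, made-up dimensions), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, A = 8, 16, 5  # observed frames, feature dim, number of action classes (illustrative)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def causal_attention(feats):
    """Single-head self-attention where each frame attends only to itself
    and earlier frames, preserving the sequential progression of actions."""
    Wq = rng.standard_normal((D, D)) / np.sqrt(D)
    Wk = rng.standard_normal((D, D)) / np.sqrt(D)
    Wv = rng.standard_normal((D, D)) / np.sqrt(D)
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv
    scores = q @ k.T / np.sqrt(D)
    # Mask out future positions (upper triangle) so attention is causal.
    scores[np.triu(np.ones((T, T)), k=1).astype(bool)] = -1e9
    return softmax(scores) @ v

frames = rng.standard_normal((T, D))   # stand-in for per-frame encoder features
hidden = causal_attention(frames)      # anticipative representation per frame

# Head 1: classify the *next* action from each frame's representation.
W_cls = rng.standard_normal((D, A)) / np.sqrt(D)
next_action_logits = hidden @ W_cls    # shape (T, A)

# Head 2: regress the *next* frame's features (self-supervised signal).
W_feat = rng.standard_normal((D, D)) / np.sqrt(D)
pred_next = hidden[:-1] @ W_feat       # predictions for frames 1..T-1
feat_loss = np.mean((pred_next - frames[1:]) ** 2)

# Joint loss: next-action cross-entropy plus future-feature regression.
labels = rng.integers(0, A, size=T)    # dummy next-action labels
probs = softmax(next_action_logits)
cls_loss = -np.mean(np.log(probs[np.arange(T), labels] + 1e-9))
loss = cls_loss + feat_loss
```

The causal mask is what lets one model serve both heads: every position's representation depends only on the past, so the same sequence yields a next-action prediction and a future-feature target at every timestep.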

Related Material

@InProceedings{Girdhar_2021_ICCV,
    author    = {Girdhar, Rohit and Grauman, Kristen},
    title     = {Anticipative Video Transformer},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {13505-13515}
}