Video Captioning of Future Frames

Mehrdad Hosseinzadeh, Yang Wang; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 980-989

Abstract


Being able to anticipate and describe what may happen in the future is a fundamental human ability. Given a short clip of a scene in which "a person is sitting behind a piano", humans can describe what will likely happen afterward, e.g. "the person is playing the piano". In this paper, we consider the task of captioning future events, which assesses the performance of intelligent models on anticipation and video description generation simultaneously. More specifically, given only the frames of an occurring event (activity), the goal is to generate a sentence describing the most likely next event in the video. We tackle the problem by first predicting the next event in the semantic space of convolutional features, then fusing contextual information into those features, and finally feeding them to a captioning module. Departing from recurrent units allows us to train the network in parallel. We compare the proposed method with a baseline and an oracle method on the ActivityNet Captions dataset. Experimental results demonstrate that the proposed method outperforms the baseline and is comparable to the oracle method. We perform additional ablation studies to further analyze our approach.
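
The abstract outlines a three-stage pipeline: anticipate the next event's convolutional features, fuse contextual information into them, and decode a caption with a non-recurrent (hence parallel-trainable) module. The PyTorch sketch below is only a hypothetical illustration of that data flow, not the authors' implementation; the feature dimension, the MLP anticipator, the mean-pooling over frames, and the two-layer transformer decoder are all assumptions made for the example.

import torch
import torch.nn as nn

# Hypothetical sizes, assumed for the example only.
FEAT_DIM, VOCAB_SIZE, NUM_HEADS = 512, 10000, 8

class FutureCaptioner(nn.Module):
    """Sketch of the three-stage pipeline described in the abstract."""

    def __init__(self):
        super().__init__()
        # Stage 1: predict next-event features from observed-event features.
        self.anticipator = nn.Sequential(
            nn.Linear(FEAT_DIM, FEAT_DIM), nn.ReLU(),
            nn.Linear(FEAT_DIM, FEAT_DIM),
        )
        # Stage 2: fuse contextual information into the predicted features.
        self.fuse = nn.Linear(2 * FEAT_DIM, FEAT_DIM)
        # Stage 3: non-recurrent captioning module (transformer decoder);
        # all caption positions are processed at once, so training is parallel.
        # Positional encodings are omitted here for brevity.
        self.embed = nn.Embedding(VOCAB_SIZE, FEAT_DIM)
        layer = nn.TransformerDecoderLayer(d_model=FEAT_DIM, nhead=NUM_HEADS,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.to_vocab = nn.Linear(FEAT_DIM, VOCAB_SIZE)

    def forward(self, obs_feats, context, captions):
        # obs_feats: (B, T, D) convolutional features of the observed event
        # context:   (B, D)    contextual summary of the video so far
        # captions:  (B, L)    ground-truth token ids (teacher forcing)
        future = self.anticipator(obs_feats.mean(dim=1))        # (B, D)
        memory = self.fuse(torch.cat([future, context], -1))    # (B, D)
        tgt = self.embed(captions)                              # (B, L, D)
        L = captions.size(1)
        causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        hidden = self.decoder(tgt, memory.unsqueeze(1), tgt_mask=causal)
        return self.to_vocab(hidden)                            # (B, L, V)

# Usage: logits for a batch of 2 clips with 16 observed frames each.
model = FutureCaptioner()
logits = model(torch.randn(2, 16, FEAT_DIM),
               torch.randn(2, FEAT_DIM),
               torch.randint(0, VOCAB_SIZE, (2, 20)))
print(logits.shape)  # torch.Size([2, 20, 10000])

Because the decoder attends over all caption positions at once under a causal mask (teacher forcing), the whole target sentence is trained in a single pass, which is the parallelism the abstract attributes to dropping recurrent units.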

Related Material


[pdf]
[bibtex]
@InProceedings{Hosseinzadeh_2021_WACV,
    author    = {Hosseinzadeh, Mehrdad and Wang, Yang},
    title     = {Video Captioning of Future Frames},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2021},
    pages     = {980-989}
}