MMTF: Multi-Modal Temporal Fusion for Commonsense Video Question Answering

Mobeen Ahmad, Geonwoo Park, Dongchan Park, Sanguk Park; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 4657-4662

Abstract

Video question answering (VideoQA) is a challenging task that requires understanding the video and the question in a shared context. It becomes even harder when questions involve reasoning, such as predicting future events or explaining counterfactual ones, because answering them requires knowledge that is not explicitly shown in the video. Existing methods perform coarse-grained fusion of video and language features and ignore temporal information. To address this, we propose a novel vision-text fusion module that learns the temporal context of the video and question. Our module expands question tokens along the video's temporal axis and fuses them with video features to generate new representations with both local and global context. We evaluate our method on four VideoQA datasets: MSVD-QA, NExT-QA, Causal-VidQA, and AGQA-2.0.
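
The paper's exact architecture is not reproduced on this page, but the abstract's description admits a straightforward reading: each video frame is paired with the question (the question "expanded" along the temporal axis) to build local, per-frame context, and the fused frames then interact over time to build global context. The following is a minimal PyTorch sketch of that reading; the module name TemporalFusionSketch, the dimensions, and the choice of attention layers are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class TemporalFusionSketch(nn.Module):
    # Illustrative sketch only: layer choices and sizes are assumptions,
    # not the MMTF authors' released implementation.

    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        # Local context: each video frame attends to all question tokens,
        # i.e. the question is broadcast ("expanded") along the temporal axis.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Global context: fused frames attend to one another over time.
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, video: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        # video:    (B, T, D) per-frame/clip features
        # question: (B, L, D) token-level question features
        local, _ = self.cross_attn(query=video, key=question, value=question)
        local = self.norm1(video + local)                 # per-frame (local) context
        ctx, _ = self.temporal_attn(local, local, local)
        return self.norm2(local + ctx)                    # whole-video (global) context


if __name__ == "__main__":
    model = TemporalFusionSketch(dim=512)
    v = torch.randn(2, 16, 512)   # 16 frames
    q = torch.randn(2, 12, 512)   # 12 question tokens
    print(model(v, q).shape)      # torch.Size([2, 16, 512])

Because the question is fused at every time step rather than once at the clip level, the output keeps one question-conditioned feature per frame, which is what lets the module preserve the temporal information that coarse-grained fusion discards.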

Related Material

@InProceedings{Ahmad_2023_ICCV,
    author    = {Ahmad, Mobeen and Park, Geonwoo and Park, Dongchan and Park, Sanguk},
    title     = {MMTF: Multi-Modal Temporal Fusion for Commonsense Video Question Answering},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2023},
    pages     = {4657-4662}
}