Discovering Spatio-Temporal Rationales for Video Question Answering

Yicong Li, Junbin Xiao, Chun Feng, Xiang Wang, Tat-Seng Chua; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 13869-13878


This paper strives to solve complex video question answering (VideoQA), which features long videos containing multiple objects and events at different times. To tackle the challenge, we highlight the importance of identifying question-critical temporal moments and spatial objects from the vast amount of video content. Towards this, we propose a Spatio-Temporal Rationalizer (STR), a differentiable selection module that adaptively collects question-critical moments and objects via cross-modal interaction. The discovered video moments and objects then serve as grounded rationales to support answer reasoning. Based on STR, we further propose TranSTR, a Transformer-style neural network architecture that takes STR as its core and additionally features a novel answer interaction mechanism that coordinates with STR for answer decoding. Experiments on four datasets show that TranSTR achieves a new state of the art (SoTA). In particular, on NExT-QA and Causal-VidQA, which feature complex VideoQA, it surpasses the previous SoTA by significant margins of 5.8% and 6.8%, respectively. We further conduct extensive studies to verify the importance of STR as well as the proposed answer interaction mechanism. With the success of TranSTR and our comprehensive analysis, we hope this work can spark more future efforts in complex VideoQA. Our results are fully reproducible at
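The abstract gives no implementation details, but the core idea behind STR, scoring each video moment against the question and keeping a soft, differentiable distribution over the top-scoring candidates, can be illustrated with a minimal sketch. Everything below (the function names, the dot-product scorer, the temperature, the feature shapes) is an illustrative assumption, not the authors' actual module.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def select_moments(frame_feats, question_feat, k=2, temperature=0.5):
    """Hypothetical sketch of question-conditioned moment selection.

    Each frame feature is scored against the question feature via a
    cross-modal dot product; the scores are turned into a soft
    (softmax) distribution, and the top-k frames are returned with
    their soft weights. In a real differentiable implementation the
    weights, rather than hard indices, would carry the gradient.
    """
    scores = [sum(f * q for f, q in zip(frame, question_feat))
              for frame in frame_feats]
    weights = softmax([s / temperature for s in scores])
    ranked = sorted(range(len(weights)),
                    key=lambda i: weights[i], reverse=True)
    top = sorted(ranked[:k])  # keep temporal order of selected moments
    return [(i, weights[i]) for i in top]

# Toy usage: two of three frames align with the question vector.
frames = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
question = [1.0, 0.0]
selected = select_moments(frames, question, k=2)
```

The same scoring pattern would apply per-object within each selected moment; the paper's actual module additionally handles differentiability of the top-k step itself, which this toy version sidesteps by returning soft weights.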

Related Material

[pdf] [arXiv]
@InProceedings{Li_2023_ICCV,
    author    = {Li, Yicong and Xiao, Junbin and Feng, Chun and Wang, Xiang and Chua, Tat-Seng},
    title     = {Discovering Spatio-Temporal Rationales for Video Question Answering},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {13869-13878}
}