Dense but Efficient VideoQA for Intricate Compositional Reasoning

Jihyeon Lee, Wooyoung Kang, Eun-Sol Kim; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 1114-1123

Abstract


Most conventional video question answering (VideoQA) datasets are known to consist of easy questions requiring only simple reasoning. However, long videos inevitably contain complex, compositional semantic structures along the spatio-temporal axis, which requires a model to understand the compositional structures inherent in the videos. In this paper, we propose a new compositional VideoQA method based on a transformer architecture with a deformable attention mechanism to address complex VideoQA tasks. Deformable attention is introduced to sample a subset of informative visual features from the dense visual feature map, covering a temporally long range of frames efficiently. Furthermore, the dependency structure of the complex question sentences is combined with the language embeddings so that the relations among question words are readily captured. Extensive experiments and ablation studies show that the proposed dense but efficient model outperforms other baselines.
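The sampling mechanism described above can be illustrated with a short sketch. Below is a minimal, illustrative PyTorch implementation of deformable attention along the temporal axis, not the authors' actual model: each query token predicts a handful of fractional frame offsets around a reference position and attends only to those K sampled locations rather than all T frames. All names and shapes here (TemporalDeformableAttention, num_points, the uniformly spaced reference positions) are assumptions made for this sketch.

import torch
import torch.nn as nn

class TemporalDeformableAttention(nn.Module):
    """Illustrative sketch: each query attends to K sampled frames, not all T."""

    def __init__(self, dim: int, num_points: int = 4):
        super().__init__()
        self.num_points = num_points
        self.offset_proj = nn.Linear(dim, num_points)   # fractional frame offsets
        self.weight_proj = nn.Linear(dim, num_points)   # per-point attention logits
        self.value_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, queries, features):
        # queries:  (B, Q, dim)  e.g. question-conditioned query tokens
        # features: (B, T, dim)  dense per-frame visual features
        B, T, _ = features.shape
        Q = queries.shape[1]
        values = self.value_proj(features)              # (B, T, dim)

        # Reference positions spread uniformly over the clip (an assumption).
        ref = torch.linspace(0, T - 1, Q, device=features.device)      # (Q,)
        offsets = self.offset_proj(queries)                            # (B, Q, K)
        locs = (ref[None, :, None] + offsets).clamp(0, T - 1)          # (B, Q, K)

        # Linearly interpolate between the two neighbouring frames.
        lo = locs.floor().long()                                       # (B, Q, K)
        hi = lo.clamp(max=T - 2) + 1
        frac = (locs - lo.float()).unsqueeze(-1)                       # (B, Q, K, 1)

        def gather(idx):
            flat = idx.reshape(B, -1)                                  # (B, Q*K)
            out = values.gather(
                1, flat.unsqueeze(-1).expand(-1, -1, values.size(-1)))
            return out.reshape(B, Q, self.num_points, -1)

        sampled = (1 - frac) * gather(lo) + frac * gather(hi)          # (B, Q, K, dim)
        weights = self.weight_proj(queries).softmax(-1).unsqueeze(-1)  # (B, Q, K, 1)
        return self.out_proj((weights * sampled).sum(dim=2))           # (B, Q, dim)

With K sampling points per query, the attention cost scales with Q x K rather than Q x T, which is where the claimed efficiency over dense attention on temporally long clips would come from.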

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Lee_2023_WACV,
    author    = {Lee, Jihyeon and Kang, Wooyoung and Kim, Eun-Sol},
    title     = {Dense but Efficient VideoQA for Intricate Compositional Reasoning},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {1114-1123}
}