NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions

Junbin Xiao, Xindi Shang, Angela Yao, Tat-Seng Chua; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 9777-9786

Abstract


We introduce NExT-QA, a rigorously designed video question answering (VideoQA) benchmark that advances video understanding from describing to explaining temporal actions. Based on the dataset, we set up multiple-choice and open-ended QA tasks targeting causal action reasoning, temporal action reasoning, and common scene comprehension. Through extensive analysis of baselines and established VideoQA techniques, we find that top-performing methods excel at shallow scene descriptions but are weak in causal and temporal action reasoning. Furthermore, models that are effective on multiple-choice QA still struggle to generalize their answers when adapted to open-ended QA. This casts doubt on the ability of these models to reason and highlights room for improvement. With detailed results for different question types and heuristic observations for future work, we hope NExT-QA will guide the next generation of VQA research to go beyond superficial description towards a deeper understanding of videos.

Related Material


[bibtex]
@InProceedings{Xiao_2021_CVPR,
  author    = {Xiao, Junbin and Shang, Xindi and Yao, Angela and Chua, Tat-Seng},
  title     = {NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021},
  pages     = {9777-9786}
}