A Survey of Video Datasets for Grounded Event Understanding
Abstract
While existing video benchmarks largely consider specialized downstream tasks like retrieval or question-answering (QA), contemporary multimodal AI systems must be capable of well-rounded common-sense reasoning akin to human visual understanding. A critical component of human temporal-visual perception is our ability to identify and cognitively model "things happening", or events. Historically, video benchmark tasks have implicitly tested for this ability (e.g., video captioning, in which models describe visual events with natural language), but they do not consider video event understanding as a task in itself. Recent work has begun to explore video analogues to textual event extraction, but it consists of competing task definitions and datasets limited to highly specific event types. Therefore, while there is a rich domain of event-centric video research spanning the past 10+ years, it is unclear how video event understanding should be framed and what resources we have to study it. In this paper, we survey 105 video datasets that require event understanding capability, consider how they contribute to the study of robust event understanding in video, and assess proposed video event extraction tasks in the context of this body of research. We propose suggestions informed by this survey for dataset curation and task framing, with an emphasis on the uniquely temporal nature of video events and ambiguity in visual content.
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Sanders_2024_CVPR,
  author    = {Sanders, Kate and Van Durme, Benjamin},
  title     = {A Survey of Video Datasets for Grounded Event Understanding},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {7314-7327}
}