360+x: A Panoptic Multi-modal Scene Understanding Dataset

Hao Chen, Yuqi Hou, Chenyuan Qu, Irene Testini, Xiaohan Hong, Jianbo Jiao; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 19373-19382

Abstract


Human perception of the world is shaped by a multitude of viewpoints and modalities. While many existing datasets focus on scene understanding from a certain perspective (e.g., egocentric or third-person views), our dataset offers a panoptic perspective (i.e., multiple viewpoints with multiple data modalities). Specifically, we encapsulate third-person panoramic and front views as well as egocentric monocular/binocular views, with rich modalities including video, multi-channel audio, directional binaural delay, location data, and textual scene descriptions within each captured scene, presenting a comprehensive observation of the world. To the best of our knowledge, this is the first database that covers multiple viewpoints with multiple data modalities to mimic how daily information is accessed in the real world. Through our benchmark analysis, we present five different scene understanding tasks on the proposed 360+x dataset to evaluate the impact and benefit of each data modality and perspective in panoptic scene understanding. We hope this unique dataset can broaden the scope of comprehensive scene understanding and encourage the community to approach these problems from more diverse perspectives.
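To make the viewpoint/modality composition concrete, below is a minimal sketch of what a single 360+x scene sample might look like as a data structure. All field names, shapes, and types here are hypothetical illustrations based only on the viewpoints and modalities the abstract lists; they are not the dataset's actual schema.

```python
from dataclasses import dataclass
from typing import Optional, Tuple
import numpy as np

@dataclass
class PanopticSample:
    """One captured scene, pairing multiple viewpoints with multiple
    modalities as described in the abstract. Field names and array
    shapes are illustrative assumptions, not the released format."""
    # Third-person viewpoints
    panoramic_video: np.ndarray         # (T, H, W, 3) 360-degree frames
    front_view_video: np.ndarray        # (T, H, W, 3) standard field of view
    # Egocentric viewpoints (may be monocular, binocular, or both)
    ego_mono_video: Optional[np.ndarray]       # (T, H, W, 3)
    ego_binocular_video: Optional[np.ndarray]  # (T, 2, H, W, 3) left/right
    # Accompanying modalities
    multichannel_audio: np.ndarray      # (C, S) C channels, S audio samples
    binaural_delay: np.ndarray          # (T,) directional binaural delay signal
    location: Tuple[float, float]       # e.g. (latitude, longitude)
    description: str                    # textual scene description
```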

Related Material


BibTeX:

@InProceedings{Chen_2024_CVPR,
    author    = {Chen, Hao and Hou, Yuqi and Qu, Chenyuan and Testini, Irene and Hong, Xiaohan and Jiao, Jianbo},
    title     = {360+x: A Panoptic Multi-modal Scene Understanding Dataset},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {19373-19382}
}