PanoContext-Former: Panoramic Total Scene Understanding with a Transformer

Yuan Dong, Chuan Fang, Liefeng Bo, Zilong Dong, Ping Tan; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 28087-28097

Abstract


Panoramic images enable a deeper understanding and more holistic perception of the 360° surrounding environment, naturally encoding richer scene context than standard perspective images. Previous work has devoted considerable effort to solving the scene understanding task with hybrid solutions based on 2D-3D geometric reasoning, in which each sub-task is processed separately and few correlations among them are explored. In this paper, we propose a fully 3D method for holistic indoor scene understanding that simultaneously recovers objects' shapes, oriented bounding boxes, and the 3D room layout from a single panorama. To maximize the exploitation of the rich context information, we design a transformer-based context module that predicts the representation of, and relationships among, the components of the scene. In addition, we introduce a new dataset for scene understanding, including photo-realistic panoramas, high-fidelity depth images, accurately annotated room layouts, and oriented object bounding boxes and shapes. Experiments on the synthetic and new datasets demonstrate that our method outperforms previous panoramic scene understanding methods in terms of both layout estimation and 3D object detection.
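The abstract describes a transformer-based context module that jointly reasons over object and layout representations, but gives no implementation details. The sketch below is purely illustrative of that idea under assumed choices: the `ContextTransformer` class name, all dimensions, head/layer counts, and output parameterizations (box, shape code, layout) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: every dimension, head count, and output size below
# is an assumption for demonstration, not the authors' actual architecture.
import torch
import torch.nn as nn


class ContextTransformer(nn.Module):
    """Hypothetical context module: self-attention over per-object embeddings
    plus a learnable layout token, followed by task-specific heads."""

    def __init__(self, embed_dim=256, num_heads=8, num_layers=4):
        super().__init__()
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Learnable token that aggregates scene-level context for the room layout.
        self.layout_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        # Placeholder prediction heads (output sizes are assumptions).
        self.box_head = nn.Linear(embed_dim, 7)      # e.g. centroid, size, yaw
        self.shape_head = nn.Linear(embed_dim, 128)  # e.g. latent shape code
        self.layout_head = nn.Linear(embed_dim, 12)  # e.g. layout parameters

    def forward(self, object_embeddings):
        # object_embeddings: (batch, num_objects, embed_dim)
        b = object_embeddings.size(0)
        layout = self.layout_token.expand(b, -1, -1)
        tokens = torch.cat([layout, object_embeddings], dim=1)
        context = self.encoder(tokens)  # attention mixes object and layout context
        layout_feat, object_feat = context[:, 0], context[:, 1:]
        return {
            "boxes": self.box_head(object_feat),
            "shapes": self.shape_head(object_feat),
            "layout": self.layout_head(layout_feat),
        }


if __name__ == "__main__":
    model = ContextTransformer()
    out = model(torch.randn(2, 10, 256))  # 2 scenes, 10 detected objects each
    print({k: v.shape for k, v in out.items()})
```

The design choice illustrated here is that a shared attention stack lets every object prediction condition on all other objects and on the layout token, which is one plausible way to realize the "relationships among each component of the scene" mentioned in the abstract.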

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Dong_2024_CVPR,
    author    = {Dong, Yuan and Fang, Chuan and Bo, Liefeng and Dong, Zilong and Tan, Ping},
    title     = {PanoContext-Former: Panoramic Total Scene Understanding with a Transformer},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {28087-28097}
}