360-Indoor: Towards Learning Real-World Objects in 360° Indoor Equirectangular Images

Shih-Han Chou, Cheng Sun, Wen-Yen Chang, Wan-Ting Hsu, Min Sun, Jianlong Fu; The IEEE Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 845-853

Abstract


While several object detection datasets are widely used, current computer vision algorithms remain limited to conventional images. Such images confine our view to a restricted region, whereas 360° images capture the full surrounding scene. In this paper, our goal is to provide a standard dataset to facilitate research in the vision and machine learning communities on the 360° domain. To that end, we present a real-world 360° panoramic object detection dataset, 360-Indoor, a new benchmark for visual object detection and class recognition in 360° indoor images. It was built by gathering images of complex indoor scenes containing common objects and densely annotating bounding fields-of-view. In addition, 360-Indoor has several distinct properties: (1) the largest number of categories (37 labels in total); (2) the most complete annotations on average (27 bounding boxes per image). The selected 37 object categories are all common in indoor scenes. With around 3k images and 90k labels in total, 360-Indoor is the largest dataset for detection in 360° images. Finally, extensive experiments with state-of-the-art methods for both classification and detection are provided. We will release this dataset in the near future.
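Annotations on equirectangular images are typically expressed in spherical coordinates (e.g., a bounding field-of-view given by a center longitude/latitude and angular extents) rather than raw pixels. As a minimal sketch of how such a spherical center maps onto equirectangular pixel coordinates, the following illustrative function (names and conventions are our own assumptions, not the paper's annotation format) applies the standard equirectangular projection:

```python
def sphere_to_equirect_pixel(lon_deg, lat_deg, img_w, img_h):
    """Map a spherical direction to equirectangular pixel coordinates.

    Assumed conventions (illustrative, not from the paper):
    - longitude in [-180, 180] degrees, with -180 mapping to x = 0
    - latitude in [-90, 90] degrees, with +90 (zenith) mapping to y = 0
    The equirectangular projection is linear in both angles.
    """
    x = (lon_deg + 180.0) / 360.0 * img_w
    y = (90.0 - lat_deg) / 180.0 * img_h
    return x, y


# The image center (lon=0, lat=0) lands at the middle of the canvas.
px, py = sphere_to_equirect_pixel(0.0, 0.0, 1920, 960)
```

A full bounding field-of-view would additionally carry angular width and height around this center; unlike axis-aligned boxes in conventional images, its pixel footprint is distorted near the poles, which is one reason 360° detection needs dedicated benchmarks.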

Related Material


[pdf]
[bibtex]
@InProceedings{Chou_2020_WACV,
author = {Chou, Shih-Han and Sun, Cheng and Chang, Wen-Yen and Hsu, Wan-Ting and Sun, Min and Fu, Jianlong},
title = {360-Indoor: Towards Learning Real-World Objects in 360° Indoor Equirectangular Images},
booktitle = {The IEEE Winter Conference on Applications of Computer Vision (WACV)},
month = {March},
year = {2020}
}