V-MIND: Building Versatile Monocular Indoor 3D Detector with Diverse 2D Annotations

Jin-Cheng Jhang, Tao Tu, Fu-En Wang, Ke Zhang, Min Sun, Cheng-Hao Kuo; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 9559-9568

Abstract


The field of indoor monocular 3D object detection is gaining significant attention, fueled by the increasing demand in VR/AR and robotic applications. However, its advancement is impeded by the limited availability and diversity of 3D training data, owing to the labor-intensive nature of 3D data collection and annotation processes. In this paper, we present V-MIND (Versatile Monocular INdoor Detector), which enhances the performance of indoor 3D detectors across a diverse set of object classes by harnessing publicly available large-scale 2D datasets. By leveraging well-established monocular depth estimation techniques and camera intrinsic predictors, we can generate 3D training data by converting large-scale 2D images into 3D point clouds and subsequently deriving pseudo 3D bounding boxes. To mitigate distance errors inherent in the converted point clouds, we introduce a novel 3D self-calibration loss for refining the pseudo 3D bounding boxes during training. Additionally, we propose a novel ambiguity loss to address the ambiguity that arises when introducing new classes from 2D datasets. Finally, through joint training with existing 3D datasets and pseudo 3D bounding boxes derived from 2D datasets, V-MIND achieves state-of-the-art object detection performance across a wide range of classes on the Omni3D indoor dataset.
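The core 2D-to-3D conversion step the abstract describes rests on standard pinhole back-projection: given a per-pixel depth map from a monocular depth estimator and an intrinsic matrix from a camera intrinsic predictor, each pixel is lifted to a 3D point in the camera frame. The sketch below illustrates only this generic geometry; the function name, array shapes, and the dummy depth/intrinsics values are our own assumptions, not the paper's implementation, and V-MIND's actual pipeline (including pseudo-box derivation and the self-calibration and ambiguity losses) is defined in the paper itself.

```python
import numpy as np

def unproject_to_point_cloud(depth, K):
    """Back-project a per-pixel depth map into a 3D point cloud.

    depth: (H, W) array of metric depths (e.g. from a monocular
           depth estimator).
    K:     (3, 3) camera intrinsic matrix (e.g. from an intrinsic
           predictor).
    Returns an (H*W, 3) array of points in the camera frame.
    """
    H, W = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    # Pixel grid: u runs along image width, v along image height.
    u, v = np.meshgrid(np.arange(W), np.arange(H))

    # Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)


if __name__ == "__main__":
    # Dummy 4x4 depth map and fabricated intrinsics, for illustration only.
    depth = np.full((4, 4), 2.0)  # 2 m everywhere
    K = np.array([[500.0,   0.0, 2.0],
                  [  0.0, 500.0, 2.0],
                  [  0.0,   0.0, 1.0]])
    points = unproject_to_point_cloud(depth, K)
    print(points.shape)  # (16, 3)
```

A 3D detector trained on such lifted point clouds inherits any scale error in the predicted depth, which is the motivation the abstract gives for refining the pseudo 3D bounding boxes with a self-calibration loss during training.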

Related Material


[pdf]
[bibtex]
@InProceedings{Jhang_2025_WACV,
    author    = {Jhang, Jin-Cheng and Tu, Tao and Wang, Fu-En and Zhang, Ke and Sun, Min and Kuo, Cheng-Hao},
    title     = {V-MIND: Building Versatile Monocular Indoor 3D Detector with Diverse 2D Annotations},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {9559-9568}
}