DQS3D: Densely-matched Quantization-aware Semi-supervised 3D Detection

Huan-ang Gao, Beiwen Tian, Pengfei Li, Hao Zhao, Guyue Zhou; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 21905-21915

Abstract


In this paper, we study the problem of semi-supervised 3D object detection, which is of great importance considering the high annotation cost for cluttered 3D indoor scenes. We resort to the robust and principled framework of self-teaching, which has triggered notable progress for semi-supervised learning recently. While this paradigm is natural for image-level or pixel-level prediction, adapting it to the detection problem is challenged by the issue of proposal matching. Prior methods are based upon two-stage pipelines, matching heuristically selected proposals generated in the first stage, which results in spatially sparse training signals. In contrast, we propose the first semi-supervised 3D detection algorithm that works in a single-stage manner and allows spatially dense training signals. A fundamental issue of this new design is the quantization error caused by point-to-voxel discretization, which inevitably leads to misalignment between two transformed views in the voxel domain. To address this, we derive and implement closed-form rules that compensate for this misalignment on-the-fly. Our results are significant, e.g., promoting ScanNet mAP@0.5 from 35.2% to 48.5% using 20% annotation. Code and data are publicly available.
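The voxel-domain misalignment mentioned above can be illustrated with a minimal, self-contained sketch (not the paper's implementation; `quantize`, `voxel_size`, and the toy data are assumptions for illustration only). It shows that voxelizing a rigidly shifted copy of a point cloud does not reproduce the shifted voxelization of the original, and that the resulting offset can be written in closed form from the per-point quantization residuals:

```python
import numpy as np

# Illustrative sketch only: shows why point-to-voxel discretization misaligns
# two augmented views, and that the offset follows from quantization residuals.

def quantize(points, voxel_size):
    """Snap continuous coordinates to voxel centers; also return the residual."""
    centers = np.round(points / voxel_size) * voxel_size
    residual = points - centers          # per-point quantization error
    return centers, residual

rng = np.random.default_rng(0)
points = rng.uniform(-2.0, 2.0, size=(5, 3))   # toy point cloud (assumption)
voxel_size = 0.05
shift = np.array([0.013, -0.027, 0.008])       # random translation for the second view

v1, r1 = quantize(points, voxel_size)          # voxelized original view
v2, r2 = quantize(points + shift, voxel_size)  # voxelized shifted view

# Voxelization does not commute with the translation: v2 != v1 + shift in general.
misalignment = v2 - (v1 + shift)

# By definition of the residuals, the misalignment equals r1 - r2, i.e. it can be
# computed on-the-fly from quantities available at voxelization time.
assert np.allclose(misalignment, r1 - r2)
print(misalignment)
```

This toy offset is the kind of quantization-induced discrepancy that a densely matched self-teaching scheme must account for when aligning teacher and student predictions across transformed views; the paper's actual compensation rules should be taken from the publication and released code.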

Related Material


[bibtex]
@InProceedings{Gao_2023_ICCV,
    author    = {Gao, Huan-ang and Tian, Beiwen and Li, Pengfei and Zhao, Hao and Zhou, Guyue},
    title     = {DQS3D: Densely-matched Quantization-aware Semi-supervised 3D Detection},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {21905-21915}
}