Density-Guided Semi-Supervised 3D Semantic Segmentation with Dual-Space Hardness Sampling

Jianan Li, Qiulei Dong; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 3260-3269

Abstract


Densely annotating large-scale point clouds is laborious. To alleviate the annotation burden, contrastive learning has attracted increasing attention for tackling semi-supervised 3D semantic segmentation. However, existing point-to-point contrastive learning techniques in the literature are generally sensitive to outliers, resulting in insufficient modeling of the point-wise representations. To address this problem, we propose a method named DDSemi for semi-supervised 3D semantic segmentation, where a density-guided contrastive learning technique is explored. This technique computes the contrastive loss in a point-to-anchor manner by estimating an anchor for each class from the memory bank, based on the finding that cluster centers tend to be located in dense regions. In this technique, an inter-contrast loss is derived from perturbed unlabeled point cloud pairs, while an intra-contrast loss is derived from a single unlabeled point cloud. The derived losses enhance the discriminability of the features and implicitly constrain the semantic consistency between the perturbed unlabeled point cloud pairs. In addition, we propose a dual-space hardness sampling strategy that pays more attention to hard samples located in sparse regions of both the geometric space and the feature space, by reweighting the point-wise intra-contrast loss. Experimental results on both indoor-scene and outdoor-scene datasets demonstrate that the proposed method outperforms comparative state-of-the-art semi-supervised methods.
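To make the point-to-anchor idea concrete, below is a minimal PyTorch sketch. It is not the authors' implementation: the function names (density_weighted_anchors, point_to_anchor_loss), the kNN-based density estimate, and the temperature tau are all illustrative assumptions; the only grounded elements are the abstract's claims that per-class anchors are estimated from a memory bank with dense regions weighted more heavily, and that points are contrasted against these anchors.

```python
import torch
import torch.nn.functional as F

def density_weighted_anchors(bank_feats, k=8):
    """Estimate one anchor per class from a memory bank (hypothetical sketch).

    bank_feats: dict mapping class id -> (N_c, D) tensor of banked features,
    assuming each class has more than one banked feature. Density is
    approximated by the inverse mean distance to the k nearest same-class
    neighbors, so features in dense regions dominate the anchor (following
    the finding that cluster centers tend to lie in dense regions).
    """
    anchors = {}
    for cls, feats in bank_feats.items():
        dists = torch.cdist(feats, feats)                       # (N_c, N_c)
        knn, _ = dists.topk(min(k + 1, len(feats)), dim=1, largest=False)
        density = 1.0 / (knn[:, 1:].mean(dim=1) + 1e-8)         # drop self-distance
        weights = density / density.sum()
        anchors[cls] = F.normalize((weights[:, None] * feats).sum(dim=0), dim=0)
    return anchors

def point_to_anchor_loss(point_feats, labels, anchors, tau=0.1):
    """InfoNCE-style loss contrasting each point feature against class anchors.

    Assumes class ids are 0..C-1 so that `labels` indexes the sorted anchors.
    """
    anchor_mat = torch.stack([anchors[c] for c in sorted(anchors)])  # (C, D)
    logits = F.normalize(point_feats, dim=1) @ anchor_mat.T / tau    # (N, C)
    return F.cross_entropy(logits, labels)
```

Similarly, the dual-space hardness sampling strategy can be read as a per-point reweighting of the intra-contrast loss. The sketch below again uses assumed names and an assumed kNN-distance sparsity measure; the abstract only states that points in sparse regions of both the geometric and feature spaces receive more attention.

```python
def dual_space_hardness_weights(xyz, feats, k=8):
    """Per-point weights emphasizing points that are sparse in BOTH the
    geometric space (xyz) and the feature space (feats). Hypothetical sketch."""
    def sparsity(x):
        d = torch.cdist(x, x)
        knn, _ = d.topk(min(k + 1, len(x)), dim=1, largest=False)
        return knn[:, 1:].mean(dim=1)  # larger mean kNN distance = sparser = harder
    w = sparsity(xyz) * sparsity(F.normalize(feats, dim=1))
    return w / w.mean()  # normalize so the reweighted loss keeps its scale
```

Under these assumptions, the reweighted intra-contrast loss would be computed as (dual_space_hardness_weights(xyz, feats) * per_point_loss).mean(), so hard points in sparse regions contribute more gradient.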

Related Material


@InProceedings{Li_2024_CVPR,
    author    = {Li, Jianan and Dong, Qiulei},
    title     = {Density-Guided Semi-Supervised 3D Semantic Segmentation with Dual-Space Hardness Sampling},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {3260-3269}
}