Rethinking Range View Representation for LiDAR Segmentation

Lingdong Kong, Youquan Liu, Runnan Chen, Yuexin Ma, Xinge Zhu, Yikang Li, Yuenan Hou, Yu Qiao, Ziwei Liu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 228-240

Abstract


LiDAR segmentation is crucial for autonomous driving perception. Recent trends favor point- or voxel-based methods as they often yield better performance than the traditional range view representation. In this work, we unveil several key factors in building powerful range view models. We observe that the "many-to-one" mapping, semantic incoherence, and shape deformation are possible impediments to effective learning from range view projections. We present RangeFormer -- a full-cycle framework comprising novel designs across network architecture, data augmentation, and post-processing -- that better handles the learning and processing of LiDAR point clouds from the range view. We further introduce a Scalable Training from Range view (STR) strategy that trains on arbitrarily low-resolution 2D range images, while still maintaining satisfactory 3D segmentation accuracy. We show that, for the first time, a range view method is able to surpass the point, voxel, and multi-view fusion counterparts on the competitive LiDAR semantic and panoptic segmentation benchmarks, i.e., SemanticKITTI, nuScenes, and ScribbleKITTI.
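
The abstract refers to projecting LiDAR point clouds into 2D range images and to the "many-to-one" mapping that arises when several points fall into the same pixel. The sketch below illustrates the standard spherical (yaw/pitch) projection commonly used for this; the function name, image resolution, and field-of-view values are illustrative assumptions, not the paper's exact configuration.

    import numpy as np

    def to_range_image(points, H=64, W=2048, fov_up_deg=3.0, fov_down_deg=-25.0):
        """Project an (N, 3) LiDAR point cloud onto an H x W range image
        via spherical coordinates. FOV values are typical for a 64-beam
        sensor and are illustrative only."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        depth = np.linalg.norm(points[:, :3], axis=1)            # range per point

        yaw = np.arctan2(y, x)                                   # azimuth in [-pi, pi]
        pitch = np.arcsin(z / np.clip(depth, 1e-8, None))        # elevation

        fov_up = np.deg2rad(fov_up_deg)
        fov_down = np.deg2rad(fov_down_deg)
        fov = fov_up - fov_down

        # Normalize angles to pixel coordinates.
        u = 0.5 * (1.0 - yaw / np.pi) * W                        # column index
        v = (1.0 - (pitch - fov_down) / fov) * H                 # row index
        u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
        v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

        # "Many-to-one" mapping: several points can land in the same pixel.
        # Keeping the closest point per pixel is one common way to resolve it.
        order = np.argsort(depth)[::-1]                          # far points first
        range_image = np.full((H, W), -1.0, dtype=np.float32)
        range_image[v[order], u[order]] = depth[order]           # near points overwrite
        return range_image

Lowering H or W in such a projection yields the coarser range images that a scalable-training strategy like STR would consume, though the paper's actual pipeline may differ in detail.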

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Kong_2023_ICCV,
    author    = {Kong, Lingdong and Liu, Youquan and Chen, Runnan and Ma, Yuexin and Zhu, Xinge and Li, Yikang and Hou, Yuenan and Qiao, Yu and Liu, Ziwei},
    title     = {Rethinking Range View Representation for LiDAR Segmentation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {228-240}
}