Sparse Point Guided 3D Lane Detection

Chengtang Yao, Lidong Yu, Yuwei Wu, Yunde Jia; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 8363-8372

Abstract


3D lane detection usually builds a dense correspondence between the front-view space and the BEV space to estimate lane points in 3D space. 3D lanes occupy only a small fraction of this dense correspondence, while most of it belongs to the redundant background. This sparsity bottlenecks valuable computation and makes building a high-resolution correspondence for accurate results expensive. In this paper, we propose sparse point-guided 3D lane detection, which focuses computation on points related to 3D lanes. Our method runs in a coarse-to-fine manner, consisting of coarse-level lane detection and iterative fine-level sparse point refinement. In coarse-level lane detection, we build a dense but efficient correspondence between the front-view and BEV spaces at a very low resolution to compute coarse lanes. In fine-level sparse point refinement, we then sample sparse points around the coarse lanes to extract local features from the high-resolution front-view feature map. The high-resolution local information brought by these sparse points refines the 3D lanes in the BEV space hierarchically, from low resolution to high resolution. The sparse points guide a more effective information flow, improving the state-of-the-art result by 3 points on the overall F1-score and 6 points in several hard situations, while cutting memory cost by almost half and running twice as fast.
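
The fine-level refinement step is concrete enough to sketch in code. Below is a minimal PyTorch sketch (not the authors' implementation) of the core idea: sampling sparse points around coarse lane points to pull local features from a high-resolution front-view feature map, then regressing a residual that refines the lane points. The module name `SparsePointRefiner`, the learnable offsets, the MLP head, and the assumption that coarse lane points are already projected into normalized front-view coordinates are all illustrative assumptions.

```python
# A minimal sketch of one fine-level sparse point refinement step,
# assuming coarse lane points are given in normalized front-view
# coordinates in [-1, 1]. Not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparsePointRefiner(nn.Module):
    def __init__(self, feat_dim=64, num_samples=9):
        super().__init__()
        self.num_samples = num_samples
        # Learnable sampling offsets scattered around each coarse point (hypothetical).
        self.offsets = nn.Parameter(0.01 * torch.randn(num_samples, 2))
        # Small MLP mapping the aggregated local features to a 2D residual.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim * num_samples, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 2),
        )

    def forward(self, fv_feat, lane_pts):
        # fv_feat:  (B, C, H, W) high-resolution front-view feature map
        # lane_pts: (B, N, 2) coarse lane points in normalized coords [-1, 1]
        # Place sparse sample locations around every coarse point: (B, N, S, 2).
        grid = lane_pts[:, :, None, :] + self.offsets[None, None]
        # Bilinearly sample local features at those points: (B, C, N, S).
        sampled = F.grid_sample(fv_feat, grid, align_corners=True)
        # Flatten per-point local features: (B, N, C*S).
        local = sampled.permute(0, 2, 1, 3).flatten(2)
        # Regress a residual and refine the coarse lane points.
        return lane_pts + self.mlp(local)

# Usage: one refinement step; the paper applies this iteratively,
# moving from low-resolution to high-resolution feature maps.
refiner = SparsePointRefiner()
feat = torch.randn(1, 64, 180, 240)    # high-res front-view features
coarse = torch.rand(1, 20, 2) * 2 - 1  # 20 coarse lane points in [-1, 1]
refined = refiner(feat, coarse)        # (1, 20, 2)
```

Because features are gathered only at sparse locations around the coarse lanes rather than over a dense BEV grid, the cost of each refinement step scales with the number of lane points instead of the full correspondence resolution, which is where the reported memory and speed gains come from.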

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Yao_2023_ICCV,
    author    = {Yao, Chengtang and Yu, Lidong and Wu, Yuwei and Jia, Yunde},
    title     = {Sparse Point Guided 3D Lane Detection},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {8363-8372}
}