Fusion-SUNet: Spatial Layout Consistency for 3D Semantic Segmentation

Maryam Jameela, Gunho Sohn, Sunghwan Yoo; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 6568-6576


This paper addresses the need for a reliable and efficient computer vision system that can inspect utility networks with minimal human intervention, motivated by the aging of this infrastructure. The authors propose a deep learning technique, Fusion-Semantic Utility Network (Fusion-SUNet), to classify the dense, irregular point clouds collected by an airborne laser terrain mapping (ALTM) system. The proposed network combines two sub-networks: one performs voxel-based, multi-resolution 3D semantic segmentation of the point cloud into object categories, and the other predicts 2D regional labels that distinguish corridor from non-corridor regions. The network imposes spatial layout consistency on the voxel-based 3D features using the regional segmentation features. The authors demonstrate the effectiveness of the technique on 67 km² of utility corridor data with an average point density of 5 points/m², achieving significantly better performance than the state-of-the-art baseline network, with F1 scores of 93% for the pylon class, 99% for ground, 99% for vegetation, and 98% for powerline.
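The core fusion idea described in the abstract, injecting 2D regional (corridor vs. non-corridor) features into voxel-based 3D features, can be pictured as broadcasting each ground-plane region feature along the height axis and concatenating it with the corresponding voxel column. The following NumPy sketch illustrates only this broadcasting step; the array shapes, function name, and channel sizes are assumptions, not the authors' implementation:

```python
import numpy as np

def fuse_region_into_voxels(voxel_feats, region_feats):
    """Illustrative sketch: broadcast per-(x, y) regional features
    along the z axis and concatenate with 3D voxel features.

    voxel_feats:  (X, Y, Z, C3) voxel-based 3D features
    region_feats: (X, Y, C2)    2D regional features (e.g. corridor / non-corridor)
    returns:      (X, Y, Z, C3 + C2) fused features
    """
    X, Y, Z, _ = voxel_feats.shape
    # Repeat each (x, y) column's 2D regional feature at every height level.
    region_3d = np.broadcast_to(
        region_feats[:, :, None, :], (X, Y, Z, region_feats.shape[-1])
    )
    return np.concatenate([voxel_feats, region_3d], axis=-1)

# Toy example: a 4x4x8 voxel grid with 16-dim 3D features and 2-dim region features.
fused = fuse_region_into_voxels(np.zeros((4, 4, 8, 16)), np.ones((4, 4, 2)))
print(fused.shape)  # (4, 4, 8, 18)
```

In the paper this consistency is imposed on learned network features rather than by a fixed concatenation; the sketch only shows how 2D regional labels can constrain every voxel in a vertical column.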

Related Material

@InProceedings{Jameela_2023_CVPR,
    author    = {Jameela, Maryam and Sohn, Gunho and Yoo, Sunghwan},
    title     = {Fusion-SUNet: Spatial Layout Consistency for 3D Semantic Segmentation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {6568-6576}
}