FGCN: Deep Feature-Based Graph Convolutional Network for Semantic Segmentation of Urban 3D Point Clouds

Saqib Ali Khan, Yilei Shi, Muhammad Shahzad, Xiao Xiang Zhu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 198-199

Abstract


Directly processing 3D point clouds with convolutional neural networks (CNNs) is highly challenging, primarily because points in 3D space lack an explicit neighborhood relationship. Several researchers have tried to cope with this problem through a voxelization preprocessing step. Although voxelization allows existing CNN architectures to be translated to 3D point clouds, it not only imposes computational and memory costs but also introduces quantization artifacts that limit accurate inference of the underlying object's structure in the illuminated scene. In this paper, we introduce a more stable and effective end-to-end architecture to classify raw 3D point clouds from indoor and outdoor scenes. In the proposed methodology, we encode the spatial arrangement of neighbouring 3D points in an undirected symmetric graph, which is passed, along with features extracted by a 2D CNN, to a Graph Convolutional Network (GCN) containing three layers of localized graph convolutions that generate a complete segmentation map. The proposed network achieves results on par with, or better than, the state of the art on semantic scene parsing, part segmentation, and urban classification across three standard benchmark datasets.
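To make the pipeline in the abstract concrete, below is a minimal sketch (not the authors' released code) of the idea: build an undirected symmetric neighbourhood graph over the raw 3D points, take per-point features from a separate feature extractor, and pass both through three stacked localized graph-convolution layers to obtain per-point class scores. The graph convolution here follows the standard normalized propagation rule D^{-1/2}(A + I)D^{-1/2}XW; the k-nearest-neighbour graph construction, layer widths, and feature dimensions are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


def knn_adjacency(points: torch.Tensor, k: int = 16) -> torch.Tensor:
    """Symmetric, normalized adjacency from a k-NN graph over N x 3 points (assumed construction)."""
    dist = torch.cdist(points, points)                      # pairwise Euclidean distances
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]    # k nearest neighbours, excluding self
    n = points.shape[0]
    adj = torch.zeros(n, n, device=points.device)
    adj.scatter_(1, idx, 1.0)
    adj = ((adj + adj.t()) > 0).float()                     # make the graph undirected / symmetric
    adj = adj + torch.eye(n, device=points.device)          # add self-loops
    deg_inv_sqrt = adj.sum(1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


class GraphConv(nn.Module):
    """One localized graph convolution: aggregate neighbour features, then apply a linear map."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.linear(adj @ x))


class SegmentationHead(nn.Module):
    """Three stacked graph convolutions mapping per-point features to per-point class logits."""
    def __init__(self, feat_dim: int, num_classes: int, hidden: int = 128):
        super().__init__()
        self.gc1 = GraphConv(feat_dim, hidden)
        self.gc2 = GraphConv(hidden, hidden)
        self.gc3 = GraphConv(hidden, num_classes)

    def forward(self, feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.gc1(feats, adj)
        h = self.gc2(h, adj)
        return self.gc3(h, adj)          # per-point logits, i.e. the segmentation map


# Usage sketch: `feats` would come from the 2D CNN feature extractor (not shown here);
# the 64-D features and 13 classes are placeholder values.
points = torch.rand(1024, 3)
feats = torch.rand(1024, 64)
adj = knn_adjacency(points, k=16)
logits = SegmentationHead(64, 13)(feats, adj)
```

The key design point the sketch illustrates is that the convolution is localized: each point's representation is updated only from its graph neighbours, so no voxelization of the raw points is required.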

Related Material


[pdf]
[bibtex]
@InProceedings{Khan_2020_CVPR_Workshops,
author = {Khan, Saqib Ali and Shi, Yilei and Shahzad, Muhammad and Zhu, Xiao Xiang},
title = {FGCN: Deep Feature-Based Graph Convolutional Network for Semantic Segmentation of Urban 3D Point Clouds},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}