VV-Net: Voxel VAE Net With Group Convolutions for Point Cloud Segmentation

Hsien-Yu Meng, Lin Gao, Yu-Kun Lai, Dinesh Manocha; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 8500-8508

Abstract


We present a novel algorithm for point cloud segmentation. Our approach transforms unstructured point clouds into regular voxel grids, and further uses a kernel-based interpolated variational autoencoder (VAE) architecture to encode the local geometry within each voxel. Traditionally, the voxel representation only comprises Boolean occupancy information, which fails to capture the sparsely distributed points within voxels in a compact manner. In order to handle sparse distributions of points, we further employ radial basis functions (RBF) to compute a local, continuous representation within each voxel. Our approach results in a good volumetric representation that effectively tackles noisy point cloud datasets and is more robust for learning. Moreover, we further introduce group equivariant CNNs to 3D, by defining the convolution operator on a symmetry group acting on Z^3 and its isomorphic sets. This improves the expressive capacity without increasing parameters, leading to more robust segmentation results. We highlight the performance on standard benchmarks and show that our approach outperforms state-of-the-art segmentation algorithms on the ShapeNet and S3DIS datasets.
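As a rough illustration of the RBF-based voxel representation described in the abstract, the sketch below maps a point cloud onto a regular grid and evaluates a Gaussian RBF field at a few fixed sample sites inside each voxel, rather than storing a single Boolean occupancy bit. It is only a minimal sketch under stated assumptions: the function name voxelize_rbf, the Gaussian kernel choice, the width sigma, and the 2x2x2 sample sites per voxel are illustrative and are not taken from the authors' implementation, which feeds such interpolated values into a VAE.

```python
# Hypothetical sketch: RBF-interpolated voxel features for a point cloud.
# Names, kernel choice, sigma, and sample-site layout are assumptions,
# not the paper's code.
import numpy as np

def voxelize_rbf(points, grid_size=32, sigma=0.05, samples_per_axis=2):
    """Map an (N, 3) point cloud to a grid of RBF-interpolated features.

    Each voxel is described by a Gaussian RBF field evaluated at a few
    fixed sample sites inside it, instead of Boolean occupancy.
    """
    # Normalize the cloud into the unit cube [0, 1]^3.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    pts = (points - mins) / (maxs - mins).max()

    voxel_size = 1.0 / grid_size
    # Fixed sample sites inside a voxel, as fractions of the voxel edge.
    offsets = (np.arange(samples_per_axis) + 0.5) / samples_per_axis
    local_sites = np.stack(np.meshgrid(offsets, offsets, offsets,
                                       indexing="ij"), axis=-1).reshape(-1, 3)

    n_sites = local_sites.shape[0]
    features = np.zeros((grid_size, grid_size, grid_size, n_sites))

    # Bucket points by voxel index, then evaluate the RBF field per voxel.
    idx = np.clip((pts / voxel_size).astype(int), 0, grid_size - 1)
    for (i, j, k) in np.unique(idx, axis=0):
        in_voxel = pts[np.all(idx == (i, j, k), axis=1)]
        # World coordinates of this voxel's sample sites.
        sites = (np.array([i, j, k]) + local_sites) * voxel_size
        # Sum of Gaussian kernels centered at the points in the voxel.
        d2 = ((sites[:, None, :] - in_voxel[None, :, :]) ** 2).sum(-1)
        features[i, j, k] = np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)

    return features  # shape: (grid, grid, grid, samples_per_axis**3)

if __name__ == "__main__":
    cloud = np.random.rand(2048, 3)
    feats = voxelize_rbf(cloud)
    print(feats.shape)  # (32, 32, 32, 8)
```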

Related Material


[pdf]
[bibtex]
@InProceedings{Meng_2019_ICCV,
author = {Meng, Hsien-Yu and Gao, Lin and Lai, Yu-Kun and Manocha, Dinesh},
title = {VV-Net: Voxel VAE Net With Group Convolutions for Point Cloud Segmentation},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}