@InProceedings{Yang_2023_ICCV,
  author    = {Yang, Yuwei and Hayat, Munawar and Jin, Zhao and Zhu, Hongyuan and Lei, Yinjie},
  title     = {Zero-Shot Point Cloud Segmentation by Semantic-Visual Aware Synthesis},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {11586-11596}
}
Zero-Shot Point Cloud Segmentation by Semantic-Visual Aware Synthesis
Abstract
This paper proposes a feature synthesis approach for zero-shot semantic segmentation of 3D point clouds, enabling generalization to previously unseen categories. Given only the class-level semantic information for unseen objects, we strive to enhance the correspondence, alignment and consistency between the visual and semantic spaces, to synthesise diverse, generic and transferable visual features. We develop a masked learning strategy to promote diversity within the same class visual features and enhance the separation between different classes. We further cast the visual features into a prototypical space to model their distribution for alignment with the corresponding semantic space. Finally, we develop a consistency regularizer to preserve the semantic-visual relationships between the real-seen features and synthetic-unseen features. Our approach shows considerable semantic segmentation gains on ScanNet, S3DIS and SemanticKITTI benchmarks. Our code is available at: https://github.com/leolyj/3DPC-GZSL
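The pipeline the abstract outlines (synthesizing visual features for unseen classes from class-level semantics, then comparing features against class prototypes) can be illustrated with a deliberately simplified sketch. Everything below is a hypothetical stand-in, not the paper's architecture: the dimensions, class names, and the fixed random linear map used in place of a trained generator are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three seen and two unseen classes, each with a
# class-level semantic embedding (e.g., a word vector). All dimensions
# are illustrative, not those used in the paper.
sem_dim, vis_dim, noise_dim = 8, 16, 4
seen = ["chair", "table", "sofa"]
unseen = ["desk", "bookshelf"]
semantics = {c: rng.normal(size=sem_dim) for c in seen + unseen}

# Stand-in "generator": a fixed random linear map from (semantic, noise)
# to the visual feature space. In practice this would be a trained network.
W = rng.normal(size=(sem_dim + noise_dim, vis_dim)) / np.sqrt(sem_dim + noise_dim)

def synthesize(class_name, n=32):
    """Synthesize n visual features for a class from its semantic embedding;
    injected noise keeps same-class features diverse rather than identical."""
    s = np.tile(semantics[class_name], (n, 1))
    z = rng.normal(size=(n, noise_dim))
    return np.concatenate([s, z], axis=1) @ W

# Build a prototype (mean synthetic feature) per class, so a point feature
# can be labeled by its nearest prototype, including classes for which no
# real training features exist.
prototypes = {c: synthesize(c).mean(axis=0) for c in seen + unseen}

def classify(feature):
    return min(prototypes, key=lambda c: np.linalg.norm(feature - prototypes[c]))

# Label a freshly synthesized unseen-class feature by nearest prototype.
query = synthesize("desk", n=1)[0]
print(classify(query))
```

This sketch only conveys the zero-shot transfer idea: real features are never available for the unseen classes, yet their semantic embeddings alone let the generator populate the visual space so a simple nearest-prototype rule can label them. The paper's actual contributions (the masked learning strategy, prototypical-space alignment, and the consistency regularizer) are training objectives on top of this pipeline and are not modeled here.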