Exploiting the Complementarity of 2D and 3D Networks To Address Domain-Shift in 3D Semantic Segmentation

Adriano Cardace, Pierluigi Zama Ramirez, Samuele Salti, Luigi Di Stefano; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 98-109

Abstract


3D semantic segmentation is a critical task in many real-world applications, such as autonomous driving, robotics, and mixed reality. However, the task is extremely challenging due to ambiguities arising from the unstructured, sparse, and uncolored nature of 3D point clouds. A possible solution is to combine the 3D information with data coming from sensors of a different modality, such as RGB cameras. Recent multi-modal 3D semantic segmentation networks exploit these modalities by relying on two branches that process the 2D and 3D information independently, striving to preserve the strengths of each modality. In this work, we first explain why this design choice is effective and then show how it can be improved to make multi-modal semantic segmentation more robust to domain shift. Our surprisingly simple contribution achieves state-of-the-art performance on four popular multi-modal unsupervised domain adaptation benchmarks, as well as better results in a domain generalization scenario.
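
For concreteness, the following PyTorch sketch illustrates the general two-branch design the abstract refers to: a 2D branch processes the RGB image, a 3D branch processes the point cloud, each modality keeps its own segmentation head, and the per-point predictions are fused by averaging. This is not the paper's actual architecture; the tiny backbones, feature dimensions, and pixel-gathering step below are illustrative assumptions standing in for the real 2D/3D networks.

import torch
import torch.nn as nn

class TwoBranchSegmenter(nn.Module):
    def __init__(self, num_classes: int, feat_2d: int = 64, feat_3d: int = 64):
        super().__init__()
        # 2D branch: small convolutional encoder over the RGB image (placeholder backbone).
        self.branch_2d = nn.Sequential(
            nn.Conv2d(3, feat_2d, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_2d, feat_2d, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # 3D branch: point-wise MLP over per-point coordinates (placeholder for a 3D backbone).
        self.branch_3d = nn.Sequential(
            nn.Linear(3, feat_3d), nn.ReLU(inplace=True),
            nn.Linear(feat_3d, feat_3d), nn.ReLU(inplace=True),
        )
        # Independent classification heads, one per modality.
        self.head_2d = nn.Linear(feat_2d, num_classes)
        self.head_3d = nn.Linear(feat_3d, num_classes)

    def forward(self, image, points, pixel_uv):
        # image: (B, 3, H, W); points: (B, N, 3); pixel_uv: (B, N, 2) integer pixel
        # coordinates onto which each 3D point projects.
        fmap = self.branch_2d(image)                        # (B, C, H, W)
        B, C, H, W = fmap.shape
        # Gather the 2D feature at the pixel each 3D point projects onto.
        flat = fmap.view(B, C, H * W)                       # (B, C, H*W)
        idx = pixel_uv[..., 1] * W + pixel_uv[..., 0]       # (B, N) row-major pixel index
        idx = idx.unsqueeze(1).expand(-1, C, -1)            # (B, C, N)
        feats_2d = torch.gather(flat, 2, idx).transpose(1, 2)  # (B, N, C)

        feats_3d = self.branch_3d(points)                   # (B, N, C)

        logits_2d = self.head_2d(feats_2d)                  # per-point prediction, 2D branch
        logits_3d = self.head_3d(feats_3d)                  # per-point prediction, 3D branch
        return logits_2d, logits_3d, (logits_2d + logits_3d) / 2

# Usage example: 2 images of 64x64 pixels, 1024 projected points each.
model = TwoBranchSegmenter(num_classes=10)
img = torch.randn(2, 3, 64, 64)
pts = torch.randn(2, 1024, 3)
uv = torch.randint(0, 64, (2, 1024, 2))
l2d, l3d, fused = model(img, pts, uv)
print(l2d.shape, l3d.shape, fused.shape)  # each (2, 1024, 10)

Keeping the two branches and their heads separate, as in this sketch, is what lets each modality retain its own strengths; the fusion step only combines their final predictions.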

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Cardace_2023_CVPR,
    author    = {Cardace, Adriano and Ramirez, Pierluigi Zama and Salti, Samuele and Di Stefano, Luigi},
    title     = {Exploiting the Complementarity of 2D and 3D Networks To Address Domain-Shift in 3D Semantic Segmentation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {98-109}
}