4D Panoptic Segmentation as Invariant and Equivariant Field Prediction

Minghan Zhu, Shizhong Han, Hong Cai, Shubhankar Borse, Maani Ghaffari, Fatih Porikli; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 22488-22498

Abstract


In this paper, we develop rotation-equivariant neural networks for 4D panoptic segmentation. 4D panoptic segmentation is a benchmark task for autonomous driving that requires recognizing semantic classes and object instances on the road from LiDAR scans, as well as assigning temporally consistent IDs to instances across time. We observe that the driving scenario is symmetric under rotations in the ground plane. Therefore, rotation equivariance could provide better generalization and more robust feature learning. Specifically, we review object instance clustering strategies and restate the centerness-based and offset-based approaches as the prediction of invariant scalar fields and equivariant vector fields, respectively. Other sub-tasks are also unified from this perspective, and different invariant and equivariant layers are designed to facilitate their predictions. Through evaluation on the standard SemanticKITTI 4D panoptic segmentation benchmark, we show that our equivariant models achieve higher accuracy with lower computational cost than their non-equivariant counterparts. Moreover, our method sets a new state of the art, achieving 1st place on the SemanticKITTI 4D Panoptic Segmentation leaderboard.
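
A minimal sketch of what the field-prediction view above implies, in notation that is not taken from the paper itself: writing X for a LiDAR point cloud, x for a point in it, R for a rotation in the ground plane, f_c for a predicted centerness field, and f_o for a predicted offset field, the invariance and equivariance conditions would read roughly as follows.

\[
  f_c(R \cdot X)(Rx) = f_c(X)(x)      % centerness: rotation-invariant scalar field
\]
\[
  f_o(R \cdot X)(Rx) = R \, f_o(X)(x) % offsets: rotation-equivariant vector field
\]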

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Zhu_2023_ICCV,
    author    = {Zhu, Minghan and Han, Shizhong and Cai, Hong and Borse, Shubhankar and Ghaffari, Maani and Porikli, Fatih},
    title     = {4D Panoptic Segmentation as Invariant and Equivariant Field Prediction},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {22488-22498}
}