HM-ViT: Hetero-Modal Vehicle-to-Vehicle Cooperative Perception with Vision Transformer

Hao Xiang, Runsheng Xu, Jiaqi Ma; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 284-295

Abstract


Vehicle-to-Vehicle (V2V) technologies have enabled autonomous vehicles to share sensing information, allowing them to see through occlusions and greatly improving perception performance. Nevertheless, existing works have all focused on homogeneous traffic in which vehicles are equipped with the same type of sensors, which significantly limits the scale of collaboration and the benefit of cross-modality interactions. In this paper, we investigate the multi-agent hetero-modal cooperative perception problem, where agents may have distinct sensor modalities. We present HM-ViT, the first unified multi-agent hetero-modal cooperative perception framework that can collaboratively predict 3D objects under highly dynamic V2V collaborations with varying numbers and types of agents. To effectively fuse features from multi-view images and LiDAR point clouds, we design a novel heterogeneous 3D graph transformer that jointly reasons about inter-agent and intra-agent interactions. Extensive experiments on the V2V perception dataset OPV2V demonstrate that HM-ViT outperforms state-of-the-art cooperative perception methods for V2V hetero-modal cooperative perception. Our code will be released at https://github.com/XHwind/HM-ViT.
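
To make the core idea concrete, the sketch below illustrates one way modality-specific attention over an agent graph can be written in PyTorch: each agent node carries either a camera-derived or LiDAR-derived feature, and the query/key/value projections are selected per modality so that cross-modal interactions use dedicated weights. This is a minimal illustration under assumed shapes (one pooled BEV token per agent), not the authors' released implementation; the class name HeteroModalAttention and all hyperparameters are hypothetical.

```python
# Minimal sketch of heterogeneous (modality-aware) graph attention between agents.
# Assumptions: one pooled BEV feature vector per agent; the actual HM-ViT method
# operates on dense spatial feature maps and jointly models intra-agent interactions.
import torch
import torch.nn as nn


class HeteroModalAttention(nn.Module):
    """Attention over agent features with modality-specific Q/K/V projections."""

    def __init__(self, dim: int, modalities=("camera", "lidar")):
        super().__init__()
        # One linear projection per modality for queries, keys, and values.
        self.q = nn.ModuleDict({m: nn.Linear(dim, dim) for m in modalities})
        self.k = nn.ModuleDict({m: nn.Linear(dim, dim) for m in modalities})
        self.v = nn.ModuleDict({m: nn.Linear(dim, dim) for m in modalities})
        self.scale = dim ** -0.5

    def forward(self, feats: torch.Tensor, modalities: list) -> torch.Tensor:
        # feats: (num_agents, dim); modalities[i] names the sensor type of agent i.
        q = torch.stack([self.q[m](f) for f, m in zip(feats, modalities)])
        k = torch.stack([self.k[m](f) for f, m in zip(feats, modalities)])
        v = torch.stack([self.v[m](f) for f, m in zip(feats, modalities)])
        attn = torch.softmax(q @ k.t() * self.scale, dim=-1)  # (N, N) agent graph
        return attn @ v  # fused per-agent features


if __name__ == "__main__":
    layer = HeteroModalAttention(dim=64)
    feats = torch.randn(3, 64)  # ego vehicle plus two collaborators
    fused = layer(feats, ["lidar", "camera", "camera"])
    print(fused.shape)  # torch.Size([3, 64])
```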

Related Material


[bibtex]
@InProceedings{Xiang_2023_ICCV,
    author    = {Xiang, Hao and Xu, Runsheng and Ma, Jiaqi},
    title     = {HM-ViT: Hetero-Modal Vehicle-to-Vehicle Cooperative Perception with Vision Transformer},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {284-295}
}