CenterFusion: Center-Based Radar and Camera Fusion for 3D Object Detection

Ramin Nabati, Hairong Qi; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 1527-1536

Abstract


The perception system in autonomous vehicles is responsible for detecting and tracking the surrounding objects. This is usually done by taking advantage of several sensing modalities to increase robustness and accuracy, which makes sensor fusion a crucial part of the perception system. In this paper, we focus on the problem of radar and camera sensor fusion and propose a middle-fusion approach to exploit both radar and camera data for 3D object detection. Our approach, called CenterFusion, first uses a center point detection network to detect objects by identifying their center points on the image. It then solves the key data association problem using a novel frustum-based method to associate the radar detections to their corresponding object's center point. The associated radar detections are used to generate radar-based feature maps to complement the image features, and regress to object properties such as depth, rotation and velocity. We evaluate CenterFusion on the challenging nuScenes dataset, where it improves the overall nuScenes Detection Score (NDS) of the state-of-the-art camera-based algorithm by more than 12%. We further show that CenterFusion significantly improves the velocity estimation accuracy without using any additional temporal information. The code is available at https://github.com/mrnabati/CenterFusion.
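
The frustum-based association step described in the abstract can be illustrated with a small sketch: for each detected object, a 3D region of interest (frustum) is formed from its 2D box and a preliminary depth estimate from the image branch, and a radar detection falling inside that region is associated with the object. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation; the function name, parameters, depth tolerance, and closest-point tie-break are assumptions for clarity, and the paper's method includes further refinements (e.g., handling the radar points' height ambiguity).

# Minimal sketch of frustum-based radar-to-object association (illustrative
# only; names and tolerances are assumptions, not the CenterFusion API).
import numpy as np

def associate_radar_to_object(box2d, depth_est, radar_points, intrinsics,
                              depth_margin=1.5):
    """Return one radar detection (x, y, z, vx, vy) associated with an object.

    box2d        : (x1, y1, x2, y2) 2D bounding box in pixels
    depth_est    : preliminary object depth from the image branch (meters)
    radar_points : (N, 5) array of radar detections in camera coordinates,
                   columns = (x, y, z, vx, vy)
    intrinsics   : 3x3 camera intrinsic matrix
    depth_margin : assumed half-length of the frustum along the viewing ray (m)
    """
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]

    # Project radar points onto the image plane.
    x, y, z = radar_points[:, 0], radar_points[:, 1], radar_points[:, 2]
    valid = z > 0
    u = fx * x / np.where(valid, z, 1.0) + cx
    v = fy * y / np.where(valid, z, 1.0) + cy

    # A point lies in the frustum if it projects inside the 2D box and its
    # depth is within the tolerance around the preliminary estimate.
    x1, y1, x2, y2 = box2d
    in_box = (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)
    in_depth = np.abs(z - depth_est) <= depth_margin
    mask = valid & in_box & in_depth
    if not mask.any():
        return None

    # Among candidates, keep the detection closest to the camera
    # (a simple tie-break assumed here for illustration).
    candidates = radar_points[mask]
    return candidates[np.argmin(candidates[:, 2])]

The associated detection would then be used to build radar feature maps that complement the image features for regressing depth, rotation, and velocity, as described in the abstract.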

Related Material


[bibtex]
@InProceedings{Nabati_2021_WACV,
    author    = {Nabati, Ramin and Qi, Hairong},
    title     = {CenterFusion: Center-Based Radar and Camera Fusion for 3D Object Detection},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2021},
    pages     = {1527-1536}
}