PointFusion: Deep Sensor Fusion for 3D Bounding Box Estimation

Danfei Xu, Dragomir Anguelov, Ashesh Jain; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 244-253

Abstract

We present PointFusion, a generic 3D object detection method that leverages both image and 3D point cloud information. Unlike existing methods that either use multi-stage pipelines or rely on sensor- and dataset-specific assumptions, PointFusion is conceptually simple and application-agnostic. The image data and the raw point cloud data are processed independently by a CNN and a PointNet architecture, respectively. The resulting outputs are then combined by a novel fusion network, which predicts multiple 3D box hypotheses and their confidences, using the input 3D points as spatial anchors. We evaluate PointFusion on two distinctive datasets: the KITTI dataset, which features driving scenes captured with a lidar-camera setup, and the SUN-RGBD dataset, which captures indoor environments with RGB-D cameras. Our model is the first to perform on par with or better than the state of the art on these diverse datasets without any dataset-specific model tuning.
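The anchor-based prediction described above can be sketched in a few lines. This is an illustrative NumPy mock-up, not the paper's code: the per-point corner offsets and confidence scores would come from the fusion network, and the function names and shapes here are assumptions. It shows the key idea that each input 3D point serves as a spatial anchor, with the final box taken from the highest-confidence point.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_box(points, corner_offsets, scores):
    """Pick the 3D box from the highest-confidence point anchor.

    points:         (N, 3) input lidar points acting as spatial anchors
    corner_offsets: (N, 8, 3) predicted offsets from each point to the 8 box corners
    scores:         (N,) predicted confidence per point
    """
    best = int(np.argmax(scores))
    # Each corner is anchored at the chosen input point: corner = point + offset
    return points[best][None, :] + corner_offsets[best]

# Toy stand-in for network output: 5 points with random offsets and scores
points = rng.normal(size=(5, 3))
offsets = rng.normal(size=(5, 8, 3))
scores = rng.random(5)
box = predict_box(points, offsets, scores)
print(box.shape)  # (8, 3): the 8 corners of the predicted 3D box
```

Predicting offsets relative to observed points, rather than absolute coordinates, keeps the regression targets small and well-conditioned regardless of the object's distance from the sensor.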

Related Material

BibTeX:
@InProceedings{Xu_2018_CVPR,
  author    = {Xu, Danfei and Anguelov, Dragomir and Jain, Ashesh},
  title     = {PointFusion: Deep Sensor Fusion for 3D Bounding Box Estimation},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2018},
  pages     = {244-253}
}