Lift-Attend-Splat: Bird's-eye-view Camera-lidar Fusion using Transformers
Abstract
Combining complementary sensor modalities is crucial to providing robust perception for safety-critical robotics applications such as autonomous driving (AD). Recent state-of-the-art camera-lidar fusion methods for AD rely on monocular depth estimation, which is a notoriously difficult task compared to using depth information from the lidar directly. Here we find that this approach does not leverage depth as expected and show that naively improving depth estimation does not lead to improvements in object detection performance. Strikingly, we also find that removing depth estimation altogether does not substantially degrade object detection performance, suggesting that relying on monocular depth could be an unnecessary architectural bottleneck during camera-lidar fusion. In this work, we introduce a novel fusion method that bypasses monocular depth estimation altogether and instead selects and fuses camera and lidar features in a bird's-eye-view grid using a simple attention mechanism. We show that our model can modulate its use of camera features based on the availability of lidar features and that it yields better 3D object detection on the nuScenes dataset than baselines relying on monocular depth estimation.
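The abstract describes fusing camera and lidar features in a bird's-eye-view grid with a simple attention mechanism rather than through monocular depth estimation. The following is a minimal PyTorch sketch of such a fusion step, assuming lidar BEV features act as attention queries over flattened camera features; the module names, tensor shapes, and the final concatenation are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class BEVAttentionFusion(nn.Module):
    # Minimal sketch of attention-based camera-lidar fusion in a BEV grid.
    # Lidar BEV features act as queries; flattened multi-camera image
    # features act as keys/values. All names and dimensions here are
    # illustrative assumptions, not the authors' implementation.
    def __init__(self, dim=128, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, lidar_bev, cam_feats):
        # lidar_bev: (B, H*W, C) flattened BEV grid from the lidar branch
        # cam_feats: (B, N, C)   flattened camera features from all views
        attended, _ = self.attn(query=lidar_bev, key=cam_feats, value=cam_feats)
        # Concatenating the attended camera features with the lidar BEV
        # features lets the network modulate how much it relies on the
        # camera depending on what the lidar provides at each BEV cell.
        return self.out(torch.cat([lidar_bev, attended], dim=-1))

# Toy usage with small dimensions.
fusion = BEVAttentionFusion(dim=128)
lidar_bev = torch.randn(2, 32 * 32, 128)      # 32x32 BEV grid
cam_feats = torch.randn(2, 6 * 16 * 16, 128)  # 6 cameras, 16x16 feature maps
print(fusion(lidar_bev, cam_feats).shape)     # torch.Size([2, 1024, 128])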
Related Material
[pdf] [supp]

@InProceedings{Gunn_2024_CVPR,
    author    = {Gunn, James and Lenyk, Zygmunt and Sharma, Anuj and Donati, Andrea and Buburuzan, Alexandru and Redford, John and Mueller, Romain},
    title     = {Lift-Attend-Splat: Bird's-eye-view Camera-lidar Fusion using Transformers},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {4526-4536}
}