Depth-aware CNN for RGB-D Segmentation
Weiyue Wang, Ulrich Neumann; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 135-150
Abstract
Convolutional neural networks (CNNs) are limited in their ability to handle geometric information due to their fixed grid kernel structure. The availability of depth data enables progress in RGB-D semantic segmentation with CNNs. State-of-the-art methods either use depth as an additional image or process spatial information in 3D volumes or point clouds; these approaches suffer from high computation and memory costs. To address these issues, we present Depth-aware CNN, which introduces two intuitive, flexible and effective operations: depth-aware convolution and depth-aware average pooling. By leveraging depth similarity between pixels during information propagation, geometry is seamlessly incorporated into the CNN. Both operators can be integrated into existing CNNs without introducing any additional parameters. Extensive experiments and ablation studies on challenging RGB-D semantic segmentation benchmarks validate the effectiveness and flexibility of our approach.
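To make the two operators concrete, the following is a minimal NumPy sketch of depth-aware convolution and depth-aware average pooling as described in the abstract: each neighbor's contribution within a window is reweighted by its depth similarity to the window centre. The exponential similarity term, the default alpha value, and the function names (depth_aware_conv2d, depth_aware_avg_pool2d) are illustrative assumptions, not the authors' reference implementation.

import numpy as np

def depth_similarity(d_center, d_neighbor, alpha=1.0):
    # Depth similarity: pixels with depth close to the centre get weight
    # near 1, larger depth differences are suppressed exponentially.
    # (alpha is an arbitrary illustrative value, not the paper's setting.)
    return np.exp(-alpha * np.abs(d_center - d_neighbor))

def depth_aware_conv2d(x, depth, kernel, alpha=1.0):
    # Single-channel depth-aware convolution (stride 1, no padding).
    # x      : (H, W) feature map
    # depth  : (H, W) depth map aligned with x
    # kernel : (k, k) learned weights
    k = kernel.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k]
            d_patch = depth[i:i + k, j:j + k]
            d_center = depth[i + k // 2, j + k // 2]
            # Weight each neighbor by both the kernel and its depth similarity.
            sim = depth_similarity(d_center, d_patch, alpha)
            out[i, j] = np.sum(kernel * sim * patch)
    return out

def depth_aware_avg_pool2d(x, depth, k=2, alpha=1.0):
    # Depth-aware average pooling over non-overlapping k x k windows:
    # a weighted average where neighbors with depth close to the window
    # centre dominate.
    H, W = x.shape
    out = np.zeros((H // k, W // k))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            d_patch = depth[i * k:(i + 1) * k, j * k:(j + 1) * k]
            sim = depth_similarity(d_patch[k // 2, k // 2], d_patch, alpha)
            out[i, j] = np.sum(sim * patch) / np.sum(sim)
    return out

# Toy usage: a 3x3 kernel and 2x2 pooling over a 5x5 feature map.
rng = np.random.default_rng(0)
feat = rng.random((5, 5))
dep = rng.random((5, 5))
w = rng.random((3, 3))
print(depth_aware_conv2d(feat, dep, w).shape)   # (3, 3)
print(depth_aware_avg_pool2d(feat, dep).shape)  # (2, 2)

Because the depth-similarity term only rescales existing kernel weights, neither operator adds learnable parameters, which is why the abstract notes they drop into existing CNN architectures unchanged.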
Related Material
[pdf]
[bibtex]
@InProceedings{Wang_2018_ECCV,
author = {Wang, Weiyue and Neumann, Ulrich},
title = {Depth-aware CNN for RGB-D Segmentation},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}