Cascaded Feature Network for Semantic Segmentation of RGB-D Images
Di Lin, Guangyong Chen, Daniel Cohen-Or, Pheng-Ann Heng, Hui Huang; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1311-1319
Abstract
Fully convolutional networks (FCNs) have been successfully applied to semantic segmentation of scenes represented with RGB images. Images augmented with a depth channel provide a better understanding of the geometry of the scene in the image. The question is how to best exploit this additional information to improve segmentation performance. In this paper, we present a neural network with multiple branches for segmenting RGB-D images. Our approach uses the available depth to split the image into layers with common visual characteristics of objects/scenes, or a common "scene-resolution". We introduce the context-aware receptive field (CaRF), which provides better control over the relevant contextual information of the learned features. Equipped with CaRF, each branch of the network semantically segments a similar scene-resolution, leading to a more focused domain that is easier to learn. Furthermore, our network is cascaded, with features from one branch augmenting the features of the adjacent branch. We show that such cascading of features enriches the contextual information of each branch and enhances the overall performance. The accuracy our network achieves outperforms state-of-the-art methods on two public datasets.
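The sketch below is a minimal illustration, not the authors' implementation, of the cascaded multi-branch idea described in the abstract: a shared FCN backbone produces features, each branch is restricted to one depth-derived layer of the scene, and later branches receive the previous branch's features as extra context. The module and variable names (CascadedBranches, depth_masks, branch_channels) are hypothetical, and the CaRF mechanism is omitted here.

```python
import torch
import torch.nn as nn

class CascadedBranches(nn.Module):
    """Hypothetical cascaded multi-branch head on top of FCN backbone features."""
    def __init__(self, in_channels, num_classes, num_branches=3, branch_channels=256):
        super().__init__()
        self.branches = nn.ModuleList()
        for b in range(num_branches):
            # The first branch sees only backbone features; later branches also
            # receive the previous branch's features (the cascade).
            extra = branch_channels if b > 0 else 0
            self.branches.append(nn.Sequential(
                nn.Conv2d(in_channels + extra, branch_channels, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
        self.classifier = nn.Conv2d(branch_channels * num_branches, num_classes, 1)

    def forward(self, feats, depth_masks):
        # depth_masks: one binary mask per depth layer, same spatial size as feats.
        prev, outs = None, []
        for b, branch in enumerate(self.branches):
            x = feats * depth_masks[b]           # restrict this branch to its depth layer
            if prev is not None:
                x = torch.cat([x, prev], dim=1)  # cascade features from the previous branch
            prev = branch(x)
            outs.append(prev)
        return self.classifier(torch.cat(outs, dim=1))

# Usage sketch: backbone features plus masks obtained by quantizing the depth map.
feats = torch.randn(1, 512, 60, 80)
masks = [torch.rand(1, 1, 60, 80).round() for _ in range(3)]
logits = CascadedBranches(512, num_classes=40)(feats, masks)
print(logits.shape)  # torch.Size([1, 40, 60, 80])
```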
Related Material
[pdf]
[bibtex]
@InProceedings{Lin_2017_ICCV,
author = {Lin, Di and Chen, Guangyong and Cohen-Or, Daniel and Heng, Pheng-Ann and Huang, Hui},
title = {Cascaded Feature Network for Semantic Segmentation of RGB-D Images},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}