Semantic Segmentation of RGBD Videos With Recurrent Fully Convolutional Neural Networks

Ekrem Emre Yurdakul, Yucel Yemez; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 367-374

Abstract


Semantic segmentation of videos using neural networks is currently a popular task; however, the work done in this field is mostly on RGB videos. The main reason for this is the lack of large RGBD video datasets annotated with ground truth information at the pixel level. In this work, we use a synthetic RGBD video dataset to investigate the contribution of depth and temporal information to the video segmentation task using convolutional and recurrent neural network architectures. Our experiments show that the addition of depth information improves semantic segmentation results and that exploiting temporal information yields higher-quality output segmentations.
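The two ideas the abstract names can be illustrated schematically: depth enters as an extra input channel alongside RGB, and temporal information enters through a recurrent state carried across frames. The following is a minimal NumPy sketch of that scheme, not the authors' architecture; the toy sizes, the 1x1 "convolution" weights, and the tanh recurrence are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, NUM_CLASSES = 8, 8, 5  # toy sizes, not taken from the paper

def segment_frame(frame_rgbd, hidden, w_in, w_h):
    """One recurrent step: per-pixel class scores from an RGBD frame
    plus the previous hidden state (a toy stand-in for a recurrent
    fully convolutional layer)."""
    # frame_rgbd: (H, W, 4) -- three RGB channels plus one depth channel
    scores = frame_rgbd @ w_in + hidden @ w_h  # 1x1 "convolutions" over channels
    hidden = np.tanh(scores)                   # temporal state passed to the next frame
    # softmax over classes at every pixel
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    return probs, hidden

# hypothetical weights and a short synthetic RGBD clip (3 frames)
w_in = 0.1 * rng.normal(size=(4, NUM_CLASSES))
w_h = 0.1 * rng.normal(size=(NUM_CLASSES, NUM_CLASSES))
video = rng.normal(size=(3, H, W, 4))

hidden = np.zeros((H, W, NUM_CLASSES))
for frame in video:
    probs, hidden = segment_frame(frame, hidden, w_in, w_h)

labels = probs.argmax(axis=-1)  # per-pixel class map for the last frame
```

Because the hidden state is a function of all previous frames, the per-pixel distributions for a frame depend on the frames before it, which is the mechanism by which temporal context can smooth the output segmentations.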

Related Material


[pdf]
[bibtex]
@InProceedings{Yurdakul_2017_ICCV,
author = {Yurdakul, Ekrem Emre and Yemez, Yucel},
title = {Semantic Segmentation of RGBD Videos With Recurrent Fully Convolutional Neural Networks},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}