NU-Net: Deep Residual Wide Field of View Convolutional Neural Network for Semantic Segmentation

Mohamed Samy, Karim Amer, Kareem Eissa, Mahmoud Shaker, Mohamed ElHelw; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2018, pp. 267-271

Abstract


Semantic segmentation of satellite images is one of the most challenging problems in computer vision, as it requires a model capable of capturing both local and global information at each pixel. Current state-of-the-art methods are based on fully convolutional neural networks (FCNNs) with two main components: an encoder, a model pretrained on classification that gradually reduces the input's spatial size, and a decoder that transforms the encoder's feature map into a predicted mask at the original size. We depart from this conventional architecture with a model that makes use of full-resolution information. NU-Net is a deep FCNN that captures a wide field of global information around each pixel while maintaining full-resolution features throughout the model. We evaluate our model on the Road Extraction and Land Cover Classification tracks of the DeepGlobe competition.
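A sketch of the trade-off the abstract describes: encoder-decoder FCNNs enlarge each pixel's field of view by downsampling, whereas a full-resolution network must grow it with stride-1 convolutions alone. The helper below (`receptive_field` is a hypothetical name, not from the paper) computes the receptive field of a stack of stride-1 convolutions, optionally dilated; the use of dilation here is an illustrative assumption about how a wide field of view can be reached without losing resolution, not a claim about NU-Net's actual layers.

```python
def receptive_field(layers):
    """Receptive field of stacked stride-1 convolutions.

    layers: iterable of (kernel_size, dilation) pairs.
    Each layer adds (kernel_size - 1) * dilation pixels of context.
    """
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# Ten plain 3x3 convolutions: field of view grows linearly.
print(receptive_field([(3, 1)] * 10))            # 21 pixels
# Five 3x3 convolutions with doubling dilation: exponential growth.
print(receptive_field([(3, 2 ** i) for i in range(5)]))  # 63 pixels
```

The contrast motivates the design question the paper addresses: plain stride-1 stacks need many layers to see global context, while downsampling gets it cheaply but discards the full-resolution detail that segmentation masks require.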

Related Material


[bibtex]
@InProceedings{Samy_2018_CVPR_Workshops,
author = {Samy, Mohamed and Amer, Karim and Eissa, Kareem and Shaker, Mahmoud and ElHelw, Mohamed},
title = {NU-Net: Deep Residual Wide Field of View Convolutional Neural Network for Semantic Segmentation},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2018}
}