Joint Learning From Earth Observation and OpenStreetMap Data to Get Faster Better Semantic Maps

Nicolas Audebert, Bertrand Le Saux, Sebastien Lefevre; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 67-75

Abstract


We investigate the use of OpenStreetMap (OSM) data for the semantic labeling of Earth Observation (EO) images. Deep neural networks have previously been applied to classify remote sensing data from various sensors, including multispectral, hyperspectral, radar, and lidar. OSM, however, is an abundant data source that has already served as ground truth, yet it is rarely exploited as an input information layer. We study several use cases and deep network architectures that leverage OSM data for the semantic labeling of aerial and satellite images. In particular, we investigate fusion-based architectures and coarse-to-fine segmentation to incorporate the OSM layer into multispectral-based deep fully convolutional networks. We demonstrate these methods on two public datasets, ISPRS Potsdam and DFC2017, and show that OSM data can be efficiently integrated into vision-based deep learning models, significantly improving both accuracy and convergence speed.
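To make the input-level fusion idea concrete, here is a minimal NumPy sketch (not the authors' implementation; shapes and layer semantics are illustrative assumptions): OSM vector data is rasterized into additional raster channels, which are then stacked with the multispectral bands to form the input tensor of a fully convolutional network.

```python
import numpy as np

# Multispectral input patch: 4 bands (e.g. IR, R, G plus an elevation-like
# channel), 64x64 pixels -- purely illustrative dimensions.
ms = np.random.rand(4, 64, 64)

# Rasterized OSM layers: one binary mask per object class of interest.
# Here we assume two layers (buildings, roads) drawn as toy shapes.
osm = np.zeros((2, 64, 64))
osm[0, 10:30, 10:30] = 1.0   # a building footprint from OSM polygons
osm[1, :, 40:44] = 1.0       # a road from OSM line geometries

# Early fusion: concatenate OSM rasters with the spectral bands along the
# channel axis; the resulting 6-channel stack is what the FCN consumes.
fused = np.concatenate([ms, osm], axis=0)
print(fused.shape)  # (6, 64, 64)
```

The same stacking applies unchanged at inference time, which is one reason this form of fusion is straightforward to add to an existing segmentation pipeline.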

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Audebert_2017_CVPR_Workshops,
author = {Audebert, Nicolas and Le Saux, Bertrand and Lefevre, Sebastien},
title = {Joint Learning From Earth Observation and OpenStreetMap Data to Get Faster Better Semantic Maps},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}