Semantically-Aware Aerial Reconstruction From Multi-Modal Data

Randi Cabezas, Julian Straub, John W. Fisher III; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 2156-2164

Abstract


We consider a methodology for integrating multiple sensors along with semantic information to enhance scene representations. We propose a probabilistic generative model for inferring semantically-informed aerial reconstructions from multi-modal data within a consistent mathematical framework. The approach, called Semantically-Aware Aerial Reconstruction (SAAR), not only exploits inferred scene geometry, appearance, and semantic observations to obtain a meaningful categorization of the data, but also extends previously proposed methods by imposing structure on the prior over geometry, appearance, and semantic labels. This leads to more accurate reconstructions and the ability to fill in missing contextual labels via joint sensor and semantic information. We introduce a new multi-modal synthetic dataset in order to provide quantitative performance analysis. Additionally, we apply the model to real-world data and exploit OpenStreetMap as a source of semantic observations. We show quantitative improvements in reconstruction accuracy of large-scale urban scenes from the combination of LiDAR, aerial photography, and semantic data. Furthermore, we demonstrate the model's ability to fill in for missing sensed data, leading to more interpretable reconstructions.
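As a rough illustration only (this is not the paper's actual formulation or notation), the kind of joint model described in the abstract can be sketched as a factored posterior over geometry G, appearance A, and semantic labels S, conditioned on LiDAR returns Z_L, image observations Z_I, and semantic observations Z_S such as OpenStreetMap annotations; all symbols here are assumed placeholders:

    p(G, A, S | Z_L, Z_I, Z_S)  ∝  p(Z_L | G) · p(Z_I | G, A) · p(Z_S | S) · p(G, A, S)

Under such a sketch, the structured joint prior p(G, A, S) couples the three quantities, so inferring them jointly lets a missing semantic label be filled in from the LiDAR and image evidence, and conversely lets semantic observations sharpen the geometric and appearance estimates.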

Related Material


[bibtex]
@InProceedings{Cabezas_2015_ICCV,
author = {Cabezas, Randi and Straub, Julian and Fisher, III, John W.},
title = {Semantically-Aware Aerial Reconstruction From Multi-Modal Data},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}