Cutting Edge: Soft Correspondences in Multimodal Scene Parsing
Sarah Taghavi Namin, Mohammad Najafi, Mathieu Salzmann, Lars Petersson; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1188-1196
Abstract
Exploiting multiple modalities for semantic scene parsing has been shown to improve accuracy over the single-modality scenario. Existing methods, however, assume that corresponding regions in the two modalities have the same label. In this paper, we address the problem of data misalignment and label inconsistencies in semantic labeling, e.g., due to moving objects, which violate this assumption. To this end, we formulate multimodal semantic labeling as inference in a CRF and introduce latent nodes that explicitly model inconsistencies between the two domains. These latent nodes allow us not only to leverage information from both domains to improve their labeling, but also to cut the edges between inconsistent regions. To eliminate the need for hand-tuning the parameters of our model, we propose to learn the intra-domain and inter-domain potential functions from training data. We demonstrate the benefits of our approach on two publicly available datasets containing 2D imagery and 3D point clouds. Thanks to our latent nodes and our learning strategy, our method outperforms the state of the art in both cases.
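The abstract does not spell out the model's energy; purely as an illustrative sketch (the notation below, including the potentials \phi, \psi, the latent variables h_{ij}, and the latent prior \mu, is assumed here and is not taken from the paper), a CRF of this kind could combine intra-domain and inter-domain terms as

E(x^{2D}, x^{3D}, h) = \sum_{i} \phi^{2D}(x^{2D}_i) + \sum_{j} \phi^{3D}(x^{3D}_j)
  + \sum_{(i,i') \in \mathcal{E}^{2D}} \psi^{2D}(x^{2D}_i, x^{2D}_{i'})
  + \sum_{(j,j') \in \mathcal{E}^{3D}} \psi^{3D}(x^{3D}_j, x^{3D}_{j'})
  + \sum_{(i,j) \in \mathcal{E}^{\times}} \left[ h_{ij}\, \psi^{\times}(x^{2D}_i, x^{3D}_j) + \mu(h_{ij}) \right],

where x^{2D} and x^{3D} are the labels of the 2D image regions and 3D point-cloud segments, and h_{ij} \in \{0, 1\} is a latent variable on each inter-domain edge. In this sketch, setting h_{ij} = 0 effectively cuts the edge between an image region i and a 3D segment j that the model deems inconsistent (e.g., a moving object visible in only one modality), while the intra-domain and inter-domain potentials are learned from training data rather than hand-tuned, matching the abstract's description.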
Related Material
[pdf] [bibtex]
@InProceedings{Namin_2015_ICCV,
author = {Namin, Sarah Taghavi and Najafi, Mohammad and Salzmann, Mathieu and Petersson, Lars},
title = {Cutting Edge: Soft Correspondences in Multimodal Scene Parsing},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015},
pages = {1188-1196}
}