Curriculum Domain Adaptation for Semantic Segmentation of Urban Scenes

Yang Zhang, Philip David, Boqing Gong; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2020-2030

Abstract


During the last half decade, convolutional neural networks (CNNs) have achieved remarkable success in semantic segmentation, a core task of various emerging industrial applications such as autonomous driving and medical imaging. However, training CNNs requires a huge amount of data, which is difficult to collect and laborious to annotate. Recent advances in computer graphics make it possible to train CNN models on photo-realistic synthetic data with computer-generated annotations. Despite this, the domain mismatch between the real images and the synthetic data significantly decreases the models' performance. Hence, we propose a curriculum-style learning approach to minimize the domain gap in semantic segmentation. The curriculum domain adaptation solves easy tasks first in order to infer necessary properties about the target domain; in particular, the first task is to learn global label distributions over images and local distributions over landmark superpixels. These are easy to estimate because images of urban traffic scenes have strong idiosyncrasies (e.g., in the size and spatial relations of buildings, streets, cars, etc.). We then train the segmentation network so that its predictions in the target domain follow those inferred properties. In experiments, our method significantly outperforms the baselines as well as the only other known approach to the same problem.
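
To make the distribution-matching idea above concrete, the following is a minimal, hypothetical PyTorch-style sketch, not the authors' released code: it averages per-pixel class probabilities into an image-level label distribution and penalizes its divergence from an inferred target-domain distribution. The class count, the random logits, and the uniform "inferred" distribution are placeholders for illustration only.

import torch
import torch.nn.functional as F

def global_label_distribution(logits):
    # logits: (B, C, H, W) raw segmentation scores.
    # Average per-pixel class probabilities over all pixels to obtain
    # an image-level label distribution (fraction of pixels per class).
    probs = F.softmax(logits, dim=1)
    return probs.mean(dim=(2, 3))  # shape (B, C)

def distribution_matching_loss(pred_dist, target_dist, eps=1e-8):
    # Cross-entropy between the inferred target-domain distribution and
    # the predicted one; it pushes target-domain predictions to respect
    # the easier image-level estimate.
    return -(target_dist * torch.log(pred_dist + eps)).sum(dim=1).mean()

# Toy usage with 19 Cityscapes-style classes and a placeholder estimate.
B, C, H, W = 2, 19, 64, 128
logits = torch.randn(B, C, H, W, requires_grad=True)
inferred = torch.full((B, C), 1.0 / C)  # stand-in for the inferred distribution
loss = distribution_matching_loss(global_label_distribution(logits), inferred)
loss.backward()

In practice such a term would be added, with a weighting factor, to the usual supervised segmentation loss computed on the labeled synthetic source data; an analogous term can be formed over landmark superpixels instead of whole images.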

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Zhang_2017_ICCV,
author = {Zhang, Yang and David, Philip and Gong, Boqing},
title = {Curriculum Domain Adaptation for Semantic Segmentation of Urban Scenes},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}