Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training
Yang Zou, Zhiding Yu, B.V.K. Vijaya Kumar, Jinsong Wang; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 289-305
Abstract
Recent deep networks have achieved state-of-the-art performance on a variety of semantic segmentation tasks. Despite such progress, these models often face challenges in real-world "wild tasks" where a large difference exists between the labeled training/source data and the unseen test/target data. This difference is often referred to as the "domain gap", and it can cause significantly degraded performance that cannot be easily remedied by further increasing the representation power. Unsupervised domain adaptation (UDA) seeks to overcome this problem without target domain labels. In this paper, we propose a novel UDA framework based on an iterative self-training (ST) procedure, where the problem is formulated as latent variable loss minimization and can be solved by alternately generating pseudo labels on target data and re-training the model with these labels. On top of ST, we also propose a novel class-balanced self-training (CBST) framework to avoid the gradual dominance of large classes in pseudo-label generation, and introduce spatial priors to refine the generated labels. Comprehensive experiments show that the proposed methods achieve state-of-the-art semantic segmentation performance under multiple major UDA settings.
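The abstract describes selecting pseudo labels with per-class confidence criteria so that large classes do not dominate re-training. Below is a minimal illustrative sketch of that idea, not the authors' released implementation: it assumes softmax probability maps predicted on target images and a hypothetical `portion` parameter controlling the fraction of most-confident pixels kept per class; the paper instead derives per-class thresholds from confidence ranking and gradually increases the selected portion across self-training rounds.

```python
import numpy as np

def class_balanced_pseudo_labels(prob_maps, portion=0.2, ignore_label=255):
    """Select pseudo labels with per-class confidence thresholds.

    prob_maps: float array of shape (N, C, H, W) holding softmax probabilities
               predicted on unlabeled target images.
    portion:   fraction of most-confident pixels to keep for each class, so that
               no single large class dominates the selected pseudo labels.
    Returns an int array of shape (N, H, W); unselected pixels are set to
    `ignore_label` and excluded from the re-training loss.
    """
    num_classes = prob_maps.shape[1]
    hard_labels = prob_maps.argmax(axis=1)   # (N, H, W) argmax class predictions
    confidences = prob_maps.max(axis=1)      # (N, H, W) max class probability

    pseudo_labels = np.full(hard_labels.shape, ignore_label, dtype=np.int64)
    for c in range(num_classes):
        mask = hard_labels == c
        if not mask.any():
            continue
        conf_c = confidences[mask]
        # Per-class threshold: keep only the top `portion` most confident pixels
        # predicted as class c, rather than using one global confidence threshold.
        k = max(1, int(round(portion * conf_c.size)))
        thresh = np.sort(conf_c)[-k]
        pseudo_labels[mask & (confidences >= thresh)] = c
    return pseudo_labels
```

In an iterative self-training loop, one would generate such pseudo labels with the current model, re-train on source labels plus the selected target pixels, and repeat; the class-wise selection is what distinguishes CBST from plain confidence-thresholded ST in the paper's framing.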
Related Material
@InProceedings{Zou_2018_ECCV,
author = {Zou, Yang and Yu, Zhiding and Kumar, B.V.K. Vijaya and Wang, Jinsong},
title = {Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}