Penalizing Top Performers: Conservative Loss for Semantic Segmentation Adaptation
Xinge Zhu, Hui Zhou, Ceyuan Yang, Jianping Shi, Dahua Lin; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 568-583
Abstract
Due to the expensive and time-consuming annotations (e.g., segmentation) for real-world images, recent works in computer vision resort to synthetic data. However, performance on real images often drops significantly because of the domain shift between the synthetic data and the real images. In this setting, domain adaptation is an appealing option. Effective approaches to domain adaptation shape representations that (1) are discriminative for the main task and (2) generalize well under domain shift. To this end, we propose a novel loss function, i.e., Conservative Loss, which penalizes the extremely good and bad cases while encouraging the moderate examples. More specifically, it enables the network to learn features that are discriminative via gradient descent and invariant to the change of domains via a gradient ascent method. Extensive experiments on synthetic-to-real segmentation adaptation show that our proposed method achieves state-of-the-art results. Ablation studies give more insight into the properties of the Conservative Loss. Additional exploratory experiments and discussion demonstrate that our Conservative Loss scales well and is not restricted to one exact form.
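The key idea above, penalizing both extremes while favoring moderate predictions, can be illustrated with a toy loss on the predicted probability p of the true class. The form below, -log(p(1-p)), is a hypothetical sketch chosen only because it has the described shape (minimized at moderate p, growing without bound as p approaches 0 or 1); it is not necessarily the paper's exact formulation.

```python
import math

def conservative_loss(p):
    """Toy 'conservative' loss on the true-class probability p in (0, 1).

    -log(p * (1 - p)) is minimized at p = 0.5 and diverges as p -> 0 or 1,
    so both extremely confident (good) and extremely wrong (bad) predictions
    are penalized relative to moderate ones. NOTE: an illustrative form, not
    the paper's exact Conservative Loss.
    """
    return -math.log(p * (1 - p))

def grad(p, eps=1e-6):
    """Finite-difference gradient of the loss w.r.t. p."""
    return (conservative_loss(p + eps) - conservative_loss(p - eps)) / (2 * eps)

# Moderate predictions incur the smallest loss:
assert conservative_loss(0.5) < conservative_loss(0.99)  # top performer penalized
assert conservative_loss(0.5) < conservative_loss(0.01)  # bad case penalized

# The gradient sign flips at p = 0.5: below it, minimizing the loss pushes p
# up (ordinary descent toward discriminative features); above it, minimization
# pushes p back down, which acts like gradient ascent on overconfident cases.
assert grad(0.3) < 0 and grad(0.7) > 0
```

The sign flip of the gradient is what realizes the "descent for moderate examples, ascent for top performers" behavior described in the abstract: a single loss surface yields both effects without an explicit adversarial switch.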
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Zhu_2018_ECCV,
author = {Zhu, Xinge and Zhou, Hui and Yang, Ceyuan and Shi, Jianping and Lin, Dahua},
title = {Penalizing Top Performers: Conservative Loss for Semantic Segmentation Adaptation},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}