Adaptive Affinity Fields for Semantic Segmentation

Tsung-Wei Ke, Jyh-Jing Hwang, Ziwei Liu, Stella X. Yu; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 587-602

Abstract


Existing semantic segmentation methods mostly rely on per-pixel supervision, which cannot capture the structural regularity present in natural images. Instead of learning to enforce semantic labels on individual pixels, we propose to enforce affinity field patterns in individual pixel neighbourhoods, i.e., the semantic label patterns of whether neighbouring pixels belong to the same segment should match between the prediction and the ground truth. The affinity fields thus characterize the intrinsic geometric relationships inside a given scene, such as "motorcycles have round wheels". We further develop a novel method for learning the optimal neighbourhood size for each semantic category, with an adversarial loss that optimizes over worst-case scenarios. Unlike the popular Conditional Random Field approaches, our adaptive affinity field (AAF) method has no extra parameters during inference and is also less sensitive to changes in input appearance. Extensive evaluations on the Cityscapes, PASCAL VOC 2012, and GTA5 datasets demonstrate that AAF provides an effective, efficient, and robust solution for semantic segmentation.
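To make the core idea concrete, here is a minimal NumPy sketch of a pairwise affinity-matching loss: for each pixel and each neighbour at a fixed offset, the ground-truth affinity is 1 if the two pixels share a label, and the predicted affinity is the probability (under the softmax output) that they share a label; the two are compared with binary cross-entropy. This is an illustrative simplification, not the authors' implementation — the function name, the fixed offset set, and the dot-product affinity are assumptions for exposition, and the paper's adaptive, per-category neighbourhood sizing is omitted.

```python
import numpy as np

def affinity_loss(probs, labels, offsets=((0, 1), (1, 0), (1, 1), (1, -1))):
    """Sketch of an affinity-field loss (hypothetical, simplified).

    probs:  (H, W, C) softmax class probabilities per pixel.
    labels: (H, W) integer ground-truth labels.
    offsets: neighbour displacements defining the affinity field.
    """
    eps = 1e-8
    H, W, _ = probs.shape
    losses = []
    for dy, dx in offsets:
        # Overlapping crops: pixel p and its neighbour q at offset (dy, dx).
        y0, y1 = max(dy, 0), H + min(dy, 0)
        x0, x1 = max(dx, 0), W + min(dx, 0)
        p = probs[y0:y1, x0:x1]
        q = probs[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
        # Ground-truth affinity: 1 iff the two pixels are in the same segment.
        gt = (labels[y0:y1, x0:x1] ==
              labels[y0 - dy:y1 - dy, x0 - dx:x1 - dx]).astype(float)
        # Predicted affinity: probability the two pixels share a label
        # (dot product of their class distributions), clipped for log safety.
        aff = np.clip((p * q).sum(axis=-1), eps, 1 - eps)
        # Binary cross-entropy between predicted and ground-truth affinity.
        bce = -(gt * np.log(aff) + (1 - gt) * np.log(1 - aff))
        losses.append(bce.mean())
    return float(np.mean(losses))
```

A prediction that matches the ground-truth segment structure yields a near-zero loss, while a structure-agnostic prediction (e.g., uniform class probabilities) is penalized at every neighbour pair, which is what pushes the network toward label patterns that respect segment boundaries.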

Related Material


[bibtex]
@InProceedings{Ke_2018_ECCV,
author = {Ke, Tsung-Wei and Hwang, Jyh-Jing and Liu, Ziwei and Yu, Stella X.},
title = {Adaptive Affinity Fields for Semantic Segmentation},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}