Training Deep Networks to Be Spatially Sensitive
Nicholas Kolkin, Eli Shechtman, Gregory Shakhnarovich; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5668-5677
Abstract
In many computer vision tasks, for example saliency prediction or semantic segmentation, the desired output is a foreground map that predicts the pixels where some criterion is satisfied. Despite the inherently spatial nature of this task, commonly used learning objectives do not incorporate the spatial relationships between misclassified pixels and the underlying ground truth. The Weighted F-measure, a recently proposed evaluation metric, does reweight errors spatially; it has been shown to correlate closely with human judgments of quality and to rank predictions stably even against noisy ground truths (such as a sloppy human annotator might generate). However, its computational cost makes it intractable as an optimization objective for gradient descent, where it would have to be evaluated thousands or millions of times while learning a model's parameters. We propose a differentiable and efficient approximation of this metric. By incorporating spatial information into the objective, we can use a simpler model than competing methods without sacrificing accuracy, resulting in faster inference and alleviating the need for pre- and post-processing. We match or improve on prior state of the art across several tasks under traditional metrics, and in many cases significantly improve performance under the Weighted F-measure.
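For intuition, the sketch below shows a soft (differentiable) F-measure loss in PyTorch: the standard relaxation that substitutes predicted probabilities for hard pixel classifications so an F-like metric can be optimized by gradient descent. It is illustrative only and omits the spatial reweighting of errors that distinguishes the paper's approximation of the Weighted F-measure; the function name soft_fmeasure_loss, the beta2 default, and the tensor shapes are assumptions, not taken from the paper.

import torch

def soft_fmeasure_loss(pred, target, beta2=0.3, eps=1e-8):
    # Soft (differentiable) F-measure: hard pixel decisions are replaced
    # by predicted probabilities so the objective admits gradients.
    # NOTE: this is a hypothetical sketch, not the paper's spatially
    # weighted approximation.
    # pred:   (N, H, W) foreground probabilities in [0, 1]
    # target: (N, H, W) binary ground-truth masks
    # beta2:  beta^2 trade-off between precision and recall
    tp = (pred * target).sum(dim=(1, 2))           # soft true positives
    precision = tp / (pred.sum(dim=(1, 2)) + eps)
    recall = tp / (target.sum(dim=(1, 2)) + eps)
    f = (1 + beta2) * precision * recall / (beta2 * precision + recall + eps)
    return (1 - f).mean()                          # minimize 1 - F

# Usage: gradients flow back through the predicted probabilities.
logits = torch.randn(2, 64, 64, requires_grad=True)
mask = (torch.rand(2, 64, 64) > 0.5).float()
loss = soft_fmeasure_loss(torch.sigmoid(logits), mask)
loss.backward()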
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Kolkin_2017_ICCV,
author = {Kolkin, Nicholas and Shechtman, Eli and Shakhnarovich, Gregory},
title = {Training Deep Networks to Be Spatially Sensitive},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}