Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation

Suyog Dutt Jain, Kristen Grauman; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 1313-1320

Abstract

The mode of manual annotation used in an interactive segmentation algorithm affects both its accuracy and ease of use. For example, bounding boxes are fast to supply, yet may be too coarse to get good results on difficult images; freehand outlines are slower to supply and more specific, yet they may be overkill for simple images. Whereas existing methods assume a fixed form of input no matter the image, we propose to predict the tradeoff between accuracy and effort. Our approach learns whether a graph cuts segmentation will succeed if initialized with a given annotation mode, based on the image's visual separability and foreground uncertainty. Using these predictions, we optimize the mode of input requested on new images a user wants segmented. Whether given a single image that should be segmented as quickly as possible, or a batch of images that must be segmented within a specified time budget, we show how to select the easiest modality that will be sufficiently strong to yield high-quality segmentations. Extensive results with real users and three datasets demonstrate the impact.
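The selection problem sketched in the abstract can be illustrated with a small hedged example. The mode names, per-mode annotation costs, success threshold, and the dynamic-programming formulation below are all illustrative assumptions for exposition, not the paper's actual method or numbers; the paper's own optimization may differ.

```python
# Illustrative sketch (not the paper's implementation).
# Assumed annotation modes and per-mode time costs in seconds.
MODE_COST = {"bounding_box": 7, "sloppy_contour": 20, "tight_polygon": 54}

def easiest_sufficient_mode(pred_quality, threshold=0.9):
    """Single image: return the cheapest mode whose predicted
    segmentation quality meets the threshold; fall back to the
    strongest (most expensive) mode if none suffices."""
    for mode in sorted(MODE_COST, key=MODE_COST.get):
        if pred_quality[mode] >= threshold:
            return mode
    return max(MODE_COST, key=MODE_COST.get)

def allocate_budget(images, budget):
    """Batch: choose one mode per image to maximize total predicted
    quality subject to a total annotation-time budget (a
    multiple-choice knapsack, solved here by simple DP over cost)."""
    dp = {0: (0.0, [])}  # spent_cost -> (best total quality, mode choices)
    for preds in images:
        new_dp = {}
        for cost, (quality, choices) in dp.items():
            for mode, mode_cost in MODE_COST.items():
                new_cost = cost + mode_cost
                if new_cost > budget:
                    continue
                new_quality = quality + preds[mode]
                if new_cost not in new_dp or new_quality > new_dp[new_cost][0]:
                    new_dp[new_cost] = (new_quality, choices + [mode])
        dp = new_dp
        if not dp:
            return None  # budget too small even for the cheapest modes
    return max(dp.values(), key=lambda t: t[0])[1]
```

For instance, an "easy" image whose bounding-box initialization is predicted to succeed gets the cheap mode, while a "hard" image is routed to a stronger annotation, and the batch allocator spends the remaining budget where it buys the most predicted quality.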

Related Material

[pdf]
[bibtex]
@InProceedings{Jain_2013_ICCV,
author = {Jain, Suyog Dutt and Grauman, Kristen},
title = {Predicting Sufficient Annotation Strength for Interactive Foreground Segmentation},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}