MultiSeg: Semantically Meaningful, Scale-Diverse Segmentations From Minimal User Input

Jun Hao Liew, Scott Cohen, Brian Price, Long Mai, Sim-Heng Ong, Jiashi Feng; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 662-670

Abstract


Existing deep learning-based interactive image segmentation approaches typically assume that the target of interest is a single object and fail to account for the potential diversity in user expectations, thus requiring excessive user input when the user instead intends to segment an object part or a group of objects. Motivated by the observation that an object part, a full object, and a collection of objects essentially differ in size, we propose a new concept called scale-diversity, which characterizes the spectrum of segmentations w.r.t. different scales. To capture this diversity, we present MultiSeg, a scale-diverse interactive image segmentation network that incorporates a set of two-dimensional scale priors into the model to generate a set of scale-varying proposals that conform to the user input. We explicitly encourage segmentation diversity during training by synthesizing diverse training samples for a given image. As a result, our method allows the user to quickly locate the closest segmentation target for further refinement if necessary. Despite its simplicity, experimental results demonstrate that our proposed model is capable of quickly producing diverse yet plausible segmentation outputs, reducing the user interaction required, especially in cases where many types of segmentations (object parts or groups) are expected.

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Liew_2019_ICCV,
  author    = {Liew, Jun Hao and Cohen, Scott and Price, Brian and Mai, Long and Ong, Sim-Heng and Feng, Jiashi},
  title     = {MultiSeg: Semantically Meaningful, Scale-Diverse Segmentations From Minimal User Input},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2019}
}