Deep Extreme Cut: From Extreme Points to Object Segmentation

Kevis-Kokitsi Maninis, Sergi Caelles, Jordi Pont-Tuset, Luc Van Gool; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 616-625

Abstract

This paper explores the use of the extreme points of an object (left-most, right-most, top, and bottom pixels) as input to obtain precise object segmentation for images and videos. We do so by adding an extra channel to the image at the input of a convolutional neural network (CNN), which contains a Gaussian centered on each of the extreme points. The CNN learns to transform this information into a segmentation of an object that matches those extreme points. We demonstrate the usefulness of this approach for guided segmentation (grabcut-style), interactive segmentation, video object segmentation, and dense segmentation annotation. We show that we obtain the most precise results to date, and with less user input, on an extensive and varied selection of benchmarks and datasets. All our models and code are publicly available at http://www.vision.ee.ethz.ch/~cvlsegmentation/dextr.
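The extra input channel described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' released code: the helper name, the Gaussian width `sigma`, and the choice of taking the pixel-wise maximum over the four Gaussians are all assumptions made here for clarity.

```python
import numpy as np

def extreme_points_heatmap(shape, points, sigma=10.0):
    """Build a single-channel heatmap with a 2D Gaussian centered on each
    extreme point. Hypothetical helper; sigma is an assumed value."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    heatmap = np.zeros((h, w), dtype=np.float32)
    for (x, y) in points:
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)  # keep the strongest Gaussian per pixel
    return heatmap

# Example: left-most, right-most, top, bottom clicks on a 64x64 image
image = np.random.rand(64, 64, 3).astype(np.float32)
points = [(5, 30), (60, 28), (32, 4), (33, 58)]  # (x, y) pairs
extra = extreme_points_heatmap(image.shape[:2], points)

# Stack the heatmap as a fourth channel to form the CNN input
net_input = np.concatenate([image, extra[..., None]], axis=-1)
print(net_input.shape)  # (64, 64, 4)
```

The network then receives this 4-channel tensor instead of the plain RGB image, so the extreme-point guidance is available at every layer from the start.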

Related Material

[bibtex]
@InProceedings{Maninis_2018_CVPR,
author = {Maninis, Kevis-Kokitsi and Caelles, Sergi and Pont-Tuset, Jordi and Van Gool, Luc},
title = {Deep Extreme Cut: From Extreme Points to Object Segmentation},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}