BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation

Jifeng Dai, Kaiming He, Jian Sun; The IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1635-1643

Abstract


Recent leading approaches to semantic segmentation rely on deep convolutional networks trained with human-annotated, pixel-level segmentation masks. Such pixel-accurate supervision demands expensive labeling effort and limits the performance of deep networks that usually benefit from more training data. In this paper, we propose a method that achieves competitive accuracy but only requires easily obtained bounding box annotations. The basic idea is to iterate between automatically generating region proposals and training convolutional networks. These two steps gradually recover segmentation masks for improving the networks, and vice versa. Our method, called "BoxSup", produces competitive results (e.g., 62.0% mAP for validation) supervised by boxes only, on par with strong baselines (e.g., 63.8% mAP) fully supervised by masks under the same setting. By leveraging a large amount of bounding boxes, BoxSup further yields state-of-the-art results on PASCAL VOC 2012 and PASCAL-CONTEXT.
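The alternation described above can be sketched as follows. This is a conceptual illustration only, not the authors' implementation: `candidates` stands in for region proposals generated inside a bounding box (the paper uses unsupervised proposal methods), and the "mask update" step shown here greedily picks the candidate segment that best agrees with the network's current prediction. Masks are represented as sets of pixel coordinates for simplicity.

```python
def iou(a, b):
    """Intersection-over-union of two binary masks (as sets of pixel coords)."""
    inter = len(a & b)
    union = len(a | b)
    return inter / union if union else 0.0

def update_mask(net_prediction, candidates):
    """One 'mask update' step of the iteration (greedy variant, illustrative):
    choose the candidate segment most consistent with the network's current
    prediction; the chosen segments then serve as pseudo ground truth for
    the next round of network training."""
    return max(candidates, key=lambda m: iou(m, net_prediction))

# Toy example: the network currently predicts three foreground pixels.
prediction = {(0, 0), (0, 1), (1, 0)}
candidate_a = {(0, 0), (0, 1)}   # overlaps the prediction well
candidate_b = {(2, 2)}           # no overlap
best = update_mask(prediction, [candidate_a, candidate_b])
```

In the full method, this selection and the network training alternate: better masks yield a better network, whose predictions in turn select better masks.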

Related Material


BibTeX:
@InProceedings{Dai_2015_ICCV,
author = {Dai, Jifeng and He, Kaiming and Sun, Jian},
title = {BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}