Joint Learning of Saliency Detection and Weakly Supervised Semantic Segmentation

Yu Zeng, Yunzhi Zhuge, Huchuan Lu, Lihe Zhang; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 7223-7233

Abstract


Existing weakly supervised semantic segmentation (WSSS) methods usually utilize the results of pre-trained saliency detection (SD) models without explicitly modelling the connections between the two tasks, which is not the most efficient configuration. Here we propose a unified multi-task learning framework to jointly solve WSSS and SD using a single network, i.e. saliency and segmentation network (SSNet). SSNet consists of a segmentation network (SN) and a saliency aggregation module (SAM). For an input image, SN generates the segmentation result, and SAM predicts the saliency of each category and aggregates the segmentation masks of all categories into a saliency map. The proposed network is trained end-to-end with image-level category labels and class-agnostic pixel-level saliency labels. Experiments on the PASCAL VOC 2012 segmentation dataset and four saliency benchmark datasets show that our method performs favorably against state-of-the-art weakly supervised segmentation methods and fully supervised saliency detection methods.
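The aggregation step described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes SAM's output is one saliency score per category, and that the saliency map is formed as a saliency-weighted sum of the per-category segmentation masks. The function name `aggregate_saliency` and the array shapes are hypothetical.

```python
import numpy as np

def aggregate_saliency(seg_masks: np.ndarray, class_saliency: np.ndarray) -> np.ndarray:
    """Combine per-category segmentation masks into a single saliency map.

    seg_masks:      (C, H, W) per-category segmentation probabilities
    class_saliency: (C,)      predicted saliency score of each category

    Returns an (H, W) saliency map as the saliency-weighted sum of masks
    (hypothetical aggregation rule for illustration).
    """
    return np.einsum('c,chw->hw', class_saliency, seg_masks)

# Toy example: two categories on a 2x2 image; only category 0 is salient.
masks = np.zeros((2, 2, 2))
masks[0] = 1.0                      # category 0 covers the whole image
scores = np.array([0.5, 0.0])       # category 0 salient, category 1 not
sal_map = aggregate_saliency(masks, scores)
```

Because the weights come from image-level category predictions while the masks are pixel-level, this kind of aggregation is what lets class-agnostic saliency labels supervise the class-wise segmentation branch.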

Related Material


@InProceedings{Zeng_2019_ICCV,
author = {Zeng, Yu and Zhuge, Yunzhi and Lu, Huchuan and Zhang, Lihe},
title = {Joint Learning of Saliency Detection and Weakly Supervised Semantic Segmentation},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}