Employing Deep Part-Object Relationships for Salient Object Detection

Yi Liu, Qiang Zhang, Dingwen Zhang, Jungong Han; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 1232-1241

Abstract


Although methods based on Convolutional Neural Networks (CNNs) have been successful in detecting salient objects, their underlying mechanism, which decides the saliency of each image part separately, cannot avoid inconsistency among parts of the same salient object. This ultimately results in an incomplete shape for the detected salient object. To solve this problem, we dig into part-object relationships and make the first attempt to employ the relationships learned by the Capsule Network (CapsNet) for salient object detection. The entire salient object detection system is built directly on a Two-Stream Part-Object Assignment Network (TSPOANet) consisting of three algorithmic steps. In the first step, the learned deep feature maps of the input image are transformed into a group of primary capsules. In the second step, we feed the primary capsules into two identical streams, within each of which low-level capsules (parts) are assigned to their corresponding high-level capsules (objects) via locally connected routing. In the final step, the two streams are integrated in the form of a fully connected layer, where the relevant parts can be clustered together to form a complete salient object. Experimental results demonstrate the superiority of the proposed salient object detection network over state-of-the-art methods.
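To make the part-to-object assignment in the second step concrete, the sketch below shows generic CapsNet routing-by-agreement in NumPy: each low-level (part) capsule emits a prediction vector for every high-level (object) capsule, and coupling coefficients are iteratively sharpened toward the objects whose outputs agree with those predictions. This is a minimal illustration of the standard dynamic-routing idea the paper builds on, not the paper's actual locally connected routing; the function and variable names (`squash`, `route_parts_to_objects`, `u_hat`) are illustrative assumptions.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # CapsNet squashing nonlinearity: shrinks a vector's length into [0, 1)
    # while keeping its direction, so length can encode entity presence.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def route_parts_to_objects(u_hat, num_iters=3):
    # Routing-by-agreement between part (low-level) and object (high-level)
    # capsules. u_hat: prediction vectors, shape [num_parts, num_objects, dim].
    num_parts, num_objects, _ = u_hat.shape
    b = np.zeros((num_parts, num_objects))          # routing logits
    for _ in range(num_iters):
        # coupling coefficients: each part distributes itself over objects
        c = np.exp(b - b.max(axis=1, keepdims=True))
        c /= c.sum(axis=1, keepdims=True)
        s = np.sum(c[..., None] * u_hat, axis=0)    # weighted votes per object
        v = squash(s)                               # object capsule outputs
        # agreement term: parts whose predictions align with an object's
        # output get routed more strongly to that object on the next pass
        b += np.sum(u_hat * v[None, :, :], axis=-1)
    return v, c

# toy example: 6 part capsules vote for 2 object capsules with 8-D poses
rng = np.random.default_rng(0)
u_hat = rng.normal(size=(6, 2, 8))
v, c = route_parts_to_objects(u_hat)
```

After routing, parts whose prediction vectors agree cluster onto the same object capsule, which is the mechanism the paper exploits to keep all parts of one salient object consistent.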

Related Material


[pdf]
[bibtex]
@InProceedings{Liu_2019_ICCV,
author = {Liu, Yi and Zhang, Qiang and Zhang, Dingwen and Han, Jungong},
title = {Employing Deep Part-Object Relationships for Salient Object Detection},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}