Saliency Detection via Dense and Sparse Reconstruction

Xiaohui Li, Huchuan Lu, Lihe Zhang, Xiang Ruan, Ming-Hsuan Yang; The IEEE International Conference on Computer Vision (ICCV), 2013, pp. 2976-2983

Abstract


In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction error. Superpixels along the image boundaries are first extracted as likely background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute the dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by integrating multi-scale reconstruction errors and refined with an object-biased Gaussian model. Finally, we apply the Bayes formula to combine the saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and more robust to background noise.
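As a rough illustration of the two reconstruction cues named in the abstract, the sketch below computes a dense reconstruction error against a PCA basis of boundary (background) templates and a sparse reconstruction error via a small orthogonal matching pursuit over the same templates. The feature matrices, the PCA dimension `k`, and the greedy pursuit are hypothetical stand-ins under simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def dense_reconstruction_error(X, B, k=5):
    """Reconstruct each region feature in X (n_regions x d) with the top-k
    principal components of the background templates B (n_bg x d); the
    squared residual norm serves as the dense saliency cue."""
    mu = B.mean(axis=0)
    # principal directions from the SVD of the centered templates
    _, _, Vt = np.linalg.svd(B - mu, full_matrices=False)
    U = Vt[:k].T                          # d x k PCA basis
    recon = (X - mu) @ U @ U.T + mu       # project onto the background subspace
    return np.linalg.norm(X - recon, axis=1) ** 2

def sparse_reconstruction_error(X, B, n_atoms=3):
    """Greedy orthogonal matching pursuit over the background dictionary
    (columns = normalized templates); a large residual marks regions the
    background cannot encode sparsely, i.e. likely salient regions."""
    D = B.T / np.linalg.norm(B, axis=1)   # d x n_bg, unit-norm atoms
    errs = []
    for x in X:
        residual, support = x.copy(), []
        for _ in range(n_atoms):
            # pick the atom most correlated with the current residual
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            sub = D[:, support]
            coef, *_ = np.linalg.lstsq(sub, x, rcond=None)
            residual = x - sub @ coef
        errs.append(float(residual @ residual))
    return np.array(errs)
```

Regions resembling the boundary templates yield small errors under both models, while a region the background subspace and dictionary cannot explain yields large errors; these per-region errors are what the paper then propagates and fuses into a saliency map.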

Related Material


[bibtex]
@InProceedings{Li_2013_ICCV,
author = {Li, Xiaohui and Lu, Huchuan and Zhang, Lihe and Ruan, Xiang and Yang, Ming-Hsuan},
title = {Saliency Detection via Dense and Sparse Reconstruction},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013},
pages = {2976-2983}
}