Deep Unsupervised Saliency Detection: A Multiple Noisy Labeling Perspective

Jing Zhang, Tong Zhang, Yuchao Dai, Mehrtash Harandi, Richard Hartley; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 9029-9038


The success of current deep saliency detection methods heavily depends on the availability of large-scale supervision in the form of per-pixel labeling. Such supervision, while labor-intensive and not always possible, tends to hinder the generalization ability of the learned models. By contrast, traditional unsupervised saliency detection methods based on handcrafted features, even though they have been surpassed by deep supervised methods, are generally dataset-independent and can be applied in the wild. This raises a natural question: ``Is it possible to learn saliency maps without using labeled data while improving the generalization ability?'' To this end, we present a novel perspective on unsupervised saliency detection: learning from multiple noisy labels generated by ``weak'' and ``noisy'' unsupervised handcrafted saliency methods. Our end-to-end deep learning framework for unsupervised saliency detection consists of a latent saliency prediction module and a noise modeling module that work collaboratively and are optimized jointly. Explicit noise modeling enables us to deal with noisy saliency maps in a probabilistic way. Extensive experimental results on various benchmark datasets show that our model not only outperforms all unsupervised saliency methods by a large margin but also achieves performance comparable with recent state-of-the-art supervised deep saliency methods.
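The abstract's idea of pairing a latent saliency prediction module with an explicit noise model can be sketched as follows. This is not the authors' code: it is a minimal illustration assuming each handcrafted method m produces a noisy map y_m = s + n_m with zero-mean Gaussian noise n_m of per-method variance sigma_m^2, and that the modules are trained jointly by minimizing the resulting negative log-likelihood. The function name `joint_nll` and all shapes are hypothetical.

```python
import numpy as np

def joint_nll(pred, noisy_maps, sigmas, eps=1e-8):
    """Gaussian negative log-likelihood of M noisy saliency maps,
    given the predicted latent map and learned noise scales.

    pred:       (H, W)    predicted latent saliency map
    noisy_maps: (M, H, W) maps from M unsupervised handcrafted methods
    sigmas:     (M,)      per-method noise standard deviations

    Illustrative assumption: y_m = pred + n_m, n_m ~ N(0, sigmas[m]^2).
    """
    var = sigmas[:, None, None] ** 2 + eps      # broadcast per-method variance
    resid = noisy_maps - pred[None]             # implied noise estimates n_m
    # 0.5 * (n^2 / var + log(2*pi*var)) per pixel, averaged over all maps
    nll = 0.5 * (resid ** 2 / var + np.log(2 * np.pi * var))
    return nll.mean()
```

In a joint training loop, `pred` would come from the saliency network and `sigmas` would be learnable parameters of the noise module, so gradients of this loss update both collaboratively, matching the paper's description at a schematic level.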

Related Material

[pdf] [arXiv] [video]
@InProceedings{Zhang_2018_CVPR,
author = {Zhang, Jing and Zhang, Tong and Dai, Yuchao and Harandi, Mehrtash and Hartley, Richard},
title = {Deep Unsupervised Saliency Detection: A Multiple Noisy Labeling Perspective},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}