Fusing Visual Saliency for Material Recognition

Lin Qi, Ying Xu, Xiaowei Shang, Junyu Dong; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2018, pp. 1965-1968

Abstract


Material recognition is studied in both the computer vision and vision science fields. In this paper, we investigated how humans observe material images and found that eye fixation information improves the performance of material image classification models. We first collected eye-tracking data from human observers and used it to fine-tune a generative adversarial network for saliency prediction (SalGAN). We then fused the predicted saliency maps with the material images and fed them to CNN models for material classification. The experimental results show that the classification accuracy is higher than that obtained with the original images alone. This indicates that human visual cues can benefit computational models as priors.
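The abstract does not specify how the predicted saliency map is fused with the material image before classification. As one illustrative possibility (not the authors' exact method), a saliency map can be used as a per-pixel soft weight on the image, attenuating non-salient regions while preserving salient ones; the weighting scheme and the `alpha` floor below are assumptions:

```python
import numpy as np

def fuse_saliency(image, saliency, alpha=0.5):
    """Fuse a saliency map into an RGB image by soft multiplicative weighting.

    image:    (H, W, 3) float array in [0, 1]
    saliency: (H, W)    float array in [0, 1], 1 = most salient
    alpha:    attenuation floor for fully non-salient pixels (assumed value)
    """
    # Per-pixel blend weight: 1.0 where saliency is high, alpha where it is zero.
    weight = alpha + (1.0 - alpha) * saliency
    return image * weight[..., None]

# Toy example: a uniform white 2x2 "image" with one salient pixel.
img = np.ones((2, 2, 3))
sal = np.array([[1.0, 0.0],
                [0.0, 0.0]])
fused = fuse_saliency(img, sal, alpha=0.5)
# The salient pixel keeps full intensity; the others are halved.
```

The fused array could then be passed to a standard CNN classifier in place of the raw image; another common alternative is to concatenate the saliency map as a fourth input channel.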

Related Material


[pdf]
[bibtex]
@InProceedings{Qi_2018_CVPR_Workshops,
author = {Qi, Lin and Xu, Ying and Shang, Xiaowei and Dong, Junyu},
title = {Fusing Visual Saliency for Material Recognition},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2018}
}