Directing DNNs Attention for Facial Attribution Classification using Gradient-weighted Class Activation Mapping

Xi Yang, Bojian Wu, Issei Sato, Takeo Igarashi; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 103-106

Abstract


Deep neural networks (DNNs) achieve high accuracy on image classification tasks. However, DNNs trained on datasets with co-occurrence bias may rely on the wrong features when making classification decisions, which greatly affects the transferability of pre-trained DNNs. In this paper, we propose an interactive method that directs classifiers to pay attention to regions manually specified by users, in order to mitigate the influence of co-occurrence bias. We test on the CelebA dataset: a pre-trained AlexNet is fine-tuned to focus on specific facial attributes based on the results of Grad-CAM.
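The attention maps that drive this fine-tuning come from Grad-CAM, which weights the feature maps of the last convolutional layer by the spatially pooled gradient of the target class score and applies a ReLU to the weighted sum. The following is a minimal sketch of that computation for AlexNet in PyTorch; it is illustrative only (the `grad_cam` helper and the use of `features[10]` as the last conv layer are assumptions, not the authors' code), and the fine-tuning loss that aligns these maps with user-specified regions is not shown.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, image, target_class, conv_layer):
    """Hypothetical helper: Grad-CAM heatmap for one image and one class."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["maps"] = output

    def bwd_hook(_, __, grad_output):
        gradients["maps"] = grad_output[0]

    h1 = conv_layer.register_forward_hook(fwd_hook)
    h2 = conv_layer.register_full_backward_hook(bwd_hook)

    # Forward pass, then backprop the score of the target class only.
    model.zero_grad()
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    h1.remove(); h2.remove()

    # Global-average-pool the gradients to get one weight per feature map.
    weights = gradients["maps"].mean(dim=(2, 3), keepdim=True)
    # Weighted sum of feature maps, then ReLU keeps only positive evidence.
    cam = F.relu((weights * activations["maps"]).sum(dim=1, keepdim=True))
    # Upsample the coarse map to input resolution and normalize to [0, 1].
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                        align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Example usage on an ImageNet-pretrained AlexNet and a dummy 224x224 crop;
# features[10] is AlexNet's final Conv2d layer in torchvision.
model = models.alexnet(weights="IMAGENET1K_V1").eval()
heatmap = grad_cam(model, torch.rand(3, 224, 224), target_class=0,
                   conv_layer=model.features[10])
```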

Related Material


[bibtex]
@InProceedings{Yang_2019_CVPR_Workshops,
author = {Yang, Xi and Wu, Bojian and Sato, Issei and Igarashi, Takeo},
title = {Directing DNNs Attention for Facial Attribution Classification using Gradient-weighted Class Activation Mapping},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}