Attribute Attention for Semantic Disambiguation in Zero-Shot Learning

Yang Liu, Jishun Guo, Deng Cai, Xiaofei He; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 6698-6707

Abstract


Zero-shot learning (ZSL) aims to accurately recognize unseen objects by learning mapping matrices that bridge the gap between visual information and semantic attributes. Previous works implicitly treat all attributes equally in the compatibility score while ignoring that they differ in importance for discrimination, which leads to severe semantic ambiguity. Considering both the low-level visual information and the global class-level features that relate to this ambiguity, we propose a practical Latent Feature Guided Attribute Attention (LFGAA) framework that performs object-based attribute attention for semantic disambiguation. By distracting semantic activation in the dimensions that cause ambiguity, our method outperforms existing state-of-the-art methods on the AwA2, CUB, and SUN datasets in both inductive and transductive settings.
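
For intuition, the sketch below illustrates the kind of attribute-attention weighting the abstract describes: a standard ZSL compatibility score s(x, y) = f(x)^T W a_y, where an object-dependent attention vector re-weights attribute dimensions before comparison with class attribute signatures. This is a minimal PyTorch illustration under assumed names (AttributeAttentionScore, feat_dim, attr_dim, attn); it is not the authors' LFGAA implementation.

import torch
import torch.nn as nn


class AttributeAttentionScore(nn.Module):
    """Sketch: compatibility score with object-based attribute attention.

    Plain compatibility uses s(x, y) = f(x)^T W a_y, with f(x) a visual
    feature, W a learned mapping matrix, and a_y the attribute vector of
    class y. Here an attention vector predicted from the visual feature
    (standing in for LFGAA's latent-feature guidance) re-weights attribute
    dimensions so ambiguous ones contribute less to the score.
    """

    def __init__(self, feat_dim: int, attr_dim: int):
        super().__init__()
        self.W = nn.Linear(feat_dim, attr_dim, bias=False)  # visual -> attribute space
        self.attn = nn.Linear(feat_dim, attr_dim)           # per-attribute attention logits

    def forward(self, visual_feat: torch.Tensor, class_attrs: torch.Tensor) -> torch.Tensor:
        # visual_feat: (batch, feat_dim); class_attrs: (num_classes, attr_dim)
        projected = self.W(visual_feat)                        # (batch, attr_dim)
        alpha = torch.softmax(self.attn(visual_feat), dim=-1)  # object-based attribute attention
        weighted = projected * alpha                           # down-weight ambiguous dimensions
        return weighted @ class_attrs.t()                      # (batch, num_classes) compatibility


# Usage: score a batch against candidate (seen or unseen) classes, predict by argmax.
if __name__ == "__main__":
    model = AttributeAttentionScore(feat_dim=2048, attr_dim=85)  # e.g. 85 attributes as in AwA2
    feats = torch.randn(4, 2048)
    attrs = torch.rand(10, 85)        # attribute signatures of 10 candidate classes
    scores = model(feats, attrs)
    print(scores.argmax(dim=1))
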

Related Material


[bibtex]
@InProceedings{Liu_2019_ICCV,
author = {Liu, Yang and Guo, Jishun and Cai, Deng and He, Xiaofei},
title = {Attribute Attention for Semantic Disambiguation in Zero-Shot Learning},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}