Attentive Semantic Preservation Network for Zero-Shot Learning

Ziqian Lu, Yunlong Yu, Zhe-Ming Lu, Feng-Li Shen, Zhongfei Zhang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 682-683


While promising progress has been achieved in the Zero-Shot Learning (ZSL) task, existing generative approaches still suffer from overly plain pseudo-features, resulting in poor discrimination of the generated visual features. To improve the quality of the generated features, we propose a novel Attentive Semantic Preservation Network (ASPN) that encodes more discriminative as well as semantic-related information into the generated features with category self-attention cues. Specifically, the feature generation and the semantic inference modules are formulated into a unified process so that they promote each other, which effectively aligns the cross-modality semantic relation. The category-attentive strategy encourages the model to focus more on the intrinsic information of the noisy generated features, alleviating the confusion among them. Moreover, a prototype-based classification mechanism is introduced as an efficient way of leveraging known semantic information to further boost the discriminativeness of the generated features. Experiments on four popular benchmarks, i.e., AWA1, AWA2, CUB, and FLO, verify that our proposed approach outperforms state-of-the-art methods with clear improvements under both the Traditional ZSL (TZSL) and the Generalized ZSL (GZSL) settings.
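The abstract does not spell out the prototype-based classification mechanism; a minimal sketch of the general idea, assuming class prototypes are taken as the means of (generated) features and test samples are assigned to the nearest prototype by cosine similarity (the function name and this choice of similarity are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def prototype_classify(feats, labels, test_feats):
    """Nearest-prototype classification: each class prototype is the mean
    of that class's (e.g. generated) features; test samples are assigned
    to the prototype with the highest cosine similarity."""
    classes = np.unique(labels)
    protos = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    # L2-normalize so the dot product equals cosine similarity
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    test_n = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    sims = test_n @ protos.T                 # (n_test, n_classes)
    return classes[sims.argmax(axis=1)]      # predicted class per sample
```

In a generative ZSL pipeline, `feats`/`labels` would come from features synthesized for the unseen classes, so the classifier needs no real unseen-class training data.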

Related Material

@InProceedings{Lu_2020_CVPR_Workshops,
    author    = {Lu, Ziqian and Yu, Yunlong and Lu, Zhe-Ming and Shen, Feng-Li and Zhang, Zhongfei},
    title     = {Attentive Semantic Preservation Network for Zero-Shot Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2020}
}