Generalized Zero-Shot Recognition Based on Visually Semantic Embedding

Pengkai Zhu, Hanxiao Wang, Venkatesh Saligrama; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 2995-3003

Abstract


We propose a novel Generalized Zero-Shot Learning (GZSL) method that is agnostic to both unseen images and unseen semantic vectors during training. Prior works in this context propose to map high-dimensional visual features to the semantic domain, which we believe contributes to the semantic gap. To bridge the gap, we propose a novel low-dimensional embedding of visual instances that is "visually semantic." Analogous to semantic data that quantifies the existence of an attribute in the presented instance, components of our visual embedding quantify the existence of a prototypical part-type in the presented instance. In parallel, as a thought experiment, we quantify the impact of noisy semantic data by utilizing a novel visual oracle to visually supervise a learner. These factors, namely semantic noise, the visual-semantic gap, and label noise, lead us to propose a new graphical model for inference with pairwise interactions between label, semantic data, and inputs. We tabulate results on a number of benchmark datasets demonstrating significant improvement in accuracy over the state of the art under both semantic and visual supervision.
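The pairwise graphical model mentioned above can be illustrated with a minimal sketch: a score for a (visual embedding, semantic vector, label) triple is formed as the sum of three pairwise compatibility terms, and inference picks the label maximizing that score. The dimensions, compatibility matrices, and random initialization below are all hypothetical, chosen only to make the structure concrete; they are not the paper's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: d-dim visual embedding, k-dim semantic vector, C classes.
d, k, C = 8, 5, 3

# One compatibility matrix per pairwise interaction (randomly initialized here;
# in practice these would be learned):
W_vs = rng.normal(size=(d, k))   # visual embedding <-> semantic vector
W_vy = rng.normal(size=(d, C))   # visual embedding <-> label
W_sy = rng.normal(size=(k, C))   # semantic vector <-> label

def score(v, s, y):
    """Sum of the three pairwise potentials for (input, semantic data, label)."""
    return v @ W_vs @ s + (v @ W_vy)[y] + (s @ W_sy)[y]

# Inference: choose the label whose class semantic vector maximizes the score.
v = rng.normal(size=d)           # visual embedding of a test instance
S = rng.normal(size=(C, k))      # per-class semantic vectors
y_hat = max(range(C), key=lambda y: score(v, S[y], y))
```

The point of the sketch is the factorization: no single mapping from visual to semantic space is required, since each pair (input-label, input-semantics, semantics-label) contributes its own term.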

Related Material


[pdf]
[bibtex]
@InProceedings{Zhu_2019_CVPR,
author = {Zhu, Pengkai and Wang, Hanxiao and Saligrama, Venkatesh},
title = {Generalized Zero-Shot Recognition Based on Visually Semantic Embedding},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}