Few-Shot Zero-Shot Learning: Knowledge Transfer with Less Supervision

Nanyi Fei, Jiechao Guan, Zhiwu Lu, Yizhao Gao; Proceedings of the Asian Conference on Computer Vision (ACCV), 2020

Abstract


Existing zero-shot learning (ZSL) methods assume that there exist sufficient training samples from seen classes, each annotated with semantic descriptors such as attributes, for knowledge transfer to unseen classes without any training samples. However, this assumption is often invalid because collecting sufficient seen-class samples can be difficult and attribute annotation is expensive; it thus severely limits the scalability of ZSL. In this paper, we define a new setting termed Few-Shot Zero-Shot Learning (FSZSL), where only a few annotated images are collected from each seen class (i.e., few-shot). This is clearly more challenging yet more realistic than the conventional ZSL setting. To overcome the resultant image-level attribute sparsity, we propose a novel inductive ZSL model termed sparse attribute propagation (SAP), which propagates attribute annotations to additional unannotated images using sparse coding. This is followed by learning bidirectional projections between features and attributes for ZSL. An efficient solver is provided for such knowledge transfer with less supervision, together with rigorous theoretical analysis. With our SAP, we show that a ZSL training dataset can also be augmented with the abundant web images returned by an image search engine, further improving model performance. Extensive experiments show that the proposed model achieves state-of-the-art results.
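The abstract outlines a two-stage pipeline: first propagate the few image-level attribute annotations to unannotated images via sparse coding, then learn bidirectional projections between visual features and attributes on the augmented set. The following is a minimal sketch of that idea, not the authors' implementation: the data shapes, the Lasso-based sparse coding step, and the Sylvester-equation solver for the bidirectional projection are all illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import Lasso
    from scipy.linalg import solve_sylvester

    # ---- Hypothetical shapes (not from the paper) ----
    d, k = 2048, 85           # feature dim, attribute dim
    n_lab, n_unlab = 50, 200  # few annotated images, many unannotated ones
    X_lab   = np.random.randn(n_lab, d)    # annotated image features
    A_lab   = np.random.rand(n_lab, k)     # their attribute vectors
    X_unlab = np.random.randn(n_unlab, d)  # unannotated (e.g. web) images

    # Stage 1: sparse attribute propagation.
    # Each unannotated feature is reconstructed as a sparse combination of
    # the annotated features; the same coefficients transfer the attributes.
    lasso = Lasso(alpha=0.01, positive=True, max_iter=5000)
    A_unlab = np.zeros((n_unlab, k))
    for i, x in enumerate(X_unlab):
        lasso.fit(X_lab.T, x)          # x ~ X_lab.T @ c with sparse c
        c = lasso.coef_
        if c.sum() > 0:
            c = c / c.sum()            # normalise to a convex combination
        A_unlab[i] = c @ A_lab         # propagated attribute vector

    # Stage 2: bidirectional feature-attribute projection on the augmented set.
    # One common closed-form treatment (assumed here, not taken from the paper):
    #   min_W ||X W - A||^2 + lam * ||A W^T - X||^2
    # whose optimum satisfies the Sylvester equation
    #   (X^T X) W + lam * W (A^T A) = (1 + lam) X^T A
    X = np.vstack([X_lab, X_unlab])
    A = np.vstack([A_lab, A_unlab])
    lam = 0.5
    W = solve_sylvester(X.T @ X, lam * (A.T @ A), (1 + lam) * X.T @ A)

    # At test time, a feature x can be projected to attribute space as x @ W
    # and matched against unseen-class attribute prototypes (e.g. by cosine
    # similarity) to predict its class.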

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Fei_2020_ACCV,
    author    = {Fei, Nanyi and Guan, Jiechao and Lu, Zhiwu and Gao, Yizhao},
    title     = {Few-Shot Zero-Shot Learning: Knowledge Transfer with Less Supervision},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {November},
    year      = {2020}
}