Few-Shot Learning via Saliency-Guided Hallucination of Samples

Hongguang Zhang, Jing Zhang, Piotr Koniusz; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 2770-2779

Abstract


Learning new concepts from a few samples is a standard challenge in computer vision. The main directions for improving the learning ability of few-shot models are (i) robust similarity learning and (ii) generating or hallucinating additional data from the limited existing samples. In this paper, we follow the latter direction and present a novel data hallucination model. Currently, most datapoint generators contain a specialized network (e.g., a GAN) tasked with hallucinating new datapoints, thus requiring large amounts of annotated data for their training in the first place. In this paper, we propose a novel, less costly hallucination method for few-shot learning which utilizes saliency maps. To this end, we employ a saliency network to obtain the foregrounds and backgrounds of the available image samples and feed the resulting maps into a two-stream network that hallucinates datapoints directly in the feature space from viable foreground-background combinations. To the best of our knowledge, we are the first to leverage saliency maps for such a task, and we demonstrate their usefulness in hallucinating additional datapoints for few-shot learning. Our proposed network achieves the state of the art on publicly available datasets.
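The core idea of hallucinating datapoints in feature space from foreground-background combinations can be illustrated with a minimal sketch. This is not the authors' two-stream architecture; it only shows the combinatorial mixing step, assuming per-image foreground and background feature vectors have already been extracted (all names hypothetical):

```python
import numpy as np

def hallucinate_features(fg_feats, bg_feats):
    """Pair every foreground descriptor with every background
    descriptor to form hallucinated feature-space samples.

    fg_feats: (n_fg, d) array of foreground features
    bg_feats: (n_bg, d) array of background features
    returns:  (n_fg * n_bg, 2d) array of combined features
    """
    combos = []
    for fg in fg_feats:            # each foreground sample
        for bg in bg_feats:        # paired with each background
            combos.append(np.concatenate([fg, bg]))
    return np.stack(combos)

# Toy example: 3 foreground and 2 background descriptors of dim 4
rng = np.random.default_rng(0)
fg = rng.standard_normal((3, 4))
bg = rng.standard_normal((2, 4))
hallucinated = hallucinate_features(fg, bg)
print(hallucinated.shape)  # (6, 8): 3 x 2 combinations, concatenated
```

In the paper, a learned network (rather than plain concatenation) fuses the two streams, but the augmentation benefit comes from the same combinatorial growth: n_fg * n_bg hallucinated samples from only n_fg + n_bg real ones.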

Related Material


[bibtex]
@InProceedings{Zhang_2019_CVPR,
author = {Zhang, Hongguang and Zhang, Jing and Koniusz, Piotr},
title = {Few-Shot Learning via Saliency-Guided Hallucination of Samples},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}