An Embarrassingly Simple Baseline to One-Shot Learning

Chen Liu, Chengming Xu, Yikai Wang, Li Zhang, Yanwei Fu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 922-923

Abstract


In this paper, we propose an embarrassingly simple approach to one-shot learning. Our insight is that there is a domain gap between the one-shot tasks and the tasks the network was pretrained on, so some features from the pretrained network are irrelevant, or even harmful, to a specific one-shot task. We therefore propose to directly prune the features of the pretrained network for a specific one-shot task, rather than updating the network via an optimization scheme with a complex architecture. Without bells and whistles, our simple yet effective method achieves leading performance in the 5-way one-shot setting on miniImageNet (60.63%) and tieredImageNet (69.02%). The best trial reaches 66.83% on miniImageNet and 74.04% on tieredImageNet, establishing a new state of the art. We strongly advocate our method as a strong baseline for one-shot learning. The code and trained models will be released at http://github.com/corwinliu9669/embarrassingly-simple-baseline.
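As a rough illustration of per-task feature pruning, the sketch below scores each feature dimension of a pretrained backbone by its variance across the one-shot support examples, keeps only the top-scoring fraction, and classifies queries by cosine similarity to the pruned support features. The variance-based scoring rule, the keep_ratio parameter, and the cosine classifier are illustrative assumptions for this sketch, not the paper's exact procedure.

import numpy as np

def prune_and_classify(support_feats, query_feats, keep_ratio=0.5):
    """One-shot classification with simple per-task feature pruning.

    support_feats: (n_way, dim) array, one pretrained feature per class.
    query_feats:   (n_query, dim) array of pretrained query features.
    keep_ratio and the variance criterion below are hypothetical choices
    used only to illustrate the idea of pruning per task.
    """
    # Score each dimension by how much it varies across the support classes;
    # low-variance dimensions carry little information for this episode.
    scores = support_feats.var(axis=0)
    k = max(1, int(keep_ratio * support_feats.shape[1]))
    keep = np.argsort(scores)[-k:]

    s = support_feats[:, keep]
    q = query_feats[:, keep]

    # Cosine similarity between each query and each pruned support feature.
    s = s / np.linalg.norm(s, axis=1, keepdims=True)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    sims = q @ s.T                      # shape (n_query, n_way)
    return sims.argmax(axis=1)          # predicted class index per query

# Toy 5-way 1-shot episode with random stand-ins for backbone features.
rng = np.random.default_rng(0)
support = rng.normal(size=(5, 640))     # e.g. 640-d backbone features
queries = rng.normal(size=(75, 640))
predictions = prune_and_classify(support, queries)
print(predictions.shape)                # (75,)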

Related Material


[pdf]
[bibtex]
@InProceedings{Liu_2020_CVPR_Workshops,
author = {Liu, Chen and Xu, Chengming and Wang, Yikai and Zhang, Li and Fu, Yanwei},
title = {An Embarrassingly Simple Baseline to One-Shot Learning},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}