Shot in the Dark: Few-Shot Learning With No Base-Class Labels

Zitian Chen, Subhransu Maji, Erik Learned-Miller; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 2668-2677

Abstract


Few-shot learning aims to build classifiers for new classes from a small number of labeled examples and is commonly facilitated by access to examples from a distinct set of 'base classes'. The difference in data distribution between the test set (novel classes) and the base classes used to learn an inductive bias often results in poor generalization on the novel classes. To alleviate problems caused by the distribution shift, previous research has explored the use of unlabeled examples from the novel classes, in addition to labeled examples of the base classes, which is known as the transductive setting. In this work, we show that, surprisingly, off-the-shelf self-supervised learning outperforms transductive few-shot methods by 3.9% for 5-shot accuracy on miniImageNet without using any base class labels. This motivates us to examine more carefully the role of features learned through self-supervision in few-shot learning. Comprehensive experiments are conducted to compare the transferability, robustness, efficiency, and complementarity of supervised and self-supervised features.
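The few-shot evaluation described above is often carried out with a simple nearest-centroid ("prototype") classifier over frozen embeddings, which is one standard way to use features, self-supervised or otherwise, without base-class labels. The sketch below illustrates that protocol on synthetic features; the random class means stand in for embeddings from a frozen encoder, and none of the variable names or numbers come from the paper itself.

```python
import numpy as np

# Hypothetical sketch of a 5-way, 5-shot evaluation episode:
# classify novel-class queries by nearest centroid over frozen
# feature vectors (e.g., from a self-supervised encoder).
# All features here are synthetic stand-ins, not real embeddings.

rng = np.random.default_rng(0)
n_way, k_shot, n_query, dim = 5, 5, 15, 64

# Well-separated synthetic class means play the role of novel classes.
class_means = rng.normal(0, 5, size=(n_way, dim))
support = class_means[:, None, :] + rng.normal(size=(n_way, k_shot, dim))
query = class_means[:, None, :] + rng.normal(size=(n_way, n_query, dim))

# Average the k support embeddings per class into one prototype.
prototypes = support.mean(axis=1)                       # (n_way, dim)

# Assign each query to the class of its nearest prototype.
q = query.reshape(-1, dim)                              # (n_way*n_query, dim)
dists = ((q[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
pred = dists.argmin(axis=1)
labels = np.repeat(np.arange(n_way), n_query)
acc = (pred == labels).mean()
print(f"{n_way}-way {k_shot}-shot episode accuracy: {acc:.2%}")
```

Because the classifier is built only from the few labeled support examples at test time, the quality of the frozen features, not base-class supervision, determines the accuracy; this is what makes the protocol a clean testbed for comparing supervised and self-supervised representations.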

Related Material


[bibtex]
@InProceedings{Chen_2021_CVPR,
  author    = {Chen, Zitian and Maji, Subhransu and Learned-Miller, Erik},
  title     = {Shot in the Dark: Few-Shot Learning With No Base-Class Labels},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2021},
  pages     = {2668-2677}
}