Universal Representation Learning From Multiple Domains for Few-Shot Classification

Wei-Hong Li, Xialei Liu, Hakan Bilen; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 9526-9535

Abstract


In this paper, we look at the problem of few-shot image classification, which aims to learn a classifier for previously unseen classes and domains from a few labeled samples. Recent methods use various adaptation strategies to align their visual representations to new domains or select the relevant ones from multiple domain-specific feature extractors. In this work, we present URL, which learns a single set of universal visual representations by distilling the knowledge of multiple domain-specific networks after co-aligning their features with the help of adapters and centered kernel alignment. We show that the universal representations can be further refined for previously unseen domains by an efficient adaptation step, in a similar spirit to distance learning methods. We rigorously evaluate our model on the recent Meta-Dataset benchmark and demonstrate that it significantly outperforms previous methods while being more efficient.
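
The abstract does not spell out the alignment objective, so the following is only a minimal sketch of linear centered kernel alignment (CKA) as it is commonly defined, used here as an illustrative alignment loss between a shared (student) backbone and one domain-specific (teacher) network. The function name `linear_cka`, the linear `adapter`, and the variables `student_feats` / `teacher_feats` are assumptions for illustration, not the paper's actual API.

```python
import torch

def linear_cka(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Linear CKA similarity between two feature batches.

    x: (n, d1) features, e.g. from a shared backbone after an adapter.
    y: (n, d2) features, e.g. from one domain-specific network.
    Returns a value in [0, 1]; (1 - CKA) can serve as an alignment loss.
    """
    # Center each feature dimension over the batch.
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)

    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = (y.t() @ x).norm(p="fro") ** 2
    norm_x = (x.t() @ x).norm(p="fro")
    norm_y = (y.t() @ y).norm(p="fro")
    return cross / (norm_x * norm_y + eps)


if __name__ == "__main__":
    # Hypothetical usage: a small per-domain adapter (a linear map here)
    # projects the shared features before aligning them with the teacher.
    n, d = 32, 512
    student_feats = torch.randn(n, d)   # placeholder shared-backbone features
    teacher_feats = torch.randn(n, d)   # placeholder domain-specific features
    adapter = torch.nn.Linear(d, d, bias=False)
    loss = 1.0 - linear_cka(adapter(student_feats), teacher_feats)
    print(float(loss))
```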

Related Material


@InProceedings{Li_2021_ICCV,
    author    = {Li, Wei-Hong and Liu, Xialei and Bilen, Hakan},
    title     = {Universal Representation Learning From Multiple Domains for Few-Shot Classification},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {9526-9535}
}