PARN: Position-Aware Relation Networks for Few-Shot Learning

Ziyang Wu, Yuwei Li, Lihua Guo, Kui Jia; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 6659-6667

Abstract


Few-shot learning poses the challenge of quickly adapting a classifier to new classes that do not appear in the training set, given only a few labeled examples of each new class. This paper proposes a position-aware relation network (PARN) to learn a more flexible and robust metric for few-shot learning. Relation networks (RNs), a class of architectures for relational reasoning, can acquire a deep metric over images when designed as a simple convolutional neural network (CNN) [23]. However, due to the inherent local connectivity of CNNs, a CNN-based relation network (RN) can be sensitive to the spatial positions of semantic objects in the two compared images. To address this problem, we introduce a deformable feature extractor (DFE) to extract more effective features, and design a dual correlation attention (DCA) mechanism to overcome the local connectivity. Our proposed approach thus extends the RN to be position-aware of semantic objects while introducing only a small number of additional parameters. We evaluate our approach on two major benchmark datasets, i.e., Omniglot and Mini-ImageNet, and achieve state-of-the-art performance on both. Notably, our 5-way 1-shot result on Omniglot even outperforms previous 5-way 5-shot results.
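The core intuition behind the dual correlation attention can be illustrated with a toy example. The sketch below (a simplification for illustration only, not the paper's actual DCA module or network) computes a cross-correlation between every spatial position of one feature map and every position of another, turns the correlations into attention weights, and scores the pair by the attended feature agreement. Because the attention ranges over all positions, the resulting relation score is insensitive to where matching features sit spatially, which is the property the abstract refers to as being position-aware. All function and variable names here are our own illustrative choices.

```python
import numpy as np

def position_aware_relation(fa, fb):
    """Toy position-aware relation score between two feature maps.

    fa, fb: numpy arrays of shape (C, H, W), e.g. CNN feature maps of a
    support image and a query image. This is a hand-written sketch of the
    cross-correlation-attention idea, not the learned DCA module from PARN.
    """
    C, H, W = fa.shape
    a = fa.reshape(C, -1)          # (C, HW): flatten spatial positions
    b = fb.reshape(C, -1)
    corr = a.T @ b                 # (HW, HW): correlation of every position
                                   # in fa with every position in fb
    # Row-wise softmax: each position of fa attends over all positions of fb
    attn = np.exp(corr - corr.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    attended = b @ attn.T          # (C, HW): fb features gathered per fa position
    # Relation score: agreement between fa and the attended fb features
    score = float((a * attended).sum() / (H * W))
    return score, attn
```

Because the attention is computed over all spatial positions, cyclically shifting one feature map does not change the score: the correlation rows are merely permuted, and the softmax-weighted sum gathers the same features. A purely local (convolutional) comparison would not have this invariance.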

Related Material


[bibtex]
@InProceedings{Wu_2019_ICCV,
author = {Wu, Ziyang and Li, Yuwei and Guo, Lihua and Jia, Kui},
title = {PARN: Position-Aware Relation Networks for Few-Shot Learning},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}