@InProceedings{Ma_2021_ICCV,
  author    = {Ma, Jiawei and Xie, Hanchen and Han, Guangxing and Chang, Shih-Fu and Galstyan, Aram and Abd-Almageed, Wael},
  title     = {Partner-Assisted Learning for Few-Shot Image Classification},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {10573-10582}
}
Partner-Assisted Learning for Few-Shot Image Classification
Abstract
Few-shot learning has been studied to mimic human visual capabilities and learn effective models without the need for exhaustive human annotation. Even though the idea of meta-learning for adaptation has dominated few-shot learning methods, how to train a feature extractor remains a challenge. In this paper, we focus on the design of a training strategy that yields a representation from which the prototype of each novel class can be estimated from a few labeled samples. We propose a two-stage training scheme, Partner-Assisted Learning (PAL), which first trains a partner encoder to model pair-wise similarities and extract features serving as soft-anchors, and then trains a main encoder by aligning its outputs with the soft-anchors while maximizing classification performance. Two alignment constraints, one at the logit level and one at the feature level, are designed individually. For each few-shot task, we perform prototype classification. Our method consistently outperforms the state of the art on four benchmarks. Detailed ablation studies of PAL are provided to justify the selection of each component involved in training.
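The pieces described above can be sketched in code. This is a minimal NumPy illustration, not the authors' implementation: it assumes a standard knowledge-distillation-style KL loss for the logit-level constraint, cosine similarity for the feature-level constraint, and Euclidean nearest-prototype classification; the paper's exact loss forms and hyperparameters may differ.

```python
import numpy as np

def softmax(x, axis=1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def logit_alignment(main_logits, partner_logits, T=4.0):
    """Logit-level alignment: KL divergence between temperature-softened
    class distributions, with the frozen partner encoder's outputs acting
    as soft-anchors (assumed distillation form)."""
    p = softmax(partner_logits / T)   # soft-anchor (teacher) distribution
    q = softmax(main_logits / T)      # main-encoder (student) distribution
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1)))

def feature_alignment(main_feat, partner_feat):
    """Feature-level alignment: mean (1 - cosine similarity) between the
    main encoder's features and the partner's soft-anchor features."""
    def l2norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(l2norm(main_feat) * l2norm(partner_feat), axis=1)))

def prototype_classify(support_feat, support_labels, query_feat):
    """Prototype classification for a few-shot task: each class prototype is
    the mean of its support features; queries take the nearest prototype's label."""
    classes = np.unique(support_labels)
    protos = np.stack([support_feat[support_labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(query_feat[:, None, :] - protos[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]
```

Both alignment terms are zero when the main encoder exactly reproduces the partner's outputs, so they can be added to a standard classification loss as regularizers during the second training stage.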