Contrastive Knowledge-Augmented Meta-Learning for Few-Shot Classification

Rakshith Subramanyam, Mark Heimann, T.S. Jayram, Rushil Anirudh, Jayaraman J. Thiagarajan; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 2479-2487

Abstract


Model-agnostic meta-learning algorithms aim to infer priors from several observed tasks that can then be used to adapt to a new task with few examples. Given the inherent diversity of tasks arising in existing benchmarks, recent methods have resorted to task-specific adaptation of the prior. Our goal is to improve the generalization of meta-learners when the task distribution contains challenging distribution shifts and semantic disparities. To this end, we introduce CAML (Contrastive Knowledge-Augmented Meta-Learning), a knowledge-enhanced few-shot learning approach that evolves a knowledge graph to encode historical experience, and employs a contrastive distillation strategy to leverage the encoded knowledge for task-aware modulation of the base learner. In addition to standard few-shot task adaptation, we also consider the more challenging multi-domain task adaptation and few-shot dataset generalization settings in our evaluation with standard benchmarks. Our empirical study shows that CAML (i) enables simple task encoding schemes; (ii) eliminates the need for knowledge extraction at inference time; and, most importantly, (iii) effectively aggregates historical experience, thus leading to improved performance in both multi-domain adaptation and dataset generalization.
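For readers who want a concrete picture of the ingredients named above, the following is a minimal, hypothetical sketch (in PyTorch) of how a contrastive distillation objective could couple a per-task encoding to an evolving knowledge store and drive task-aware modulation of a base learner. All module names, shapes, and design choices (KnowledgeStore, film_modulate, the InfoNCE-style loss) are illustrative assumptions; the abstract does not specify CAML's actual architecture or losses.

# Hypothetical sketch of contrastive, knowledge-conditioned task modulation.
# Names and shapes are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeStore(nn.Module):
    """Toy stand-in for an evolving knowledge graph: one learnable
    embedding per previously encountered task/concept."""
    def __init__(self, num_entries: int, dim: int):
        super().__init__()
        self.memory = nn.Embedding(num_entries, dim)

    def forward(self, task_ids: torch.Tensor) -> torch.Tensor:
        return self.memory(task_ids)

def contrastive_distillation(task_emb, knowledge_emb, temperature=0.1):
    """InfoNCE-style loss: each task embedding should align with its own
    knowledge entry and repel the entries of other tasks in the batch."""
    z = F.normalize(task_emb, dim=-1)        # (B, D)
    k = F.normalize(knowledge_emb, dim=-1)   # (B, D)
    logits = z @ k.t() / temperature         # (B, B) similarity matrix
    targets = torch.arange(z.size(0), device=z.device)
    return F.cross_entropy(logits, targets)

def film_modulate(features, task_emb, film_layer):
    """FiLM-style task-aware modulation of base-learner features."""
    gamma, beta = film_layer(task_emb).chunk(2, dim=-1)
    return gamma.unsqueeze(1) * features + beta.unsqueeze(1)

# Minimal usage example with random data.
B, N, D = 4, 25, 64                 # tasks per batch, support examples per task, feature dim
encoder = nn.Linear(32, D)          # stand-in for the base feature extractor
task_encoder = nn.Linear(D, D)      # simple task encoding: mean-pooled support features
film = nn.Linear(D, 2 * D)          # produces per-task (gamma, beta)
store = KnowledgeStore(num_entries=100, dim=D)

support = torch.randn(B, N, 32)
task_ids = torch.randint(0, 100, (B,))

feats = encoder(support)                          # (B, N, D)
task_emb = task_encoder(feats.mean(dim=1))        # (B, D) per-task summary
loss_con = contrastive_distillation(task_emb, store(task_ids))
modulated = film_modulate(feats, task_emb, film)  # task-aware features for the base learner
print(loss_con.item(), modulated.shape)

Because the knowledge store is queried only with task identities seen during meta-training, a sketch like this also illustrates why no external knowledge extraction would be needed at inference time: the learned task encoder alone produces the modulation signal.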

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Subramanyam_2023_WACV,
    author    = {Subramanyam, Rakshith and Heimann, Mark and Jayram, T.S. and Anirudh, Rushil and Thiagarajan, Jayaraman J.},
    title     = {Contrastive Knowledge-Augmented Meta-Learning for Few-Shot Classification},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {2479-2487}
}