Asymmetric Metric Learning for Knowledge Transfer

Mateusz Budnik, Yannis Avrithis; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 8228-8238

Abstract

Knowledge transfer from large teacher models to smaller student models has recently been studied for metric learning, focusing on fine-grained classification. In this work, focusing on instance-level image retrieval, we study an asymmetric testing task, where the database is represented by the teacher and queries by the student. Inspired by this task, we introduce asymmetric metric learning, a novel paradigm of using asymmetric representations at training. This acts as a simple combination of knowledge transfer with the original metric learning task. We systematically evaluate different teacher and student models, metric learning and knowledge transfer loss functions on the new asymmetric testing task as well as the standard symmetric testing task, where database and queries are represented by the same model. We find that plain regression is surprisingly effective compared to more complex knowledge transfer mechanisms, working best in asymmetric testing. Interestingly, our asymmetric metric learning approach works best in symmetric testing, allowing the student to even outperform the teacher. Our implementation is publicly available, including trained student models for all loss functions and all pairs of teacher/student models. This can serve as a benchmark for future research.
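The asymmetric testing setup and the regression-based transfer described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the "teacher" and "student" here are hypothetical random projections standing in for real networks, and the regression loss is shown only as a forward computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the teacher and student networks:
# random linear maps into a shared d-dimensional embedding space.
d_in, d = 32, 8
W_teacher = rng.normal(size=(d_in, d))
W_student = rng.normal(size=(d_in, d))

def embed(x, W):
    """Project and L2-normalize, as is standard in image retrieval."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Asymmetric testing: database vectors come from the teacher,
# query vectors from the student; similarity is the dot product.
database = embed(rng.normal(size=(100, d_in)), W_teacher)
queries = embed(rng.normal(size=(5, d_in)), W_student)
scores = queries @ database.T            # (5, 100) similarity matrix
ranking = np.argsort(-scores, axis=1)    # ranked database indices per query

# Plain regression for knowledge transfer: the student is trained to
# reproduce the teacher's embeddings of the same images (loss value only).
x = rng.normal(size=(16, d_in))
reg_loss = np.mean(
    np.sum((embed(x, W_student) - embed(x, W_teacher)) ** 2, axis=1)
)
```

In a real pipeline, `reg_loss` would be minimized over the student's parameters so that student queries land close to the teacher's database representations, which is what makes the asymmetric ranking above meaningful.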

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Budnik_2021_CVPR,
  author    = {Budnik, Mateusz and Avrithis, Yannis},
  title     = {Asymmetric Metric Learning for Knowledge Transfer},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021},
  pages     = {8228-8238}
}