Deep Variational Metric Learning

Xudong Lin, Yueqi Duan, Qiyuan Dong, Jiwen Lu, Jie Zhou; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 689-704


Deep metric learning, which trains a deep neural network to produce discriminative embedding features, has been extensively explored in recent years. Most existing methods enforce the model to be invariant to intra-class variance; as a result, the model over-fits to these specific variations in the training set while minimizing the loss function, and generalizes poorly to unseen classes. These methods also overlook the fact that, in the central latent space, the distribution of intra-class variance is actually independent of class. In this paper, we propose a deep variational metric learning (DVML) framework to explicitly model the intra-class variance and disentangle the intra-class invariance, namely the class centers. With the learned distribution of intra-class variance, we can simultaneously generate discriminative samples to improve robustness. Our method is applicable to most existing metric learning algorithms, and extensive experiments on three benchmark datasets, CUB-200-2011, Cars196, and Stanford Online Products, show that DVML significantly boosts the performance of currently popular deep metric learning methods.
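The core idea, disentangling class centers (the intra-class-invariant part) from a class-independent distribution of intra-class variance, and then sampling from that distribution to synthesize new features, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (which uses a variational network trained end-to-end); here the shared variance distribution is simply approximated by a diagonal Gaussian fitted to pooled residuals, and the function name `dvml_synthesize` is hypothetical.

```python
import numpy as np

def dvml_synthesize(embeddings, labels, n_samples=4, rng=None):
    # Hypothetical sketch of DVML-style feature synthesis:
    # 1) disentangle class centers from intra-class residuals,
    # 2) fit one Gaussian over residuals pooled across classes
    #    (the paper's assumption: intra-class variance is class-independent),
    # 3) synthesize new features as center + sampled residual.
    rng = np.random.default_rng(0) if rng is None else rng
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # Class centers: the intra-class-invariant component.
    centers = {c: embeddings[labels == c].mean(axis=0) for c in classes}
    # Residuals pooled across all classes: the shared variance component.
    residuals = np.concatenate(
        [embeddings[labels == c] - centers[c] for c in classes])
    mu = residuals.mean(axis=0)
    sigma = residuals.std(axis=0) + 1e-8  # avoid zero scale
    # Generate synthetic, correctly-labeled samples for every class.
    synth, synth_labels = [], []
    for c in classes:
        eps = rng.standard_normal((n_samples, embeddings.shape[1]))
        synth.append(centers[c] + mu + sigma * eps)
        synth_labels += [c] * n_samples
    return np.concatenate(synth), np.array(synth_labels)
```

In the full framework these synthetic features would be fed, alongside real ones, into whichever metric loss (triplet, N-pair, etc.) is being trained, which is why the method composes with most existing metric learning algorithms.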

Related Material

@InProceedings{Lin_2018_ECCV,
  author = {Lin, Xudong and Duan, Yueqi and Dong, Qiyuan and Lu, Jiwen and Zhou, Jie},
  title = {Deep Variational Metric Learning},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  month = {September},
  year = {2018}
}