Correcting the Triplet Selection Bias for Triplet Loss

Baosheng Yu, Tongliang Liu, Mingming Gong, Changxing Ding, Dacheng Tao; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 71-87


Triplet loss, popular for metric learning, has achieved great success in many computer vision tasks, such as fine-grained image classification, image retrieval, and face recognition. Because the number of triplets grows cubically with the size of the training data, triplet mining is indispensable for efficient training with triplet loss. In practice, however, training is usually very sensitive to the selected triplets: it often fails to converge with randomly selected triplets, while selecting only the hardest triplets leads to bad local minima. We argue that the bias in triplet sampling degrades the performance of learning with triplet loss. In this paper, we propose a new variant of triplet loss that reduces the sampling bias by adaptively correcting the distribution shift of the sampled triplets. We refer to this new loss as the adapted triplet loss. We conduct experiments on MNIST and Fashion-MNIST for image classification, and on CARS196, CUB200-2011, and Stanford Online Products for image retrieval. The experimental results demonstrate the effectiveness of the proposed method.
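For context, the standard triplet loss and mining procedure that the abstract builds on can be sketched as follows. This is a minimal NumPy illustration of the conventional formulation (margin-based loss with semi-hard negative mining), not the paper's adapted triplet loss; the function names and the fallback-to-hardest heuristic are illustrative choices.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull the anchor-positive pair together and
    push the anchor-negative pair apart by at least `margin`, using
    squared Euclidean distance in embedding space."""
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)
    d_an = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(d_ap - d_an + margin, 0.0)

def mine_semi_hard(anchor, positives, negatives, margin=0.2):
    """For one anchor, pick a semi-hard negative: farther from the anchor
    than the closest positive, but still inside the margin, i.e.
    d_ap < d_an < d_ap + margin. Falls back to the hardest (closest)
    negative when no semi-hard candidate exists."""
    d_ap = np.sum((anchor - positives) ** 2, axis=-1).min()
    d_an = np.sum((anchor - negatives) ** 2, axis=-1)
    candidates = np.where((d_an > d_ap) & (d_an < d_ap + margin))[0]
    if candidates.size == 0:
        return int(np.argmin(d_an))  # hardest negative as fallback
    return int(candidates[np.argmin(d_an[candidates])])
```

Random mining tends to pick easy triplets whose loss is already zero (no gradient), while hardest-only mining over-weights outliers; the paper's argument is that either strategy biases the sampled triplet distribution away from the true data distribution.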

Related Material

@InProceedings{Yu_2018_ECCV,
  author    = {Yu, Baosheng and Liu, Tongliang and Gong, Mingming and Ding, Changxing and Tao, Dacheng},
  title     = {Correcting the Triplet Selection Bias for Triplet Loss},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  month     = {September},
  year      = {2018},
  pages     = {71-87}
}