A Weakly Supervised Adaptive Triplet Loss for Deep Metric Learning

Xiaonan Zhao, Huan Qi, Rui Luo, Larry Davis; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


We address the problem of distance metric learning in visual similarity search, defined as learning an image embedding model that projects images into a Euclidean space where semantically and visually similar images are closer together and dissimilar images are farther apart. We present a weakly supervised adaptive triplet loss (ATL) capable of capturing fine-grained semantic similarity, which encourages the learned image embedding models to generalize well on cross-domain data. The method uses weakly labeled product description data to implicitly determine fine-grained semantic classes, avoiding the need to annotate large amounts of training data. We evaluate on the Amazon fashion retrieval benchmark and the DeepFashion in-shop retrieval dataset. The method boosts the performance of the triplet loss baseline by 10.6% on cross-domain data and outperforms the state-of-the-art model on all evaluation metrics.
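The abstract does not give the exact ATL formulation, but the standard triplet loss baseline it builds on is well known: for an anchor, a positive (similar) image, and a negative (dissimilar) image, the loss penalizes triplets where the positive is not closer to the anchor than the negative by at least a fixed margin. A minimal sketch with toy embeddings (function name and vectors are illustrative, not from the paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: require the anchor-positive squared
    Euclidean distance to be smaller than the anchor-negative
    distance by at least `margin`; otherwise incur a linear penalty."""
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(0.0, d_ap - d_an + margin)

# Toy 2-D embeddings (unit-normalized, as is common in metric learning)
a = np.array([1.0, 0.0])
p = np.array([0.8, 0.6])   # similar image: small distance to anchor
n = np.array([-1.0, 0.0])  # dissimilar image: large distance

print(triplet_loss(a, p, n))  # triplet already satisfied -> 0.0
```

The paper's contribution is to make the margin adaptive using weak supervision from product descriptions, rather than the single fixed `margin` used here.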

Related Material


[pdf]
[bibtex]
@InProceedings{Zhao_2019_ICCV,
author = {Zhao, Xiaonan and Qi, Huan and Luo, Rui and Davis, Larry},
title = {A Weakly Supervised Adaptive Triplet Loss for Deep Metric Learning},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}