Learning Unified Embedding for Apparel Recognition

Yang Song, Yuan Li, Bo Wu, Chao-Yeh Chen, Xiao Zhang, Hartwig Adam; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 2243-2246

Abstract

In apparel recognition, deep neural network models are often trained separately for different verticals. However, maintaining a specialized model per vertical does not scale and is expensive to deploy. This paper addresses the problem of learning one unified embedding model for multiple object verticals (e.g. all apparel classes) without sacrificing accuracy. The problem is tackled from two aspects: training data and training difficulty. On the training data side, we find that for a single model trained with triplet loss, there is an accuracy sweet spot in how many verticals are trained together. To ease the training difficulty, a novel learning scheme is proposed that uses the outputs of the specialized models as learning targets, so that an L2 loss can be used in place of the triplet loss. This new loss makes training easier and enables more efficient use of the feature space. The end result is a unified model that matches the retrieval accuracy of a set of separate specialized models while having the complexity of a single model.
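
The proposed scheme is essentially embedding distillation: the unified model regresses toward the embeddings produced by frozen, vertical-specific models, trading the harder-to-optimize triplet loss for a plain L2 loss. The sketch below contrasts the two objectives in PyTorch; the model handles (`unified_model`, `specialized_model`) are hypothetical placeholders, not names from the paper.

```python
import torch
import torch.nn.functional as F

# Baseline objective: triplet loss over (anchor, positive, negative) embeddings.
def triplet_objective(anchor, positive, negative, margin=0.2):
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)

# Proposed objective (as described in the abstract): regress the unified
# model's embeddings toward those of a frozen specialized model with L2 loss.
def unified_embedding_objective(unified_model, specialized_model, images):
    with torch.no_grad():                     # the specialized "teacher" is frozen
        targets = specialized_model(images)   # embeddings from the vertical-specific model
    preds = unified_model(images)             # embeddings from the single unified model
    return F.mse_loss(preds, targets)         # L2 loss instead of triplet loss
```

Because the L2 target is a fixed vector per image, this objective avoids the hard-negative mining and margin tuning that triplet training requires, which is one plausible reading of why the paper finds it easier to train.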

Related Material

[pdf] [arXiv] [bibtex]
@InProceedings{Song_2017_ICCV,
author = {Song, Yang and Li, Yuan and Wu, Bo and Chen, Chao-Yeh and Zhang, Xiao and Adam, Hartwig},
title = {Learning Unified Embedding for Apparel Recognition},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}