Hard-Aware Deeply Cascaded Embedding

Yuhui Yuan, Kuiyuan Yang, Chao Zhang; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 814-823

Abstract


Riding on the waves of deep neural networks, deep metric learning has achieved promising results in various tasks by using triplet or Siamese networks. Though the basic goal of making images from the same category closer than those from different categories is intuitive, the objective is hard to optimize directly because the number of pairs or triplets grows quadratically or cubically with the number of samples. Hard example mining is widely used to address this problem by spending the expensive computation only on a subset of samples that are considered hard. However, "hard" is defined relative to a specific model: a complex model treats most samples as easy, while a simple model treats most samples as hard, and neither extreme is good for training. It is also difficult to define a model with just the right complexity and to choose hard examples adequately, since different samples have diverse hardness levels. This motivates us to propose a novel framework named Hard-Aware Deeply Cascaded Embedding (HDC), which ensembles a set of models with different complexities in a cascaded manner to mine hard examples at multiple levels. A sample is judged by a series of models of increasing complexity and only updates the models that consider it a hard case. HDC is evaluated on the CARS196, CUB-200-2011, Stanford Online Products, VehicleID, and DeepFashion datasets, and outperforms state-of-the-art methods by a large margin.
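
The abstract's cascaded hard-mining idea can be illustrated with a minimal sketch: a backbone split into stages of increasing depth, one embedding head per stage, and a per-pair loss where a pair keeps updating deeper levels only while the shallower level still finds it hard. The network layout, the `hard_thresh` criterion, and the contrastive loss below are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of hard-aware cascaded embedding training (assumed PyTorch setup).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CascadedEmbedder(nn.Module):
    """Backbone split into stages; each stage feeds its own embedding head."""

    def __init__(self, dim=128):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.stage2 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Linear(64, 128), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(32, dim),
                                    nn.Linear(64, dim),
                                    nn.Linear(128, dim)])

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        # One L2-normalized embedding per cascade level, shallow to deep.
        return [F.normalize(h(f), dim=1) for h, f in zip(self.heads, (f1, f2, f3))]


def pair_loss(e_a, e_b, same, margin=0.5):
    """Per-pair contrastive loss: pull positives together, push negatives apart."""
    d = (e_a - e_b).pow(2).sum(dim=1)
    return torch.where(same, d, F.relu(margin - (d + 1e-8).sqrt()).pow(2))


def hdc_loss(embeds, pair_idx, same, hard_thresh=0.05):
    """Sum per-level losses; a pair reaches deeper levels only while it stays hard."""
    a_idx, b_idx = pair_idx
    active = torch.ones_like(same, dtype=torch.bool)   # all pairs enter level 1
    total = 0.0
    for level_embed in embeds:                         # shallow -> deep
        losses = pair_loss(level_embed[a_idx], level_embed[b_idx], same)
        total = total + (losses * active.float()).mean()
        # Pairs that are already easy at this level stop updating deeper models.
        active = active & (losses.detach() > hard_thresh)
    return total


if __name__ == "__main__":
    model = CascadedEmbedder()
    images = torch.randn(8, 3, 64, 64)
    pair_idx = (torch.arange(0, 8, 2), torch.arange(1, 8, 2))  # 4 toy pairs
    same = torch.tensor([True, False, True, False])
    loss = hdc_loss(model(images), pair_idx, same)
    loss.backward()
    print(float(loss))
```

In this sketch every pair trains the simplest model, while only pairs whose loss stays above the assumed hardness threshold continue to update the deeper, more complex heads, which mirrors the abstract's description of mining hard examples at multiple levels.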

Related Material


[bibtex]
@InProceedings{Yuan_2017_ICCV,
author = {Yuan, Yuhui and Yang, Kuiyuan and Zhang, Chao},
title = {Hard-Aware Deeply Cascaded Embedding},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}