Better and Faster: Exponential Loss for Image Patch Matching

Shuang Wang, Yanfeng Li, Xuefeng Liang, Dou Quan, Bowu Yang, Shaowei Wei, Licheng Jiao; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 4812-4821

Abstract


Recent studies on image patch matching are paying more attention to hard sample learning, because easy samples contribute little to network optimization. Various hard negative sample mining strategies have been proposed, but very few works address this problem from the perspective of the loss function. Our research shows that the conventional Siamese and triplet losses treat all samples linearly, which makes training time-consuming. Instead, we propose exponential Siamese and triplet losses, which naturally focus more on hard samples and put less emphasis on easy ones, while speeding up optimization. To assist the exponential losses, we introduce hard positive sample mining to further enhance their effectiveness. Extensive experiments demonstrate that our proposal improves both metric and descriptor learning on several well-accepted benchmarks, and outperforms the state of the art on the UBC dataset. Moreover, it also shows better generalizability on cross-spectral image matching and image retrieval tasks.
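The abstract contrasts linear losses with exponential ones but does not spell out the formulation. As an illustrative sketch only (an assumption for intuition, not the paper's exact loss), one way an exponential triplet loss can emphasize hard samples is to replace the linear hinge on the triplet residual with an exponential, so triplets with a large residual (hard samples) are weighted far more than easy ones:

```python
import math

def linear_triplet_loss(d_pos, d_neg, margin=1.0):
    # Conventional triplet loss: the hinge is linear in the residual
    # d_pos - d_neg + margin, so every violating triplet contributes
    # in direct proportion to its residual, and easy triplets vanish.
    return max(d_pos - d_neg + margin, 0.0)

def exp_triplet_loss(d_pos, d_neg, margin=1.0):
    # Illustrative exponential variant (assumed form, not the paper's
    # exact definition): exp(.) grows rapidly on hard triplets
    # (d_pos >> d_neg) and stays near zero on easy ones, so gradients
    # concentrate on hard samples without explicit reweighting.
    return math.exp(d_pos - d_neg + margin)

# Hard triplet: anchor-positive distance exceeds anchor-negative distance.
hard = exp_triplet_loss(d_pos=2.0, d_neg=0.5)   # large loss
easy = exp_triplet_loss(d_pos=0.2, d_neg=2.0)   # small loss
```

Here `d_pos` and `d_neg` stand for anchor-positive and anchor-negative distances; the hypothetical exponential form makes the hard/easy loss ratio grow exponentially with the residual gap, whereas the linear hinge grows only linearly.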

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Wang_2019_ICCV,
author = {Wang, Shuang and Li, Yanfeng and Liang, Xuefeng and Quan, Dou and Yang, Bowu and Wei, Shaowei and Jiao, Licheng},
title = {Better and Faster: Exponential Loss for Image Patch Matching},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}