DeepCD: Learning Deep Complementary Descriptors for Patch Representations

Tsun-Yi Yang, Jo-Han Hsu, Yen-Yu Lin, Yung-Yu Chuang; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 3314-3322

Abstract

This paper presents the DeepCD framework, which jointly learns a pair of complementary descriptors for a patch using deep learning techniques. This is achieved by taking any descriptor learning architecture that learns a leading descriptor and augmenting it with an additional network stream that learns a complementary descriptor. To enforce the complementary property, a new network layer, called the data-dependent modulation (DDM) layer, is introduced; it adaptively trains the augmented network stream with emphasis on the training data that the leading stream does not handle well. By optimizing the proposed joint loss function with late fusion, the two descriptors become complementary to each other, and their fusion improves performance. Experiments on several problems and datasets show that the proposed method is simple yet effective, outperforming state-of-the-art methods.
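To make the two-stream idea concrete, below is a minimal PyTorch-style sketch under stated assumptions: the backbone layout, the descriptor dimensions, the tanh soft-binary complementary code, the product-based late fusion, and the detached weighting that stands in for the DDM layer are all illustrative choices, not the paper's exact design. The names DeepCDSketch, make_stream, sqdist, and joint_loss are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

def make_stream(out_dim):
    # One descriptor stream: a small convolutional backbone (illustrative only).
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, out_dim))

class DeepCDSketch(nn.Module):
    # Leading stream plus an augmented complementary stream.
    def __init__(self, dim_lead=128, dim_comp=64):
        super().__init__()
        self.lead = make_stream(dim_lead)
        self.comp = make_stream(dim_comp)

    def forward(self, patch):
        d_lead = F.normalize(self.lead(patch), dim=1)  # leading descriptor
        d_comp = torch.tanh(self.comp(patch))          # complementary code
        return d_lead, d_comp

def sqdist(x, y):
    return (x - y).pow(2).sum(dim=1)

def joint_loss(net, anchor, positive, negative, margin=1.0):
    la, ca = net(anchor)
    lp, cp = net(positive)
    ln_, cn = net(negative)
    # Per-stream triplet losses.
    lead_loss = F.relu(sqdist(la, lp) - sqdist(la, ln_) + margin)
    comp_loss = F.relu(sqdist(ca, cp) - sqdist(ca, cn) + margin)
    # Stand-in for the DDM layer: upweight the complementary stream on
    # triplets the leading stream handles poorly; detach() keeps the
    # weight itself out of backpropagation.
    w = lead_loss.detach()
    w = w / (w.mean() + 1e-8)
    # Late fusion (a product of per-stream distances is assumed here).
    fused_loss = F.relu(sqdist(la, lp) * sqdist(ca, cp)
                        - sqdist(la, ln_) * sqdist(ca, cn) + margin)
    return (lead_loss + w * comp_loss + fused_loss).mean()

A quick smoke test of the sketch on random 32x32 grayscale patches:

net = DeepCDSketch()
a, p, n = (torch.randn(8, 1, 32, 32) for _ in range(3))
loss = joint_loss(net, a, p, n)
loss.backward()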

Related Material

@InProceedings{Yang_2017_ICCV,
author = {Yang, Tsun-Yi and Hsu, Jo-Han and Lin, Yen-Yu and Chuang, Yung-Yu},
title = {DeepCD: Learning Deep Complementary Descriptors for Patch Representations},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017},
pages = {3314-3322}
}