SCNet: Learning Semantic Correspondence

Kai Han, Rafael S. Rezende, Bumsub Ham, Kwan-Yee K. Wong, Minsu Cho, Cordelia Schmid, Jean Ponce; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1831-1840

Abstract

This paper addresses the problem of establishing semantic correspondences between images depicting different instances of the same object or scene category. Previous approaches focus on either combining a spatial regularizer with hand-crafted features, or learning a correspondence model for appearance only. We propose instead a convolutional neural network architecture, called SCNet, for learning a geometrically plausible model for semantic correspondence. SCNet uses region proposals as matching primitives, and explicitly incorporates geometric consistency in its loss function. It is trained on image pairs obtained from the PASCAL VOC 2007 keypoint dataset, and a comparative evaluation on several standard benchmarks demonstrates that the proposed approach substantially outperforms both recent deep learning architectures and previous methods based on hand-crafted features.
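The abstract's core idea can be illustrated with a small sketch: score candidate matches between region proposals by an appearance term (descriptor similarity) modulated by a geometric consistency term (agreement of the translation each pair implies with the consensus of other strong pairs). This is a minimal, hypothetical illustration of that combination, not the SCNet architecture or its trained similarity function; all names and the Gaussian consistency weighting are assumptions.

```python
import numpy as np

def match_scores(feat_a, feat_b, boxes_a, boxes_b, sigma=0.5):
    """Score candidate matches between region proposals of two images.

    Hypothetical sketch of appearance similarity combined with geometric
    consistency, in the spirit of the abstract; NOT the authors' network.
    feat_*:  (N, D) proposal descriptors.
    boxes_*: (N, 4) proposal boxes [x, y, w, h], normalized to [0, 1].
    Returns an (Na, Nb) score matrix.
    """
    # Appearance term: cosine similarity between proposal descriptors.
    fa = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    fb = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    app = fa @ fb.T                                        # (Na, Nb)

    # Geometry term: each candidate pair (i, j) implies a translation
    # between box centers; pairs whose translation agrees with the
    # appearance-weighted consensus offset score higher (a crude
    # stand-in for a learned geometric consistency term).
    centers_a = boxes_a[:, :2] + boxes_a[:, 2:] / 2
    centers_b = boxes_b[:, :2] + boxes_b[:, 2:] / 2
    offsets = centers_b[None, :, :] - centers_a[:, None, :]  # (Na, Nb, 2)
    mean_off = np.einsum('ij,ijk->k', app, offsets) / app.sum()
    geom = np.exp(-np.sum((offsets - mean_off) ** 2, axis=-1) / sigma ** 2)
    return app * geom
```

In SCNet itself both terms are learned end-to-end and the geometric consistency enters the loss during training; the sketch only shows why jointly scoring appearance and geometry suppresses matches that are visually similar but spatially inconsistent.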

Related Material

[pdf] [arXiv]
[bibtex]
@InProceedings{Han_2017_ICCV,
author = {Han, Kai and Rezende, Rafael S. and Ham, Bumsub and Wong, Kwan-Yee K. and Cho, Minsu and Schmid, Cordelia and Ponce, Jean},
title = {SCNet: Learning Semantic Correspondence},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017},
pages = {1831-1840}
}