Co-Attention for Conditioned Image Matching
Abstract
We propose a new approach to determine correspondences between image pairs in the wild under large changes in illumination, viewpoint, context, and material. While other approaches find correspondences between pairs of images by treating the images independently, we instead condition on both images to implicitly take account of the differences between them. To achieve this, we introduce (i) a spatial attention mechanism (a co-attention module, CoAM) for conditioning the learned features on both images, and (ii) a distinctiveness score used to choose the best matches at test time. CoAM can be added to standard architectures and trained using self-supervision or supervised data, and achieves a significant performance improvement under hard conditions, e.g. large viewpoint changes. We demonstrate that models using CoAM achieve state-of-the-art or competitive results on a wide range of tasks: local matching, camera localization, 3D reconstruction, and image stylization.
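The paper does not include code on this page; the following is a minimal, illustrative sketch of the general idea of a co-attention module that conditions one image's dense features on the other's, written in PyTorch. The class name, the query/key/value projections, and the feature shapes are assumptions for illustration only, not the authors' released CoAM implementation, and the distinctiveness score used at test time is not shown.

```python
import torch
import torch.nn as nn


class CoAttention(nn.Module):
    """Illustrative co-attention sketch: every spatial location in image A's
    feature map attends over all locations in image B's feature map, so A's
    descriptors become conditioned on B (swap the arguments for B given A)."""

    def __init__(self, channels: int):
        super().__init__()
        # Hypothetical 1x1 projections for queries, keys, and values.
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (B, C, H, W) dense feature maps from a shared backbone.
        b, c, h, w = feat_a.shape
        q = self.query(feat_a).flatten(2).transpose(1, 2)  # (B, H*W, C)
        k = self.key(feat_b).flatten(2)                    # (B, C, H*W)
        v = self.value(feat_b).flatten(2).transpose(1, 2)  # (B, H*W, C)

        # Spatial attention of each location in A over all locations in B.
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)     # (B, H*W, H*W)
        attended = (attn @ v).transpose(1, 2).reshape(b, c, h, w)

        # Concatenate the conditioned context with the original features.
        return torch.cat([feat_a, attended], dim=1)        # (B, 2C, H, W)


# Example usage with hypothetical 256-channel backbone features:
# coam = CoAttention(256)
# cond_a = coam(fa, fb)   # A's features conditioned on B
# cond_b = coam(fb, fa)   # B's features conditioned on A
```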
Related Material
[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Wiles_2021_CVPR,
  author    = {Wiles, Olivia and Ehrhardt, Sebastien and Zisserman, Andrew},
  title     = {Co-Attention for Conditioned Image Matching},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021},
  pages     = {15920-15929}
}