Geometrized Transformer for Self-Supervised Homography Estimation

Jiazhen Liu, Xirong Li; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 9556-9565

Abstract


For homography estimation, we propose the Geometrized Transformer (GeoFormer), a new detector-free feature matching method. Current detector-free methods, e.g., LoFTR, lack an effective means to accurately localize small, and thus computationally feasible, regions over which to compute cross-attention. We resolve this challenge with an extremely simple idea: using classical RANSAC geometry for attentive region search. Given the coarse matches produced by LoFTR, a homography is obtained with ease. Such a homography allows us to compute cross-attention in a focused manner: the key/value sets required by Transformers can be reduced to small fixed-size regions rather than an entire image. Local features can thus be enhanced by standard Transformers. We integrate GeoFormer into the LoFTR framework. By minimizing a multi-scale, cross-entropy-based matching loss on auto-generated training data, the network is trained in a fully self-supervised manner. Extensive experiments are conducted on multiple real-world datasets covering natural images, heavily manipulated pictures, and retinal images. The proposed method compares favorably against the state of the art.
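To make the core idea concrete, below is a minimal sketch (not the authors' released code) of geometry-guided cross-attention: a homography is fitted to coarse matches with OpenCV's RANSAC, each query location in one image is projected into the other, and attention is computed only over a small fixed-size window of keys/values around the projection. The function names, the window size, and the feature-map shapes are illustrative assumptions, not details from the paper.

# A minimal sketch of geometry-guided cross-attention under simplifying
# assumptions: fit a homography to coarse matches with RANSAC, project every
# query location of image A into image B, and attend only to a small
# fixed-size window of B's features around each projected point.
import cv2
import numpy as np
import torch


def ransac_homography(pts_a: np.ndarray, pts_b: np.ndarray) -> np.ndarray:
    """Estimate a 3x3 homography from coarse matches ((N, 2) arrays) via RANSAC."""
    H, _ = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, ransacReprojThreshold=3.0)
    return H


def geometry_guided_cross_attention(feat_a, feat_b, H, window=5):
    """Enhance feat_a (C, Ha, Wa) by attending to (window x window) regions of
    feat_b (C, Hb, Wb), located via the homography H mapping A coords to B coords."""
    C, Ha, Wa = feat_a.shape
    _, Hb, Wb = feat_b.shape

    # Query coordinates in image A, in (x, y) order for the homography.
    ys, xs = torch.meshgrid(torch.arange(Ha), torch.arange(Wa), indexing="ij")
    coords_a = torch.stack([xs, ys], dim=-1).reshape(-1, 2).float()     # (N, 2)

    # Project A's coordinates into B (assumes non-degenerate denominators).
    ones = torch.ones(coords_a.shape[0], 1)
    proj = (torch.from_numpy(H).float() @ torch.cat([coords_a, ones], 1).T).T
    proj = proj[:, :2] / proj[:, 2:3].clamp(min=1e-8)                   # (N, 2)

    # Fixed-size key/value windows in B around each projected point.
    r = window // 2
    offs = torch.arange(-r, r + 1).float()
    dx, dy = torch.meshgrid(offs, offs, indexing="xy")
    offsets = torch.stack([dx, dy], dim=-1).reshape(-1, 2)              # (w*w, 2)
    kv_xy = proj[:, None, :] + offsets[None, :, :]                      # (N, w*w, 2)
    kv_xy[..., 0] = kv_xy[..., 0].clamp(0, Wb - 1)
    kv_xy[..., 1] = kv_xy[..., 1].clamp(0, Hb - 1)
    kv_idx = (kv_xy[..., 1].round() * Wb + kv_xy[..., 0].round()).long()

    q = feat_a.reshape(C, -1).T                                         # (N, C)
    kv = feat_b.reshape(C, -1).T[kv_idx]                                # (N, w*w, C)

    # Standard scaled dot-product attention, restricted to window^2 keys.
    attn = torch.softmax((q[:, None, :] * kv).sum(-1) / C ** 0.5, dim=-1)
    out = (attn[:, None, :] @ kv).squeeze(1)                            # (N, C)
    return out.T.reshape(C, Ha, Wa)

In the full pipeline described in the abstract, the coarse matches would come from LoFTR's coarse matching stage, and the geometry-focused attention output would feed standard Transformer layers; this sketch omits learned projections, multi-head attention, and the multi-scale matching loss.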

Related Material


@InProceedings{Liu_2023_ICCV,
    author    = {Liu, Jiazhen and Li, Xirong},
    title     = {Geometrized Transformer for Self-Supervised Homography Estimation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {9556-9565}
}