HOMO-Feature: Cross-Arbitrary-Modal Image Matching with Homomorphism of Organized Major Orientation

Chenzhong Gao, Wei Li, Desheng Weng; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 10538-10548

Abstract


We explore invariant feature extraction and matching for cross-arbitrary-modal images and propose a purely handcrafted full-chain algorithm, Homomorphism of Organized Major Orientation (HOMO). Instead of relying on deep models for data-driven black-box learning, we introduce a Major Orientation Map (MOM) that effectively combats modal differences between images. To handle the rotation, scale, and texture diversity of cross-modal images, HOMO incorporates a novel, universally designed Generalized-Polar descriptor (GPolar) and a Multi-scale Strategy (MsS), giving it well-rounded matching capability. HOMO achieves the best overall feature-matching performance on several broadly cross-modal datasets in challenging comparisons with a set of state-of-the-art methods comprising 7 traditional algorithms and 10 deep network models. We also propose a dataset named General Cross-modal Zone (GCZ), which demonstrates practical value. Code and datasets are available at https://github.com/MrPingQi/HOMO_Feature_ImgMatching.
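The abstract does not spell out how the Major Orientation Map is computed; the paper and code should be consulted for the actual definition. Purely as an illustrative sketch (not the authors' algorithm), one plausible reading, a per-pixel dominant gradient orientation taken modulo π so that contrast-inverted modalities agree, could look like this; the function name `major_orientation_map` and all parameters are hypothetical:

```python
import numpy as np

def major_orientation_map(img, n_bins=8, win=5):
    """Illustrative sketch of a per-pixel dominant-orientation map.

    NOT the paper's MOM: quantizes gradient orientation (mod pi, so
    dark-to-light and light-to-dark edges coincide) into n_bins and
    returns, per pixel, the magnitude-weighted modal bin in a
    win x win neighborhood.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.mod(np.arctan2(gy, gx), np.pi)
    bins = np.minimum((ori / np.pi * n_bins).astype(int), n_bins - 1)

    h, w = img.shape
    pad = win // 2
    bins_p = np.pad(bins, pad, mode='edge')
    mag_p = np.pad(mag, pad, mode='edge')

    # Accumulate a magnitude-weighted orientation histogram per pixel
    # by sliding the window via shifted views.
    hist = np.zeros((n_bins, h, w))
    for dy in range(win):
        for dx in range(win):
            b = bins_p[dy:dy + h, dx:dx + w]
            m = mag_p[dy:dy + h, dx:dx + w]
            for k in range(n_bins):
                hist[k] += m * (b == k)
    return hist.argmax(axis=0)
```

On a purely horizontal intensity ramp every pixel's dominant bin is 0 (orientation 0), while the transposed ramp maps to the bin at π/2, illustrating how such a map abstracts away absolute intensity while keeping edge structure.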

Related Material

BibTeX:
@InProceedings{Gao_2025_ICCV,
  author    = {Gao, Chenzhong and Li, Wei and Weng, Desheng},
  title     = {HOMO-Feature: Cross-Arbitrary-Modal Image Matching with Homomorphism of Organized Major Orientation},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2025},
  pages     = {10538-10548}
}