ICON: Learning Regular Maps Through Inverse Consistency

Hastings Greer, Roland Kwitt, François-Xavier Vialard, Marc Niethammer; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 3396-3405

Abstract


Learning maps between data samples is fundamental. Applications range from representation learning, image translation and generative modeling, to the estimation of spatial deformations. Such maps relate feature vectors, or map between feature spaces. Well-behaved maps should be regular, which can be imposed explicitly or may emanate from the data itself. We explore what induces regularity for spatial transformations, e.g., when computing image registrations. Classical optimization-based models compute maps between pairs of samples and rely on an appropriate regularizer for well-posedness. Recent deep learning approaches have attempted to avoid using such regularizers altogether by relying on the sample population instead. We explore whether it is possible to obtain spatial regularity using an inverse consistency loss only, and elucidate what explains map regularity in such a context. We find that deep networks combined with an inverse consistency loss and randomized off-grid interpolation yield well-behaved, approximately diffeomorphic, spatial transformations. Despite the simplicity of this approach, our experiments present compelling evidence, on both synthetic and real data, that regular maps can be obtained without carefully tuned explicit regularizers, while achieving competitive registration performance.
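The core objective described above, stripped of the registration networks, is that a forward map and a backward map composed together should be close to the identity, measured at randomly sampled (off-grid) points. The following is a minimal hypothetical NumPy sketch of such an inverse consistency penalty on 1D maps; it is an illustration of the idea only, not the paper's implementation, and the maps `g`/`g_inv` are made-up toy examples:

```python
import numpy as np

def inverse_consistency_loss(phi_ab, phi_ba, n_samples=100, rng=None):
    """Mean squared deviation of phi_ab(phi_ba(x)) from the identity,
    evaluated at randomly sampled points in [0, 1] (mirroring, at a
    very high level, the randomized off-grid evaluation in the text)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = rng.random(n_samples)          # random off-grid sample locations
    return float(np.mean((phi_ab(phi_ba(x)) - x) ** 2))

# A pair of exact inverses (toy affine maps) incurs zero loss,
# while composing a map with itself does not.
g = lambda x: 0.5 * x + 0.1
g_inv = lambda x: 2.0 * (x - 0.1)
loss_inverse = inverse_consistency_loss(g, g_inv)   # ~0.0
loss_mismatch = inverse_consistency_loss(g, g)      # clearly positive
```

In the paper's setting the two maps are produced by a network evaluated on an image pair in both orderings, and this penalty is the training loss that, together with off-grid interpolation, empirically drives the maps toward diffeomorphisms.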

Related Material


[bibtex]
@InProceedings{Greer_2021_ICCV,
  author    = {Greer, Hastings and Kwitt, Roland and Vialard, Fran\c{c}ois-Xavier and Niethammer, Marc},
  title     = {ICON: Learning Regular Maps Through Inverse Consistency},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {3396-3405}
}