Jointly Aligning Millions of Images With Deep Penalised Reconstruction Congealing

Roberto Annunziata, Christos Sagonas, Jacques Cali; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 81-90

Abstract


Extrapolating fine-grained pixel-level correspondences in a fully unsupervised manner from a large set of misaligned images can benefit several computer vision and graphics problems, e.g. co-segmentation, super-resolution, image edit propagation, structure-from-motion, and 3D reconstruction. Several joint image alignment and congealing techniques have been proposed to tackle this problem, but their limited robustness to initialisation, poor scalability to large datasets, and modest alignment accuracy hamper their wide applicability. To overcome these limitations, we propose an unsupervised joint alignment method that leverages a densely fused spatial transformer network to estimate the warping parameters for each image and a low-capacity auto-encoder whose reconstruction error is used as an auxiliary measure of joint alignment. Experimental results on digits from multiple versions of MNIST (i.e., original, perturbed, affNIST, and infiMNIST) and on faces from LFW show that our approach is capable of aligning millions of images with high accuracy and robustness to different levels and types of perturbation. Moreover, qualitative and quantitative results suggest that the proposed method outperforms state-of-the-art approaches in both alignment quality and robustness to initialisation.
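
The sketch below is not the paper's architecture or objective; it is a minimal PyTorch illustration, under stated assumptions, of the general idea the abstract describes: a spatial transformer predicts a warp per image, a deliberately low-capacity auto-encoder reconstructs the warped images, and the reconstruction error (plus a penalty keeping warps near the identity) is minimised jointly and without labels. A plain affine STN stands in for the densely fused STN of the paper, and all names (AffineSTN, LowCapacityAE, training_step, reg_weight) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineSTN(nn.Module):
    """Predicts per-image affine parameters and warps the image with a sampling grid."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 6),
        )
        # Initialise the regression head to the identity transform.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False), theta

class LowCapacityAE(nn.Module):
    """Narrow-bottleneck auto-encoder: well-aligned images are easier to reconstruct."""
    def __init__(self, bottleneck=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, bottleneck), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(bottleneck, 28 * 28), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x)).view_as(x)

stn, ae = AffineSTN(), LowCapacityAE()
opt = torch.optim.Adam(list(stn.parameters()) + list(ae.parameters()), lr=1e-3)

def training_step(batch, reg_weight=0.01):
    # batch: (N, 1, 28, 28) misaligned images, e.g. MNIST digits.
    warped, theta = stn(batch)            # warp each image towards a common frame
    recon = ae(warped)                    # reconstruct through the low-capacity auto-encoder
    rec_loss = F.mse_loss(recon, warped)  # reconstruction error as the alignment measure
    identity = torch.tensor([1., 0., 0., 0., 1., 0.], device=theta.device).view(1, 2, 3)
    reg = ((theta - identity) ** 2).mean()  # penalise warps drifting far from identity
    loss = rec_loss + reg_weight * reg
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

Because the bottleneck is too small to memorise many distinct poses, the reconstruction error decreases only when the predicted warps bring the images into a common frame, which is why it can serve as an unsupervised signal for joint alignment.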

Related Material


[bibtex]
@InProceedings{Annunziata_2019_ICCV,
author = {Annunziata, Roberto and Sagonas, Christos and Cali, Jacques},
title = {Jointly Aligning Millions of Images With Deep Penalised Reconstruction Congealing},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}