A Deep Step Pattern Representation for Multimodal Retinal Image Registration

Jimmy Addison Lee, Peng Liu, Jun Cheng, Huazhu Fu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 5077-5086

Abstract


This paper presents a novel feature-based method built upon a convolutional neural network (CNN) to learn a deep representation for multimodal retinal image registration. We coin the algorithm deep step patterns, DeepSPa for short. Most existing deep learning based methods require a set of manually labeled training data with known corresponding spatial transformations, which limits the size of training datasets. By contrast, our method is fully automatic and scales well to different image modalities with no human intervention. We generate feature classes from simple step patterns within patches of connecting edges formed by vascular junctions in multiple retinal imaging modalities. We leverage a CNN to learn and optimize the input patches used for image registration. Spatial transformations are estimated based on the output probabilities of the fully connected layer of the CNN for a pair of images. One of the key advantages of the proposed algorithm is its robustness to non-linear intensity changes, which are widespread in retinal images due to differences between acquisition modalities. We validate our algorithm on extensive challenging datasets comprising poor-quality multimodal retinal images adversely affected by pathologies (diseases), speckle noise and low resolution. The experimental results demonstrate its robustness and accuracy over state-of-the-art multimodal image registration algorithms.
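To make the abstract's pipeline concrete, the sketch below illustrates the general idea of classifying junction patches with a small CNN and using the fully connected layer's class probabilities to match patches across modalities. It is not the authors' implementation: the patch size (32x32 grayscale), number of step-pattern classes (16), layer sizes, and the probability-based matching function are all illustrative assumptions.

```python
# Illustrative sketch only; architecture and matching strategy are assumptions,
# not the DeepSPa implementation described in the paper.
import torch
import torch.nn as nn

class StepPatternNet(nn.Module):
    """Small CNN that classifies edge/junction patches into step-pattern classes."""
    def __init__(self, num_classes: int = 16):  # 16 classes is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),            # fully connected output layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

def match_patches(net: StepPatternNet,
                  patches_a: torch.Tensor,
                  patches_b: torch.Tensor) -> torch.Tensor:
    """Match junction patches from two modalities by comparing class probabilities.

    The matched patch coordinates could then be fed to a standard transform
    estimator (e.g. RANSAC-based affine fitting) to recover the registration.
    """
    with torch.no_grad():
        pa = torch.softmax(net(patches_a), dim=1)   # (Na, C) class probabilities
        pb = torch.softmax(net(patches_b), dim=1)   # (Nb, C) class probabilities
    similarity = pa @ pb.T                          # pairwise probability similarity
    return similarity.argmax(dim=1)                 # best match in B for each patch in A
```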

Related Material


[pdf]
[bibtex]
@InProceedings{Lee_2019_ICCV,
author = {Lee, Jimmy Addison and Liu, Peng and Cheng, Jun and Fu, Huazhu},
title = {A Deep Step Pattern Representation for Multimodal Retinal Image Registration},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}