@InProceedings{Vasa_2025_WACV,
  author    = {Vasa, Vamsi Krishna S and Qiu, Peijie and Zhu, Wenhui and Xiong, Yujian and Dumitrascu, Oana and Wang, Yalin},
  title     = {Context-Aware Optimal Transport Learning for Retinal Fundus Image Enhancement},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {4016-4025}
}
Context-Aware Optimal Transport Learning for Retinal Fundus Image Enhancement
Abstract
Retinal fundus photography offers a non-invasive way to diagnose and monitor a variety of retinal diseases, but it is prone to inherent quality glitches arising from systemic imperfections or operator- and patient-related factors. High-quality retinal images, however, are crucial for accurate diagnosis and automated analysis. Fundus image enhancement is typically formulated as a distribution alignment problem that seeks a one-to-one mapping between a low-quality image and its high-quality counterpart. This paper proposes a context-informed optimal transport (OT) learning framework for unpaired fundus image enhancement. In contrast to standard generative image enhancement methods, which struggle to handle contextual information (e.g., over-tampered local structures and unwanted artifacts), the proposed context-aware OT learning paradigm better preserves local structures and minimizes unwanted artifacts. Leveraging deep contextual features, we derive the proposed context-aware OT from the earth mover's distance and show that it enjoys a solid theoretical guarantee. Experimental results on a large-scale dataset demonstrate the superiority of the proposed method over several state-of-the-art supervised and unsupervised methods in terms of signal-to-noise ratio and structural similarity index, as well as on two downstream tasks. The code is available at https://github.com/Retinal-Research/Contextual-OT.
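The abstract derives its context-aware OT from the earth mover's distance (EMD) over deep contextual features. As a minimal conceptual sketch only (not the paper's actual training objective — the function name, cosine ground cost, and uniform weights are illustrative assumptions), the EMD between two equally sized sets of feature vectors with uniform weights reduces to a linear assignment problem, which can be solved exactly:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def contextual_emd(feats_a, feats_b):
    """Earth mover's distance between two equally sized sets of
    contextual feature vectors under a cosine-distance ground cost.
    With uniform marginals and equal set sizes, the optimal transport
    plan is a permutation, so exact EMD reduces to linear assignment."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T                 # pairwise cosine distances
    rows, cols = linear_sum_assignment(cost)  # optimal matching
    return cost[rows, cols].mean()       # average transport cost

# Toy usage: a feature set has zero transport cost to any
# reordering of itself, since EMD ignores the ordering.
rng = np.random.default_rng(0)
f = rng.normal(size=(8, 16))
print(contextual_emd(f, f[::-1]) < 1e-9)
```

In a learning setting, such a transport cost between deep features of the enhanced and reference images would serve as a structure-preserving alignment signal; differentiable relaxations (e.g., Sinkhorn iterations) are commonly used in place of the exact assignment.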