Visual Interpretability Analysis of Deep CNNs Using an Adaptive Threshold Method on Diabetic Retinopathy Images

George Ioannou, Tasos Papagiannis, Thanos Tagaris, Georgios Alexandridis, Andreas Stafylopatis; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 480-486

Abstract


Deep neural networks have been dominating the field of computer vision, achieving exceptional performance on object detection and pattern recognition. However, despite the highly accurate predictions of these models, the continuous increase in depth and complexity comes at the cost of interpretability, making the task of explaining the reasoning behind these predictions very challenging. In this paper, an analysis of state-of-the-art approaches for interpreting the networks' representations is carried out on two Diabetic Retinopathy image datasets, IDRiD and DDR. Furthermore, these techniques are compared on the task of image segmentation over the same datasets. The goal is to discover which method produces the attention maps that best solve the segmentation problem without the network ever being trained for that specific task. To accomplish this, we propose an adaptive threshold method that transforms the attention maps into a representation better suited for segmentation. Experiments were conducted over multiple architectures to ensure the robustness of the results.
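To make the idea concrete, the following is a minimal Python sketch of turning a CNN attention map into a binary segmentation mask with a per-image adaptive threshold. The function name attention_to_mask and the mean + k * std thresholding rule are illustrative assumptions, not the paper's exact method, which is described in the full text.

import numpy as np

def attention_to_mask(attention, k=1.0):
    """Binarize a 2-D attention map (e.g., a Grad-CAM heatmap) with an
    adaptive, per-image threshold. The mean + k * std rule is a hypothetical
    stand-in for the paper's adaptive threshold, shown only to illustrate
    the transformation from attention map to segmentation mask."""
    att = attention.astype(np.float32)
    # Normalize to [0, 1] so the statistics are comparable across images.
    att = (att - att.min()) / (att.max() - att.min() + 1e-8)
    # Adaptive threshold computed from this image's own statistics.
    threshold = att.mean() + k * att.std()
    return (att >= threshold).astype(np.uint8)

# Usage example with a random array standing in for a real attention map.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    heatmap = rng.random((224, 224))
    mask = attention_to_mask(heatmap, k=1.0)
    print("foreground pixels:", int(mask.sum()))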

Related Material


[pdf]
[bibtex]
@InProceedings{Ioannou_2021_ICCV,
    author    = {Ioannou, George and Papagiannis, Tasos and Tagaris, Thanos and Alexandridis, Georgios and Stafylopatis, Andreas},
    title     = {Visual Interpretability Analysis of Deep CNNs Using an Adaptive Threshold Method on Diabetic Retinopathy Images},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2021},
    pages     = {480-486}
}