Generating Visual Explanations from Deep Networks using Implicit Neural Representations

Michal Byra, Henrik Skibbe; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 3310-3319

Abstract


Explaining deep learning models in a way that humans can easily understand is essential for responsible artificial intelligence applications. Attribution methods constitute an important area of explainable deep learning. The attribution problem involves finding the parts of the network's input that are most responsible for the model's output. In this work, we demonstrate that implicit neural representations (INRs) constitute a good framework for generating visual explanations. First, we utilize coordinate-based implicit networks to reformulate and extend the extremal perturbations technique and generate attribution masks. Experimental results confirm the usefulness of our method. For instance, by properly conditioning the implicit network, we obtain attribution masks that are well-behaved with respect to the imposed area constraints. Second, we present an iterative INR-based method that can be used to generate multiple non-overlapping attribution masks for the same image. We show that a deep learning model may associate the image label both with the appearance of the object of interest and with areas and textures that usually accompany the object. Our study demonstrates that implicit networks are well-suited for the generation of attribution masks and can provide interesting insights into the performance of deep learning models.
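To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of an INR-based attribution mask: a small coordinate-based MLP maps pixel coordinates to a soft mask, which is optimized, in the spirit of extremal perturbations, to preserve the classifier's score on the masked image while matching a target mask area. The architecture, loss weighting, and all names here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class MaskINR(nn.Module):
    """Hypothetical coordinate network: maps (x, y) in [-1, 1]^2
    to a mask value in [0, 1]."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):
        return torch.sigmoid(self.net(coords))

def generate_mask(classifier, image, label, area=0.1, steps=400, lr=1e-3):
    """Optimize the INR so the masked image keeps the target class
    score while the mask covers roughly `area` of the image.
    `image` is assumed to be a (1, C, H, W) tensor and `classifier`
    a frozen network returning class logits."""
    _, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)

    inr = MaskINR()
    opt = torch.optim.Adam(inr.parameters(), lr=lr)
    for _ in range(steps):
        mask = inr(coords).reshape(1, 1, h, w)
        score = classifier(image * mask)[0, label]
        # Soft area constraint: penalize deviation from the target area.
        area_loss = (mask.mean() - area) ** 2
        loss = -score + 10.0 * area_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return inr(coords).reshape(h, w).detach()

The paper's iterative variant for multiple non-overlapping masks could, under the same assumptions, be obtained by repeating this optimization while penalizing overlap with previously found masks.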

Related Material


@InProceedings{Byra_2025_WACV,
    author    = {Byra, Michal and Skibbe, Henrik},
    title     = {Generating Visual Explanations from Deep Networks using Implicit Neural Representations},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {3310-3319}
}