[pdf]
[bibtex]
@InProceedings{Rhodes_2024_CVPR,
  author    = {Rhodes, Anthony and Bian, Yali and Demir, Ilke},
  title     = {Quantifying Explainability with Multi-Scale Gaussian Mixture Models},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {8223-8228}
}
Quantifying Explainability with Multi-Scale Gaussian Mixture Models
Abstract
With the increasing complexity and influence of machine learning models, the development of model explanation techniques has recently gained significant attention, giving rise to the field of Explainable Artificial Intelligence (XAI). Although there exists a vast literature on XAI methods, they are usually compared with human evaluations, model-dependent metrics, or distribution shifts. In the present work, we introduce a novel explainability comparison metric, eXplainable Multi-Scale GMM Distance (XMGD). XMGD provides a principled probabilistic framework for analyzing and quantifying any model or dataset similarity through the lens of explainability. Through experimental results, we demonstrate several critical advantages of XMGD over alternative saliency comparison metrics, including improved robustness and the ability of XMGD to illuminate fine-grained saliency comparison distinctions.
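The abstract does not spell out how XMGD is computed; the snippet below is only a minimal illustrative sketch of the general idea of comparing saliency maps through Gaussian mixture models at multiple scales. It assumes that a saliency map is treated as a density over pixel coordinates, that mixtures with increasing component counts play the role of the "scales", and that a symmetrized Monte-Carlo KL divergence between paired mixtures stands in for the distance. All function names and parameters are hypothetical and are not the paper's definition of XMGD.

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_saliency_gmms(saliency, scales=(2, 4, 8), n_samples=5000, seed=0):
    # Treat a 2-D, nonnegative saliency map as an unnormalized density over
    # pixel coordinates: sample coordinates in proportion to saliency, then
    # fit one GaussianMixture per "scale" (component count).
    rng = np.random.default_rng(seed)
    h, w = saliency.shape
    probs = saliency.ravel() / saliency.sum()
    idx = rng.choice(h * w, size=n_samples, p=probs)
    coords = np.column_stack(np.unravel_index(idx, (h, w))).astype(float)
    return [GaussianMixture(n_components=k, random_state=seed).fit(coords)
            for k in scales]

def mixture_distance(gmms_a, gmms_b, n_samples=2000):
    # Average, over scales, of a symmetrized Monte-Carlo KL divergence between
    # the paired mixtures: sample from each mixture and score the samples
    # under both.
    total = 0.0
    for ga, gb in zip(gmms_a, gmms_b):
        xa, _ = ga.sample(n_samples)
        xb, _ = gb.sample(n_samples)
        kl_ab = np.mean(ga.score_samples(xa) - gb.score_samples(xa))
        kl_ba = np.mean(gb.score_samples(xb) - ga.score_samples(xb))
        total += 0.5 * (kl_ab + kl_ba)
    return total / len(gmms_a)

# Hypothetical usage: compare the saliency maps that two explanation methods
# produce for the same image.
# gmms_a = fit_saliency_gmms(saliency_map_a)
# gmms_b = fit_saliency_gmms(saliency_map_b)
# print(mixture_distance(gmms_a, gmms_b))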
Related Material