Diffusion Models for Counterfactual Explanations

Guillaume Jeanneret, Loic Simon, Frederic Jurie; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 858-876

Abstract


Counterfactual explanations have shown promising results as a post-hoc framework for making image classifiers more explainable. In this paper, we propose DiME, a method for generating counterfactual images with recent diffusion models. By leveraging the guided generative diffusion process, our methodology shows how to use the gradients of the target classifier to generate counterfactual explanations of input instances. Further, we analyze current approaches to evaluating spurious correlations and extend the evaluation measurements by proposing a new metric: Correlation Difference. Our experimental validation shows that the proposed algorithm surpasses the previous state of the art on 5 out of 6 metrics on CelebA.
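The core mechanism the abstract refers to, classifier guidance, shifts the denoising mean at each reverse diffusion step in the direction of the gradient of the target-class log-probability. The sketch below is a minimal, hypothetical illustration of that idea, not the DiME implementation: it uses a toy logistic classifier (weights `w`) with an analytic gradient in place of a real neural classifier, and all names and the guidance formula `mu + scale * sigma**2 * grad` are assumptions based on the standard classifier-guidance literature.

```python
import numpy as np

def classifier_log_prob_grad(x, y, w):
    """Gradient of log p(y|x) for a toy logistic classifier p(y=1|x) = sigmoid(w @ x).

    For y in {0, 1}, d/dx log p(y|x) = (y - p) * w. A real implementation would
    backpropagate through a neural classifier instead.
    """
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (y - p) * w

def guided_reverse_step(mu, sigma, y, w, scale, rng):
    """One reverse diffusion step with classifier guidance (hypothetical sketch).

    mu, sigma: mean and std of the unguided denoising distribution at this step.
    The mean is shifted toward the target class y before sampling.
    """
    grad = classifier_log_prob_grad(mu, y, w)
    mu_guided = mu + scale * sigma**2 * grad  # guidance shift
    return mu_guided + sigma * rng.standard_normal(mu.shape)
```

With a positive guidance scale, the sample is nudged toward regions the classifier assigns to the target class; iterating this over all reverse steps yields an image whose prediction flips, which is the counterfactual the abstract describes.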

Related Material


[pdf] [arXiv] [code]
[bibtex]
@InProceedings{Jeanneret_2022_ACCV,
    author    = {Jeanneret, Guillaume and Simon, Loic and Jurie, Frederic},
    title     = {Diffusion Models for Counterfactual Explanations},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {858-876}
}