Text-to-Image Models for Counterfactual Explanations: A Black-Box Approach

Guillaume Jeanneret, Loïc Simon, Frédéric Jurie; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 4757-4767

Abstract

This paper addresses the challenge of generating Counterfactual Explanations (CEs): identifying and modifying the fewest features necessary to alter a classifier's prediction for a given image. Our proposed method, Text-to-Image Models for Counterfactual Explanations (TIME), is a black-box counterfactual technique based on distillation. Unlike previous methods, it requires only the image and its prediction, with no access to the classifier's structure, parameters, or gradients. Before generating the counterfactuals, TIME introduces two distinct biases into Stable Diffusion in the form of textual embeddings: the context bias, associated with the image's structure, and the class bias, linked to the class-specific features learned by the target classifier. After learning these biases, we find the optimal latent code by applying the classifier's predicted class token, then regenerate the image conditioned on the target embedding, producing the counterfactual explanation. Extensive empirical studies validate that TIME generates explanations of comparable effectiveness even though it operates in a black-box setting.
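
To make the pipeline concrete, below is a minimal sketch of the steps the abstract describes (bias distillation via textual-inversion-style optimization, latent-code inversion under the predicted class token, and regeneration under the target token), built on Hugging Face diffusers. The checkpoint, prompt templates, and the helpers `images_predicted_as` and `ddim_invert` are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the TIME pipeline; names marked below are assumptions.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline, DDIMScheduler

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5").to(device)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

def learn_bias_token(pipe, images, placeholder, template, steps=1000):
    """Textual-inversion-style distillation of one new token embedding.
    `images` is assumed to be a (N, 3, 512, 512) tensor scaled to [-1, 1].
    For brevity the whole embedding matrix is handed to the optimizer; in
    practice only the new token's row is updated and all else is frozen."""
    pipe.tokenizer.add_tokens([placeholder])
    pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
    embeds = pipe.text_encoder.get_input_embeddings().weight
    opt = torch.optim.AdamW([embeds], lr=5e-4)
    ids = pipe.tokenizer(template.format(placeholder), padding="max_length",
                         truncation=True,
                         max_length=pipe.tokenizer.model_max_length,
                         return_tensors="pt").input_ids.to(device)
    for _ in range(steps):
        img = images[torch.randint(len(images), (1,))].to(device)
        with torch.no_grad():  # the VAE stays frozen
            latents = pipe.vae.encode(img).latent_dist.sample()
            latents = latents * pipe.vae.config.scaling_factor
        noise = torch.randn_like(latents)
        t = torch.randint(0, pipe.scheduler.config.num_train_timesteps,
                          (1,), device=device)
        noisy = pipe.scheduler.add_noise(latents, noise, t)
        cond = pipe.text_encoder(ids)[0]
        pred = pipe.unet(noisy, t, encoder_hidden_states=cond).sample
        F.mse_loss(pred, noise).backward()  # standard denoising loss
        opt.step(); opt.zero_grad()

# 1) Distill the two biases: <ctx> captures the dataset's structure, and each
#    <c_k> captures the features the black-box classifier ties to class k
#    (the images are grouped by the classifier's *predictions* only).
learn_bias_token(pipe, dataset_images, "<ctx>", "a {} photo")
for k in classes:
    learn_bias_token(pipe, images_predicted_as(k), f"<c_{k}>",
                     "a <ctx> photo of {}")

# 2) Invert the query image to a latent code under the prompt holding the
#    classifier's *predicted* token (ddim_invert is a hypothetical helper,
#    e.g. DDIM inversion), then regenerate conditioned on the *target*
#    token to obtain the counterfactual.
latents = ddim_invert(pipe, query_image,
                      prompt=f"a <ctx> photo of <c_{pred_class}>")
counterfactual = pipe(prompt=f"a <ctx> photo of <c_{target_class}>",
                      latents=latents).images[0]
```

Because the biases live purely in the text-embedding space while Stable Diffusion itself stays frozen, the only signal ever drawn from the classifier is its predicted label, which is what makes the approach black-box.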

Related Material

[bibtex]
@InProceedings{Jeanneret_2024_WACV,
    author    = {Jeanneret, Guillaume and Simon, Lo{\"\i}c and Jurie, Fr\'ed\'eric},
    title     = {Text-to-Image Models for Counterfactual Explanations: A Black-Box Approach},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {4757-4767}
}