PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits

Maximilian Dreyer, Erblina Purelku, Johanna Vielhaben, Wojciech Samek, Sebastian Lapuschkin; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 8212-8217

Abstract


The field of mechanistic interpretability aims to study the role of individual neurons in Deep Neural Networks. Single neurons, however, can act polysemantically and encode multiple (unrelated) features, which renders their interpretation difficult. We present a method for disentangling the polysemanticity of any Deep Neural Network by decomposing a polysemantic neuron into multiple monosemantic "virtual" neurons. This is achieved by identifying the relevant sub-graph ("circuit") for each "pure" feature. We demonstrate how our approach allows us to find and disentangle various polysemantic units of ResNet models trained on ImageNet. When evaluating feature visualizations with CLIP, our method effectively disentangles representations, improving upon methods based on neuron activations. Our code is available at https://github.com/maxdreyer/PURE.
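As a rough illustration of the idea (a minimal sketch, not the authors' implementation): for each of a neuron's top-activating samples, one can attribute the neuron's activation to lower-layer channels, cluster the resulting attribution ("circuit") vectors, and treat each cluster as one "virtual" neuron; the coherence of each virtual neuron's exemplars can then be gauged via pairwise CLIP embedding similarity. The sketch below uses gradient times activation as a simple stand-in for the relevance-based attributions described in the paper, and fixes k-means with a hand-chosen number of clusters; all function names are illustrative.

import numpy as np
import torch
from sklearn.cluster import KMeans
from transformers import CLIPModel, CLIPProcessor

def circuit_vectors(model, lower_layer, upper_layer, neuron_idx, samples):
    """Attribute the target neuron's activation to lower-layer channels
    for each sample (gradient x activation as a stand-in for relevance)."""
    acts = {}
    h1 = lower_layer.register_forward_hook(lambda m, i, o: acts.__setitem__("low", o))
    h2 = upper_layer.register_forward_hook(lambda m, i, o: acts.__setitem__("high", o))
    vecs = []
    for x in samples:  # each x: a (3, H, W) image tensor
        model(x.unsqueeze(0))
        target = acts["high"][0, neuron_idx].sum()  # neuron's total activation
        grad = torch.autograd.grad(target, acts["low"])[0]
        rel = (grad * acts["low"])[0].sum(dim=(1, 2))  # per-channel relevance
        vecs.append(rel.detach().cpu().numpy())
    h1.remove(); h2.remove()
    return np.stack(vecs)

def virtual_neurons(vectors, n_clusters=2):
    """Cluster circuit vectors; each cluster acts as one 'virtual' neuron."""
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)

def clip_purity(images):
    """Mean pairwise cosine similarity of CLIP image embeddings; higher
    values indicate a more coherent (monosemantic) set of exemplars."""
    clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    with torch.no_grad():
        emb = clip.get_image_features(**proc(images=images, return_tensors="pt"))
    emb = emb / emb.norm(dim=-1, keepdim=True)
    n = len(images)
    return ((emb @ emb.T).sum() - n) / (n * (n - 1))  # exclude diagonal

In this sketch, samples assigned to the same cluster share a lower-layer circuit and are read as one disentangled feature; comparing clip_purity of each cluster against that of the undivided neuron's exemplars mirrors, in spirit, the CLIP-based evaluation mentioned in the abstract.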

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Dreyer_2024_CVPR,
    author    = {Dreyer, Maximilian and Purelku, Erblina and Vielhaben, Johanna and Samek, Wojciech and Lapuschkin, Sebastian},
    title     = {PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {8212-8217}
}