Disentangling Neuron Representations With Concept Vectors

Laura O'Mahony, Vincent Andrearczyk, Henning Müller, Mara Graziani; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 3770-3775

Abstract


Mechanistic interpretability aims to understand how models store representations by breaking down neural networks into interpretable units. However, the occurrence of polysemantic neurons, neurons that respond to multiple unrelated features, makes interpreting individual neurons challenging. This has motivated the search for meaningful vectors in activation space, known as concept vectors, in place of individual neurons. The main contribution of this paper is a method to disentangle polysemantic neurons into concept vectors that encapsulate distinct features. Our method can search for fine-grained concepts according to the user's desired level of concept separation. The analysis shows that polysemantic neurons can be disentangled into directions consisting of linear combinations of neurons. Our evaluations show that the discovered concept vectors encode coherent, human-understandable features.
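
To make the idea of a concept vector concrete, below is a minimal, hypothetical sketch of how a direction in activation space can be expressed as a linear combination of neurons: it clusters the activation patterns of the inputs that most strongly activate a chosen neuron and treats the normalized cluster centroids as candidate concept vectors. The function name, the k-means clustering choice, and the use of the cluster count as a stand-in for the desired level of concept separation are all illustrative assumptions, not the authors' exact procedure.

# Illustrative sketch only, not the method from the paper.
import numpy as np
from sklearn.cluster import KMeans

def candidate_concept_vectors(acts, neuron, top_k=200, n_concepts=2):
    """acts: (n_samples, n_neurons) activations from one layer.
    Returns unit-norm directions in activation space."""
    # Select the inputs that most strongly activate the chosen neuron.
    top = np.argsort(acts[:, neuron])[-top_k:]
    # Cluster their activation patterns; n_concepts is a proxy for
    # the desired level of concept separation.
    km = KMeans(n_clusters=n_concepts, n_init=10).fit(acts[top])
    centroids = km.cluster_centers_
    # Each normalized centroid is a direction in activation space,
    # i.e. a linear combination of neurons.
    return centroids / np.linalg.norm(centroids, axis=1, keepdims=True)

# Usage with random activations standing in for a real layer.
rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 512))
vecs = candidate_concept_vectors(acts, neuron=42)
print(vecs.shape)  # (2, 512): two candidate concept vectors
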

Related Material


BibTeX
@InProceedings{O'Mahony_2023_CVPR,
    author    = {O'Mahony, Laura and Andrearczyk, Vincent and M\"uller, Henning and Graziani, Mara},
    title     = {Disentangling Neuron Representations With Concept Vectors},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {3770-3775}
}