Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models

Matthew Kowal, Richard P. Wildes, Konstantinos G. Derpanis; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 10895-10905

Abstract


Understanding what deep network models capture in their learned representations is a fundamental challenge in computer vision. We present a new methodology for understanding such vision models, the Visual Concept Connectome (VCC), which discovers human-interpretable concepts and their interlayer connections in a fully unsupervised manner. Our approach simultaneously reveals fine-grained concepts at a layer, connection weightings across all layers, and is amenable to global analysis of network structure (e.g., branching pattern of hierarchical concept assemblies). Previous work yielded ways to extract interpretable concepts from single layers and examine their impact on classification, but did not afford multilayer concept analysis across an entire network architecture. Quantitative and qualitative empirical results show the effectiveness of VCCs in the domain of image classification. Also, we leverage VCCs for the application of failure mode debugging to reveal where mistakes arise in deep networks.
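To make the two ingredients concrete, the following is a minimal, hypothetical sketch of the overall idea: cluster activations at each layer into "concepts," then weight connections between concepts of adjacent layers. It is not the paper's actual algorithm (the paper's concept discovery and attribution-based interlayer weighting are more sophisticated); here clustering is a toy k-means and the connection weight is a simple co-occurrence statistic over spatial positions, all on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(x, k, iters=20):
    # Toy k-means: a stand-in for the paper's per-layer concept discovery.
    centers = x[rng.choice(len(x), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = x[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return labels

# Toy "activations": the same 500 spatial positions seen at two layers,
# with different feature dimensionalities per layer.
acts_l1 = rng.normal(size=(500, 8))   # earlier layer
acts_l2 = rng.normal(size=(500, 16))  # later layer

k1, k2 = 4, 3
c1 = kmeans(acts_l1, k1)  # concept assignment per position, layer 1
c2 = kmeans(acts_l2, k2)  # concept assignment per position, layer 2

# Interlayer connection weighting (simplified to co-occurrence):
# W[i, j] = fraction of layer-1 concept i's positions that fall in
# layer-2 concept j. Each row sums to 1.
W = np.zeros((k1, k2))
for i in range(k1):
    for j in range(k2):
        W[i, j] = np.mean(c2[c1 == i] == j)

print(np.round(W, 2))
```

Stacking such weight matrices across all consecutive layer pairs yields a graph of concepts and connections, which is the kind of structure the VCC makes available for global analysis (e.g., tracing which early-layer concepts feed a late-layer concept implicated in a failure mode).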

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Kowal_2024_CVPR,
    author    = {Kowal, Matthew and Wildes, Richard P. and Derpanis, Konstantinos G.},
    title     = {Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {10895-10905}
}