Incremental Residual Concept Bottleneck Models
Chenming Shang, Shiji Zhou, Hengyuan Zhang, Xinzhe Ni, Yujiu Yang, Yuwang Wang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 11030-11040
Abstract
Concept Bottleneck Models (CBMs) map the black-box visual representations extracted by deep neural networks onto a set of interpretable concepts and use these concepts to make predictions, enhancing the transparency of the decision-making process. Multimodal pre-trained models can match visual representations with textual concept embeddings, making it possible to obtain an interpretable concept bottleneck without expert concept annotations. Recent research has focused on concept bank establishment and high-quality concept selection. However, it is challenging to construct a comprehensive concept bank through humans or large language models, which severely limits the performance of CBMs. In this work, we propose the Incremental Residual Concept Bottleneck Model (Res-CBM) to address the challenge of concept completeness. Specifically, the residual concept bottleneck model employs a set of optimizable vectors to complete missing concepts; the incremental concept discovery module then converts these complemented vectors, whose meanings are unclear, into potential concepts from a candidate concept bank. Our approach can be applied to any user-defined concept bank as a post-hoc processing method to enhance the performance of any CBM. Furthermore, to measure the descriptive efficiency of CBMs, we propose the Concept Utilization Efficiency (CUE) metric. Experiments show that Res-CBM outperforms the current state-of-the-art methods in terms of both accuracy and efficiency, achieving performance comparable to black-box models across multiple datasets.
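The following is a minimal sketch of the mechanism the abstract describes, not the authors' implementation: a concept bottleneck built from frozen CLIP-style text embeddings, augmented with a set of optimizable residual vectors, plus a nearest-neighbor step that maps each learned residual onto a named concept from a candidate bank. All names (`ResidualConceptBottleneck`, `name_residuals`, the tensor shapes) are illustrative assumptions; details such as the number of residual vectors and the discovery procedure are simplified relative to the paper.

```python
# Sketch only: illustrates the residual-bottleneck idea under assumed shapes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualConceptBottleneck(nn.Module):
    """Concept bottleneck with K learnable residual vectors intended to
    stand in for concepts missing from a user-defined concept bank."""

    def __init__(self, concept_embeddings: torch.Tensor,
                 num_residual: int, num_classes: int):
        super().__init__()
        # Frozen text embeddings of the known concepts, shape (C, D).
        self.register_buffer("concepts",
                             F.normalize(concept_embeddings, dim=-1))
        # Optimizable vectors meant to capture missing concepts, shape (K, D).
        dim = concept_embeddings.shape[1]
        self.residual = nn.Parameter(torch.randn(num_residual, dim) * 0.02)
        # Final linear layer over all C + K concept activations.
        self.classifier = nn.Linear(concept_embeddings.shape[0] + num_residual,
                                    num_classes)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        x = F.normalize(image_features, dim=-1)                  # (B, D)
        bank = torch.cat([self.concepts,
                          F.normalize(self.residual, dim=-1)], dim=0)
        scores = x @ bank.t()                                    # (B, C + K)
        return self.classifier(scores)


def name_residuals(residual: torch.Tensor,
                   candidate_embeddings: torch.Tensor,
                   candidate_names: list[str]) -> list[str]:
    """Incremental concept discovery (sketch): assign each learned residual
    vector the name of its nearest concept in a candidate concept bank."""
    r = F.normalize(residual, dim=-1)
    c = F.normalize(candidate_embeddings, dim=-1)
    nearest = (r @ c.t()).argmax(dim=-1)
    return [candidate_names[i] for i in nearest.tolist()]
```

In this reading, training optimizes the residual vectors jointly with the classifier, and the discovery step afterwards replaces each residual with an interpretable candidate concept, so the final model remains a standard CBM over named concepts.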
Related Material
[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Shang_2024_CVPR,
  author    = {Shang, Chenming and Zhou, Shiji and Zhang, Hengyuan and Ni, Xinzhe and Yang, Yujiu and Wang, Yuwang},
  title     = {Incremental Residual Concept Bottleneck Models},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {11030-11040}
}