Language Model Guided Interpretable Video Action Reasoning

Ning Wang, Guangming Zhu, HS Li, Liang Zhang, Syed Afaq Ali Shah, Mohammed Bennamoun; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 18878-18887

Abstract


Although neural networks excel at video action recognition, their "black-box" nature makes it challenging to understand the rationale behind their decisions. Recent approaches have used inherently interpretable models to analyze video actions in a manner akin to human reasoning. However, these interpretable models tend to underperform compared to their black-box counterparts. In this work, we present a new framework called the Language-guided Interpretable Action Recognition framework (LaIAR). This framework leverages knowledge from language models to enhance both the recognition capabilities and the interpretability of video models. In essence, we reframe the challenge of understanding video model decisions as a task of aligning video and language models. Using the logical reasoning captured by the language model, we steer the training of the video model. This integrated approach not only improves the video model's adaptability to different domains but also boosts its overall performance. Extensive experiments on the Charades and CAD-120 datasets demonstrate the superior performance and interpretability of our proposed method. The code of LaIAR is available at https://github.com/NingWang2049/LaIAR.
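The "aligning video and language models" idea described above can be illustrated with a minimal training-loss sketch: treat the language model's action distribution as a soft target and distill it into the video model alongside the usual supervised objective. The function name, hyperparameters, and the KL-distillation formulation below are illustrative assumptions for exposition, not the authors' actual implementation (see the linked repository for that).

```python
# Minimal sketch (assumed, not the paper's code): train the video model on
# ground-truth labels while aligning its predictions with those of a
# language-model branch via temperature-scaled KL distillation.
import torch
import torch.nn.functional as F

def language_guided_loss(video_logits, language_logits, labels,
                         temperature=2.0, align_weight=0.5):
    """video_logits:    (B, C) action logits from the video model.
    language_logits: (B, C) action logits from the language-model branch
                     (treated as a fixed teacher here, hence detach()).
    labels:          (B,)  ground-truth action indices.
    """
    # Supervised term: the video model must still fit the action labels.
    ce = F.cross_entropy(video_logits, labels)

    # Alignment term: KL divergence from the language model's softened
    # distribution to the video model's, a standard distillation objective.
    t = temperature
    align = F.kl_div(
        F.log_softmax(video_logits / t, dim=-1),
        F.softmax(language_logits.detach() / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)  # rescale gradients to match the hard-label term

    return ce + align_weight * align
```

In this reading, the language branch supplies the interpretable reasoning signal, and `align_weight` controls how strongly that signal steers the video model during joint training.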

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Wang_2024_CVPR,
    author    = {Wang, Ning and Zhu, Guangming and Li, HS and Zhang, Liang and Shah, Syed Afaq Ali and Bennamoun, Mohammed},
    title     = {Language Model Guided Interpretable Video Action Reasoning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {18878-18887}
}