Building Explainable AI Evaluation for Autonomous Perception

Chi Zhang, Biyao Shang, Ping Wei, Li Li, Yuehu Liu, Nanning Zheng; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 20-23


Developing robust visual intelligence is a long-standing challenge. From the perspective of artificial intelligence evaluation, discovering and explaining the potential shortcomings of the evaluated intelligent algorithms/systems is just as important as measuring their intelligence level. In this paper, we propose a possible solution to these challenges: Explainable Evaluation for visual intelligence. In contrast to most existing work on Explainable AI, we focus on the setting where the internal mechanisms of the AI algorithms are sophisticated, heterogeneous, or unreachable. In this case, the interpretability of test outputs is formulated as a semantic embedding of the correlation between factors of data variance and the test outputs. Dictionary learning is introduced to jointly estimate the semantic mapping and the semantic representations used for explanation. The optimal solution of the proposed method can be reached via an alternating optimization process. The application of "Explainable AI Evaluation" will strengthen the influence of objective assessment on visual intelligence.
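To make the alternating-optimization idea concrete, the following is a minimal sketch of dictionary learning with alternating updates, where a dictionary D plays the role of the semantic mapping and the codes X play the role of the semantic representations. This is a generic ridge-regularized formulation for illustration; the function name, the regularizer, and the update rules are assumptions, not the paper's exact objective.

```python
import numpy as np

def dictionary_learning(Y, n_atoms, n_iter=50, lam=1e-3, seed=0):
    """Alternately estimate a dictionary D (semantic mapping) and codes X
    (semantic representations) so that Y is approximately D @ X.

    Y: (n_features, n_samples) matrix of observed test outputs.
    This is an illustrative sketch, not the paper's exact algorithm.
    """
    rng = np.random.default_rng(seed)
    n_features, n_samples = Y.shape
    D = rng.standard_normal((n_features, n_atoms))
    D /= np.linalg.norm(D, axis=0)  # unit-norm atoms
    I = np.eye(n_atoms)
    for _ in range(n_iter):
        # Code step: ridge-regularized least squares, a simple stand-in
        # for a sparse-coding step.
        X = np.linalg.solve(D.T @ D + lam * I, D.T @ Y)
        # Dictionary step: least squares, then renormalize the atoms.
        D = Y @ X.T @ np.linalg.pinv(X @ X.T + lam * I)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    # Recompute codes once so the returned (D, X) pair is consistent.
    X = np.linalg.solve(D.T @ D + lam * I, D.T @ Y)
    return D, X
```

Each half-step solves a convex subproblem in closed form, so the joint objective is non-increasing across iterations, which is the standard argument for convergence of such alternating schemes.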

Related Material

@InProceedings{Zhang_2019_CVPR_Workshops,
  author    = {Zhang, Chi and Shang, Biyao and Wei, Ping and Li, Li and Liu, Yuehu and Zheng, Nanning},
  title     = {Building Explainable AI Evaluation for Autonomous Perception},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2019}
}