Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention
Jinkyu Kim, John Canny; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2942-2950
Abstract
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable: they should provide easy-to-interpret rationales for their behavior, so that passengers, insurance companies, law enforcement, developers, and others can understand what triggered a particular behavior. Here we explore the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). Our approach is two-stage. In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. We demonstrate the effectiveness of our model on three datasets totaling 16 hours of driving. We first show that training with attention does not degrade the performance of the end-to-end network. Then we show that the network causally cues on a variety of features that are used by humans while driving.
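The abstract's two-stage pipeline can be sketched roughly as follows. The sketch below assumes a PyTorch-style implementation; the layer sizes, thresholds, and the causal_filter helper are illustrative assumptions for exposition, not the authors' released code. Stage 1 is a soft spatial attention layer over convolutional features that regresses the steering angle; stage 2 occludes highly attended regions one at a time and keeps only those whose removal changes the prediction.

import torch
import torch.nn as nn

class AttentionSteering(nn.Module):
    """Stage 1 (sketch): soft spatial attention over CNN features, regressing steering angle."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                    # stand-in convolutional encoder
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 5, stride=2), nn.ReLU(),
        )
        self.att = nn.Linear(feat_dim, 1)            # per-location attention logit
        self.head = nn.Linear(feat_dim, 1)           # steering-angle regressor

    def forward(self, img):
        f = self.cnn(img)                            # (B, C, H, W)
        b, c, h, w = f.shape
        f = f.flatten(2).transpose(1, 2)             # (B, H*W, C)
        alpha = torch.softmax(self.att(f), dim=1)    # attention weights over locations
        ctx = (alpha * f).sum(dim=1)                 # attention-weighted context vector
        return self.head(ctx), alpha.view(b, h, w)

def causal_filter(model, img, alpha, att_thresh=0.5, effect_thresh=1e-3):
    """Stage 2 (sketch): occlude each highly attended cell in the input and keep only
    cells whose removal noticeably changes the predicted steering angle."""
    with torch.no_grad():
        base, _ = model(img)
        _, _, ih, iw = img.shape
        h, w = alpha.shape[1:]
        causal = torch.zeros_like(alpha)
        hot = alpha[0] > att_thresh * alpha[0].max() # candidate (potentially influential) cells
        for i, j in hot.nonzero():
            masked = img.clone()
            y0, y1 = i * ih // h, (i + 1) * ih // h
            x0, x1 = j * iw // w, (j + 1) * iw // w
            masked[:, :, y0:y1, x0:x1] = 0           # occlude one attended region
            pred, _ = model(masked)
            if (pred - base).abs().item() > effect_thresh:
                causal[0, i, j] = alpha[0, i, j]     # keep only causally influential cells
    return causal

# Example usage on a dummy frame (shapes are arbitrary):
model = AttentionSteering()
frame = torch.randn(1, 3, 90, 160)
steer, attn = model(frame)
causal_map = causal_filter(model, frame, attn)

The filtering step is what separates spurious attention (regions the model looks at but does not rely on) from true causal influences, yielding the more succinct explanations described in the abstract.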
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Kim_2017_ICCV,
author = {Kim, Jinkyu and Canny, John},
title = {Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}