Using Mixture of Expert Models to Gain Insights Into Semantic Segmentation

Svetlana Pavlitskaya, Christian Hubschneider, Michael Weber, Ruby Moritz, Fabian Hüger, Peter Schlicht, Marius Zöllner; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 342-343

Abstract


Not only correct scene understanding, but also the ability to understand the decision-making process of neural networks is essential for safe autonomous driving. Current work mainly focuses on uncertainty measures, often based on Monte Carlo dropout, to gain at least some insight into a model's confidence. We investigate a mixture-of-experts architecture to achieve additional interpretability while retaining comparable result quality. Since both the overall model output and the individual expert outputs remain accessible, the agreement or disagreement between the experts can be used to gain insights into the decision process. Expert networks are trained by splitting the input data into semantic subsets, e.g. corresponding to different driving scenarios, so that each network becomes an expert in its domain. An additional gating network, trained on the same input data, is then used to weight the outputs of the individual experts. We evaluate this mixture-of-experts setup on the A2D2 dataset and achieve results similar to a baseline FRRN network trained on all available data, while gaining additional information.
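To make the expert/gating combination concrete, the following is a minimal PyTorch sketch of such a fusion step. It assumes per-pixel softmax gating over the expert logits and uses simple 1x1-convolution stand-ins for the expert and gating backbones; the class name MixtureOfExpertsSegmentation and these placeholder networks are illustrative only and do not correspond to the FRRN-based experts or the training procedure described in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfExpertsSegmentation(nn.Module):
    """Fuses per-pixel class logits from several expert segmentation
    networks using softmax weights produced by a gating network.
    The individual expert outputs are returned alongside the fused
    result so that expert agreement/disagreement can be inspected."""

    def __init__(self, experts, gating_net):
        super().__init__()
        self.experts = nn.ModuleList(experts)  # each maps image -> (B, C, H, W) logits
        self.gating_net = gating_net           # maps image -> (B, num_experts, H, W) scores

    def forward(self, x):
        # Per-expert class logits, stacked along an expert axis: (B, E, C, H, W)
        expert_logits = torch.stack([expert(x) for expert in self.experts], dim=1)
        # Per-pixel gating weights over experts: (B, E, 1, H, W)
        gate_weights = F.softmax(self.gating_net(x), dim=1).unsqueeze(2)
        # Gating-weighted combination of the expert logits: (B, C, H, W)
        fused_logits = (gate_weights * expert_logits).sum(dim=1)
        return fused_logits, expert_logits, gate_weights

if __name__ == "__main__":
    # Toy example: three experts, 19 classes, dummy 1x1-conv backbones.
    num_classes, num_experts = 19, 3
    experts = [nn.Conv2d(3, num_classes, kernel_size=1) for _ in range(num_experts)]
    gating = nn.Conv2d(3, num_experts, kernel_size=1)
    moe = MixtureOfExpertsSegmentation(experts, gating)

    fused, per_expert, weights = moe(torch.randn(1, 3, 64, 64))
    # One possible disagreement proxy: per-pixel variance of the experts'
    # class probabilities, summed over classes.
    disagreement = per_expert.softmax(dim=2).var(dim=1).sum(dim=1)

In this sketch the fused logits drive the final segmentation, while per_expert and the gating weights remain available as the additional interpretability signal the abstract refers to.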

Related Material


[pdf]
[bibtex]
@InProceedings{Pavlitskaya_2020_CVPR_Workshops,
author = {Pavlitskaya, Svetlana and Hubschneider, Christian and Weber, Michael and Moritz, Ruby and H{\"u}ger, Fabian and Schlicht, Peter and Z{\"o}llner, Marius},
title = {Using Mixture of Expert Models to Gain Insights Into Semantic Segmentation},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}