Enlightening Deep Neural Networks With Knowledge of Confounding Factors

Yu Zhong, Gil Ettinger; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 1077-1086

Abstract

Despite the popularity of deep neural networks, we still strive to better understand the underlying mechanisms that drive their success. Motivated by observations that neurons in trained deep networks predict variation-explaining factors only indirectly related to the training tasks, we recognize that a deep network learns representations more general than the task at hand: it disentangles the impacts of the multiple confounding factors governing the data in order to isolate the effects of the factors of interest. Consequently, we propose to augment the training of deep models with information on auxiliary explanatory data factors, boosting this disentanglement so that trained models generalize better and compute better feature representations. We adopt this principle to build a pose-aware DCNN and demonstrate that the auxiliary pose information improves classification accuracy. The approach is readily applicable to improving recognition and classification performance in a variety of deep-learning applications.
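
The proposed augmentation amounts to a multi-task setup: a shared trunk feeds both the primary classification head and an auxiliary head supervised with the confounding factor (here, pose). The following is a minimal PyTorch sketch of that idea, not the authors' code; the class name PoseAwareDCNN, the layer sizes, the discretization of pose into bins, and the loss weight lambda_pose are all illustrative assumptions rather than values from the paper.

# Minimal sketch of a pose-aware multi-task DCNN: a shared trunk with a
# primary class head and an auxiliary pose head. All sizes and the loss
# weight are assumptions for illustration, not the paper's settings.
import torch
import torch.nn as nn

class PoseAwareDCNN(nn.Module):
    def __init__(self, num_classes=10, num_pose_bins=8):
        super().__init__()
        # Shared convolutional trunk computes the common representation.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Primary task head: object class.
        self.class_head = nn.Linear(64, num_classes)
        # Auxiliary head: discretized pose (the confounding factor).
        self.pose_head = nn.Linear(64, num_pose_bins)

    def forward(self, x):
        feat = self.trunk(x)
        return self.class_head(feat), self.pose_head(feat)

def multitask_loss(class_logits, pose_logits, class_labels, pose_labels,
                   lambda_pose=0.5):
    # Joint objective: primary classification loss plus a weighted
    # auxiliary pose loss that encourages disentangled features.
    ce = nn.CrossEntropyLoss()
    return (ce(class_logits, class_labels)
            + lambda_pose * ce(pose_logits, pose_labels))

if __name__ == "__main__":
    model = PoseAwareDCNN()
    x = torch.randn(4, 3, 64, 64)          # dummy image batch
    y_class = torch.randint(0, 10, (4,))   # dummy class labels
    y_pose = torch.randint(0, 8, (4,))     # dummy pose-bin labels
    class_logits, pose_logits = model(x)
    loss = multitask_loss(class_logits, pose_logits, y_class, y_pose)
    loss.backward()
    print(f"joint loss: {loss.item():.4f}")

At inference time the pose head can simply be discarded; the point of the auxiliary supervision is to shape the shared features during training.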

Related Material

[bibtex]
@InProceedings{Zhong_2017_ICCV,
author = {Zhong, Yu and Ettinger, Gil},
title = {Enlightening Deep Neural Networks With Knowledge of Confounding Factors},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}