Efficient Priors for Scalable Variational Inference in Bayesian Deep Neural Networks

Ranganath Krishnan, Mahesh Subedar, Omesh Tickoo; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


Stochastic variational inference for Bayesian deep neural networks (DNNs) requires specifying prior and approximate posterior distributions over the neural network weights. Specifying meaningful weight priors is a challenging problem, particularly when scaling variational inference to deeper architectures with high-dimensional weight spaces. Based on the empirical Bayes approach, we propose the Bayesian MOdel Priors Extracted from Deterministic DNN (MOPED) method, which chooses meaningful prior distributions over the weight space using deterministic weights derived from a pretrained DNN of equivalent architecture. We empirically evaluate the proposed approach on real-world applications, including image classification, video activity recognition, and audio classification tasks, with neural network architectures of varying complexity. The proposed method enables scalable variational inference with faster training convergence and provides reliable uncertainty quantification.
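
To make the idea concrete, below is a minimal PyTorch sketch of initializing priors from a pretrained deterministic network, for a single mean-field Gaussian linear layer. The MeanFieldLinear class, the N(w_det, 1) prior, and the delta scaling heuristic for the initial posterior sigma are illustrative assumptions for this sketch, not the authors' reference implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior q(w) = N(mu, sigma^2)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(out_features, in_features))
        # sigma = softplus(rho) keeps the posterior scale positive.
        self.rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        # Gaussian prior N(prior_mu, prior_sigma^2); filled in by moped_init below.
        self.register_buffer("prior_mu", torch.zeros(out_features, in_features))
        self.register_buffer("prior_sigma", torch.ones(out_features, in_features))

    def forward(self, x):
        sigma = F.softplus(self.rho)
        weight = self.mu + sigma * torch.randn_like(sigma)  # reparameterization trick
        return F.linear(x, weight)

    def kl(self):
        # Closed-form KL(q(w) || p(w)) between factorized Gaussians.
        sigma = F.softplus(self.rho)
        return (torch.log(self.prior_sigma / sigma)
                + (sigma ** 2 + (self.mu - self.prior_mu) ** 2)
                / (2.0 * self.prior_sigma ** 2) - 0.5).sum()

def moped_init(layer, pretrained_weight, delta=0.1):
    # Empirical Bayes step: center the prior and the initial posterior on the
    # deterministic weights; `delta` (an assumed heuristic) scales the initial
    # posterior sigma relative to each weight's magnitude.
    with torch.no_grad():
        layer.prior_mu.copy_(pretrained_weight)
        layer.mu.copy_(pretrained_weight)
        sigma0 = delta * pretrained_weight.abs().clamp_min(1e-6)
        layer.rho.copy_(torch.log(torch.expm1(sigma0)))  # inverse of softplus

# Usage: transfer weights from an equivalent pretrained deterministic layer,
# then train by minimizing negative log-likelihood + bnn_layer.kl().
det_layer = nn.Linear(16, 4, bias=False)   # stands in for a pretrained DNN layer
bnn_layer = MeanFieldLinear(16, 4)
moped_init(bnn_layer, det_layer.weight.data)
kl_term = bnn_layer.kl()

Because the variational posterior starts in a region of weight space already known to fit the data, training typically converges faster than from a random initialization, which is the scalability benefit the abstract claims.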

Related Material


[bibtex]
@InProceedings{Krishnan_2019_ICCV,
author = {Krishnan, Ranganath and Subedar, Mahesh and Tickoo, Omesh},
title = {Efficient Priors for Scalable Variational Inference in Bayesian Deep Neural Networks},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}