
[bibtex]
@InProceedings{Shukla_2022_WACV,
  author    = {Shukla, Megh},
  title     = {Bayesian Uncertainty and Expected Gradient Length - Regression: Two Sides of the Same Coin?},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2022},
  pages     = {2367-2376}
}
Bayesian Uncertainty and Expected Gradient Length - Regression: Two Sides of the Same Coin?
Abstract
Active learning algorithms select a subset of data for annotation to maximize model performance under a fixed budget. One such algorithm is Expected Gradient Length, which, as the name suggests, uses the approximate gradient induced per example in the sampling process. While Expected Gradient Length has been successfully used for both classification and regression, the regression formulation remains intuitively driven. Hence, our theoretical contribution is a derivation of this formulation, thereby supporting existing experimental evidence [4, 5]. Subsequently, we show that expected gradient length in regression is equivalent to Bayesian uncertainty [22]. When certain assumptions are infeasible, our algorithmic contribution (EGL++) approximates the effect of ensembles with a single deterministic network. Instead of computing multiple possible inferences per input, we leverage previously annotated samples to quantify the probability of previous labels being the true label. This approach allows us to extend expected gradient length to a new task: human pose estimation. We perform experimental validation on two human pose datasets (MPII and LSP/LSPET), highlighting the interpretability and competitiveness of EGL++ against different active learning algorithms for human pose estimation.
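To make the sampling principle behind Expected Gradient Length concrete, here is a minimal, hedged sketch for a 1-D linear regression model y ≈ w·x. It is an illustration of the general EGL idea (score each unlabeled input by the expected norm of the gradient it would induce, averaged over hypothesized labels, then pick the top-scoring inputs), not the paper's exact formulation; the function names, the candidate-label distribution, and the toy model are all assumptions for illustration.

```python
def grad_length(w, x, y):
    # Squared-error loss L = (w*x - y)^2, so dL/dw = 2*(w*x - y)*x.
    # The "gradient length" here is simply its absolute value.
    return abs(2 * (w * x - y) * x)

def egl_score(w, x, candidate_labels, probs):
    # Expected gradient length: average the gradient norm over
    # hypothesized labels, weighted by their estimated probabilities.
    return sum(p * grad_length(w, x, y)
               for y, p in zip(candidate_labels, probs))

def select_batch(w, pool, candidates_fn, k):
    # Rank unlabeled inputs by EGL score and return the top-k
    # for annotation. candidates_fn maps an input x to a pair
    # (candidate_labels, probs) -- a hypothetical helper standing
    # in for the model's predictive distribution.
    ranked = sorted(pool,
                    key=lambda x: egl_score(w, x, *candidates_fn(x)),
                    reverse=True)
    return ranked[:k]
```

With w = 1 and candidate labels {1, 3} each at probability 0.5 for the input x = 2, the score is 0.5·|2·(2-1)·2| + 0.5·|2·(2-3)·2| = 4, and inputs inducing larger expected gradients are queried first.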