Modeling Aleatoric Uncertainty for Camouflaged Object Detection

Jiawei Liu, Jing Zhang, Nick Barnes; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022, pp. 1445-1454


Aleatoric uncertainty captures noise within the observations. For camouflaged object detection, the similar appearance of the camouflaged foreground and the background makes it difficult to obtain highly accurate annotations, especially around object boundaries. We argue that training directly with the noisy camouflage map may lead to a model with poor generalization ability. In this paper, we introduce an explicit aleatoric uncertainty estimation technique to represent predictive uncertainty due to noisy labeling. Specifically, we present a confidence-aware camouflaged object detection (COD) framework using dynamic supervision to produce both an accurate camouflage map and a reliable aleatoric uncertainty estimate. Different from existing techniques that produce deterministic predictions following the point estimation pipeline, our framework formalizes aleatoric uncertainty as a probability distribution over the model output given the input image. We claim that, once trained, our confidence estimation network can evaluate the pixel-wise accuracy of the prediction without relying on the ground truth camouflage map. Extensive results illustrate the superior performance of the proposed model in explaining the camouflage prediction. Our code is available at

Related Material

@InProceedings{Liu_2022_WACV,
    author    = {Liu, Jiawei and Zhang, Jing and Barnes, Nick},
    title     = {Modeling Aleatoric Uncertainty for Camouflaged Object Detection},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {1445-1454}
}