Uncertainty-Guided Transformer Reasoning for Camouflaged Object Detection

Fan Yang, Qiang Zhai, Xin Li, Rui Huang, Ao Luo, Hong Cheng, Deng-Ping Fan; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 4146-4155

Abstract


Spotting objects that are visually adapted to their surroundings is challenging for both humans and AI. Conventional generic/salient object detection techniques are suboptimal for this task because they tend to discover only easy, clearly delineated objects while overlooking hard-to-detect ones whose inherent uncertainty derives from indistinguishable textures. In this work, we contribute a novel approach that uses a probabilistic representational model in combination with transformers to explicitly reason under uncertainty, namely uncertainty-guided transformer reasoning (UGTR), for camouflaged object detection. The core idea is to first learn a conditional distribution over the backbone's output to obtain initial estimates and associated uncertainties, and then reason over these uncertain regions with an attention mechanism to produce the final predictions. Our approach combines the benefits of Bayesian learning and transformer-based reasoning, allowing the model to handle camouflaged object detection by leveraging both deterministic and probabilistic information. We empirically demonstrate that our proposed approach achieves higher accuracy than existing state-of-the-art models on the CHAMELEON, CAMO, and COD10K datasets. Code is available at https://github.com/fanyang587/UGTR.

Related Material


@InProceedings{Yang_2021_ICCV,
    author    = {Yang, Fan and Zhai, Qiang and Li, Xin and Huang, Rui and Luo, Ao and Cheng, Hong and Fan, Deng-Ping},
    title     = {Uncertainty-Guided Transformer Reasoning for Camouflaged Object Detection},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {4146-4155}
}