Causal-SETR: A SEgmentation TRansformer Variant Based on Causal Intervention

Wei Li, Zhixin Li; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 756-772

Abstract


We present a novel SEgmentation TRansformer variant based on causal intervention, serving as an improved vision encoder for semantic segmentation. Many studies have shown that vision transformers (ViT) achieve competitive results on downstream tasks such as segmentation, which indicates that they learn feature representations well. In other words, ViT is good at observing the instances in an image. However, in the human visual system, recognizing the objects in a scene requires not only observing the objects themselves but also introducing prior knowledge to produce higher-confidence results. Inspired by this, we introduce a structural causal model (SCM) to model images, category labels, and context. Beyond observation, we propose a causal intervention method that removes the confounding bias of global context, and we plug it into the ViT encoder. Unlike other sequence-to-sequence prediction tasks, we use causal intervention instead of likelihood. In addition, the proxy training objective of the framework is to predict the contextual objects of a region. Finally, we combine this encoder with a segmentation decoder. Experiments show that the proposed method is flexible and effective.
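The abstract describes removing the confounding bias of global context via causal intervention. A common way to approximate the backdoor adjustment P(Y|do(X)) = Σ_c P(Y|X, c)P(c) in vision models is to keep a dictionary of class-wise context prototypes (the confounders) and combine them with each query feature, weighted by both similarity and the class prior. The sketch below illustrates that general idea only; the function name `backdoor_adjustment`, the prototype dictionary, and the additive fusion are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def backdoor_adjustment(x, prototypes, priors):
    """Approximate P(Y|do(X)) ~= sum_c P(Y|X, c) P(c).

    x          : (d,)   query feature from the ViT encoder
    prototypes : (C, d) confounder dictionary of context prototypes
                 (e.g. per-class mean features, an assumption here)
    priors     : (C,)   class priors P(c), e.g. training-set frequencies
    Returns an intervened feature of the same shape as x.
    """
    # Scaled dot-product attention of the query over each prototype,
    # a stand-in for P(Y | X, c).
    scores = prototypes @ x / np.sqrt(x.shape[0])
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    # Weight each prototype by its attention AND its prior P(c),
    # so the context contribution is marginalized over confounders.
    context = (attn * priors)[:, None] * prototypes
    # Fuse the prior-weighted context back into the query feature.
    return x + context.sum(axis=0)
```

Because the priors are fixed marginals rather than image-conditional weights, the context term no longer tracks spurious co-occurrences in any single image, which is the intuition behind cutting the confounding path.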

Related Material


[pdf]
[bibtex]
@InProceedings{Li_2022_ACCV,
    author    = {Li, Wei and Li, Zhixin},
    title     = {Causal-SETR: A SEgmentation TRansformer Variant Based on Causal Intervention},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {756-772}
}