DeVLBert: Out-of-Distribution Visio-Linguistic Pretraining With Causality

Shengyu Zhang, Tan Jiang, Tan Wang, Kun Kuang, Zhou Zhao, Jianke Zhu, Jin Yu, Hongxia Yang, Fei Wu
Abstract
In this paper, we investigate out-of-domain visio-linguistic pretraining, where the pretraining data distribution differs from that of the downstream data on which the pretrained model will be fine-tuned. Existing methods for this problem are purely likelihood-based, which leads to spurious correlations and hurts generalization when the model is transferred to out-of-domain downstream tasks. By spurious correlation, we mean that the conditional probability of one token (object or word) given another can be high (due to dataset biases) without a robust (causal) relationship between them. To mitigate such dataset biases, we propose a Deconfounded Visio-Linguistic Bert framework, abbreviated as DeVLBert, which performs intervention-based learning. We borrow the idea of backdoor adjustment from the field of causality and propose several neural-network-based architectures for Bert-style out-of-domain pretraining. Quantitative results on three downstream tasks, Image Retrieval (IR), Zero-shot IR, and Visual Question Answering, show that DeVLBert improves generalization ability.
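For context, the backdoor adjustment referred to in the abstract is the standard identity from causal inference (the notation below is the textbook form with a generic confounder variable z, not notation taken from the paper itself): rather than estimating the likelihood P(y | x), which absorbs dataset bias through confounders, one stratifies over the confounder and averages:

    P(y \mid do(x)) = \sum_{z} P(y \mid x, z) \, P(z)

The neural-network-based architectures proposed in the paper approximate this intervention during Bert-style pretraining, which is how DeVLBert aims to mitigate the dataset biases described above.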
BibTeX

@InProceedings{Zhang_2021_CVPR,
    author    = {Zhang, Shengyu and Jiang, Tan and Wang, Tan and Kuang, Kun and Zhao, Zhou and Zhu, Jianke and Yu, Jin and Yang, Hongxia and Wu, Fei},
    title     = {DeVLBert: Out-of-Distribution Visio-Linguistic Pretraining With Causality},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {1744-1747}
}