Scratching Visual Transformer's Back with Uniform Attention

Nam Hyeon-Woo, Kim Yu-Ji, Byeongho Heo, Dongyoon Han, Seong Joon Oh, Tae-Hyun Oh; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 5807-5818

Abstract


The favorable performance of Vision Transformers (ViTs) is often attributed to multi-head self-attention (MSA), which enables global interactions at each layer of a ViT model. Previous works attribute the effectiveness of MSA to its long-range dependency. In this work, we study the role of MSA along a different axis: density. Our preliminary analyses suggest that the spatial interactions of learned attention maps are closer to dense interactions than to sparse ones. This is a curious phenomenon because dense attention maps are harder for the model to learn due to softmax. We interpret this behavior, which runs against the bias of softmax, as a strong preference of ViT models for dense interactions. We thus manually insert dense uniform attention into each layer of the ViT models to supply the much-needed dense interactions. We call this method Context Broadcasting (CB). Our study demonstrates that CB takes over the role of dense attention and thereby reduces the degree of density in the original attention maps, letting MSA better comply with softmax. We also show that CB, at negligible cost (one line in your model code and no additional parameters), increases both the capacity and generalizability of ViT models.
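
The one-line nature of CB admits a simple reading: a uniform attention map assigns weight 1/N to every token, so its output is just the mean over the N tokens, and adding that mean back to each token broadcasts global context. The PyTorch sketch below illustrates this interpretation; the exact placement inside a Transformer block and any scaling or normalization are assumptions here, not details taken from the paper.

    import torch
    import torch.nn as nn

    class ContextBroadcasting(nn.Module):
        """Uniform-attention sketch: add the token-wise mean back to every token.

        A minimal interpretation of the CB idea described in the abstract;
        placement and scaling are assumptions, not the authors' exact recipe.
        """

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, num_tokens, dim). The mean over tokens equals the output
            # of an attention layer whose weights are uniformly 1/N, i.e. dense
            # uniform attention with no learnable parameters.
            return x + x.mean(dim=1, keepdim=True)

    if __name__ == "__main__":
        tokens = torch.randn(2, 197, 768)  # hypothetical ViT-B/16 token shape
        out = ContextBroadcasting()(tokens)
        print(out.shape)  # torch.Size([2, 197, 768])

Dropping such a module into an existing ViT block (for example, after the MLP sub-block) would match the abstract's claim of adding no parameters, though the authors' actual insertion point is best confirmed against their released code.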

Related Material


[bibtex]
@InProceedings{Hyeon-Woo_2023_ICCV,
  author    = {Hyeon-Woo, Nam and Yu-Ji, Kim and Heo, Byeongho and Han, Dongyoon and Oh, Seong Joon and Oh, Tae-Hyun},
  title     = {Scratching Visual Transformer's Back with Uniform Attention},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {5807-5818}
}