Always Skip Attention

Yiping Ji, Hemanth Saratchandran, Peyman Moghadam, Simon Lucey; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 23115-23123

Abstract


We highlight a curious empirical result within modern Vision Transformers (ViTs). Specifically, self-attention catastrophically fails to train unless it is used in conjunction with a skip connection. This is in contrast to other elements of a ViT, which continue to exhibit good (albeit suboptimal) performance when skip connections are removed. Further, we show that this critical dependence on skip connections is a relatively new phenomenon, with previous deep architectures (e.g., CNNs) exhibiting good performance in their absence. In this paper, we theoretically characterize the self-attention mechanism as fundamentally ill-conditioned and, therefore, uniquely dependent on skip connections for regularization. Additionally, we propose Token Graying (TG), a simple yet effective complement (to skip connections) that further improves the condition of input tokens. We validate our approach in both supervised and self-supervised training methods.
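The abstract's central claim, that self-attention without a skip connection yields ill-conditioned token representations, can be illustrated numerically. The sketch below is a minimal illustration rather than the authors' code: it assumes PyTorch's nn.MultiheadAttention with randomly initialized weights and arbitrary illustrative sizes, and simply compares the condition number of the output token matrix with and without the residual term.

# Minimal sketch (not the authors' code): compare the conditioning of token
# features after a self-attention block with and without a skip connection.
# Layer sizes and the random input are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
dim, heads, tokens = 64, 4, 16

attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
x = torch.randn(1, tokens, dim)  # a single sequence of random "patch" tokens

with torch.no_grad():
    out, _ = attn(x, x, x)        # self-attention output
    no_skip = out[0]              # tokens x dim matrix, no residual
    with_skip = (x + out)[0]      # same block wrapped in a skip connection

def cond(m: torch.Tensor) -> float:
    # Condition number = ratio of largest to smallest singular value.
    s = torch.linalg.svdvals(m)
    return (s.max() / s.min()).item()

print(f"condition number without skip: {cond(no_skip):.1f}")
print(f"condition number with skip:    {cond(with_skip):.1f}")

In this toy setting, the attention output tends to mix tokens toward a common direction, inflating the ratio of largest to smallest singular value, while adding the input back keeps the token matrix closer to well-conditioned. This is consistent with, though not a substitute for, the regularizing role the paper attributes to the skip connection.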

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Ji_2025_ICCV,
    author    = {Ji, Yiping and Saratchandran, Hemanth and Moghadam, Peyman and Lucey, Simon},
    title     = {Always Skip Attention},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {23115-23123}
}