RaftMLP: How Much Can Be Done Without Attention and with Less Spatial Locality?

Yuki Tatsunami, Masato Taki; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 3172-3188

Abstract


For the past ten years, CNNs have reigned supreme in the world of computer vision, but recently, the Transformer has been on the rise. However, the quadratic computational cost of self-attention has become a serious problem in practical applications. In this context, there has been much research on architectures that use neither CNNs nor self-attention. In particular, MLP-Mixer is a simple architecture designed using MLPs that achieves accuracy comparable to the Vision Transformer. However, the only inductive bias in this architecture is the embedding of tokens. This leaves open the possibility of incorporating a non-convolutional (or non-local) inductive bias into the architecture, so we use two simple ideas to incorporate inductive bias into MLP-Mixer while taking advantage of its ability to capture global correlations. One is to divide the token-mixing block vertically and horizontally. The other is to make spatial correlations denser among some channels of token-mixing. With this approach, we improve the accuracy of MLP-Mixer while reducing its parameters and computational complexity. Our small model, RaftMLP-S, is comparable to state-of-the-art global MLP-based models in parameters and efficiency per computation. Our source code is available at https://github.com/okojoalg/raft-mlp.
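The first idea above, splitting token mixing into a vertical and a horizontal pass, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (which is in the linked repository); the grid sizes, random weights, and two-layer MLP here are illustrative assumptions. The point is the parameter count: full token mixing over an H×W grid needs O((HW)²) weights, while separable vertical-then-horizontal mixing needs only O(H²) + O(W²).

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 4, 4, 8  # illustrative token-grid height/width and channel dim


def mlp(x, w1, w2):
    # Two-layer MLP with a tanh-based GELU approximation,
    # applied along the last axis of x.
    h = x @ w1
    h = 0.5 * h * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (h + 0.044715 * h**3)))
    return h @ w2


tokens = rng.standard_normal((H, W, C))

# Vertical token mixing: move H to the last axis and mix along it.
wv1 = rng.standard_normal((H, H)) / np.sqrt(H)
wv2 = rng.standard_normal((H, H)) / np.sqrt(H)
x = np.transpose(tokens, (1, 2, 0))  # (W, C, H)
x = mlp(x, wv1, wv2)
x = np.transpose(x, (2, 0, 1))       # back to (H, W, C)

# Horizontal token mixing: move W to the last axis and mix along it.
wh1 = rng.standard_normal((W, W)) / np.sqrt(W)
wh2 = rng.standard_normal((W, W)) / np.sqrt(W)
y = np.transpose(x, (0, 2, 1))       # (H, C, W)
y = mlp(y, wh1, wh2)
y = np.transpose(y, (0, 2, 1))       # back to (H, W, C)

# Separable mixing uses H*H + W*W weights per layer instead of (H*W)^2.
print(y.shape)
```

In a real block these mixings would be wrapped with layer normalization and residual connections, as in MLP-Mixer; this sketch only shows the axis-wise factorization of the token-mixing step.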

Related Material


[pdf] [supp] [arXiv] [code]
[bibtex]
@InProceedings{Tatsunami_2022_ACCV, author = {Tatsunami, Yuki and Taki, Masato}, title = {RaftMLP: How Much Can Be Done Without Attention and with Less Spatial Locality?}, booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)}, month = {December}, year = {2022}, pages = {3172-3188} }