LowFormer: Hardware Efficient Design for Convolutional Transformer Backbones

Moritz Nottebaum, Matteo Dunnhofer, Christian Micheloni; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 7008-7018

Abstract


Research in efficient vision backbones is evolving into models that are a mixture of convolutions and transformer blocks. A smart combination of both, architecture-wise and component-wise, is mandatory to excel in the speed-accuracy trade-off. Most publications focus on maximizing accuracy and use MACs (multiply-accumulate operations) as an efficiency metric. MACs, however, often fail to capture how fast a model actually is, due to factors like memory access cost and degree of parallelism. We therefore analyzed common modules and architectural design choices for backbones not in terms of MACs, but in actual throughput and latency, as the combination of these two is a better representation of the efficiency of models in real applications. We applied the conclusions drawn from that analysis to create a recipe for increasing hardware efficiency in macro design. Additionally, we introduce a simple slimmed-down version of Multi-Head Self-Attention that aligns with our analysis. Combining macro and micro design, we create a new family of hardware-efficient backbone networks called LowFormer. LowFormer achieves a remarkable speedup in throughput and latency while achieving similar or better accuracy than current state-of-the-art efficient backbones. To prove the generalizability of our hardware-efficient design, we evaluate our method on GPU, mobile GPU, and ARM CPU. We further show that the downstream tasks of object detection and semantic segmentation benefit from our hardware-efficient architecture. Code and models are available at https://github.com/altair199797/LowFormer.
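The central methodological point of the abstract, measuring throughput and latency directly rather than counting MACs, can be illustrated with a short benchmark. The following is a minimal sketch in PyTorch, not taken from the paper's codebase: the function name measure and all default parameters are our own illustrative choices, and it assumes a CUDA device. It times single-image latency and large-batch throughput with warm-up iterations and explicit synchronization, which is what makes such measurements reflect memory access cost and parallelism where MAC counts do not.

    import time
    import torch

    @torch.no_grad()
    def measure(model: torch.nn.Module, batch_size: int = 32,
                resolution: int = 224, iters: int = 100, warmup: int = 20):
        """Return (latency in ms at batch size 1, throughput in images/s)."""
        device = torch.device("cuda")
        model = model.eval().to(device)

        # Latency: single image, timed with explicit GPU synchronization.
        x1 = torch.randn(1, 3, resolution, resolution, device=device)
        for _ in range(warmup):
            model(x1)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(x1)
        torch.cuda.synchronize()
        latency_ms = (time.perf_counter() - start) / iters * 1e3

        # Throughput: a large batch exposes the model's degree of parallelism.
        xb = torch.randn(batch_size, 3, resolution, resolution, device=device)
        for _ in range(warmup):
            model(xb)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(xb)
        torch.cuda.synchronize()
        throughput = batch_size * iters / (time.perf_counter() - start)

        return latency_ms, throughput

Two models with identical MAC counts can differ substantially under this kind of measurement, which is the motivation for ranking design choices by measured speed instead.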

Related Material


@InProceedings{Nottebaum_2025_WACV,
    author    = {Nottebaum, Moritz and Dunnhofer, Matteo and Micheloni, Christian},
    title     = {LowFormer: Hardware Efficient Design for Convolutional Transformer Backbones},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {7008-7018}
}