@InProceedings{Munir_2025_WACV,
  author    = {Munir, Mustafa and Rahman, Md Mostafijur and Marculescu, Radu},
  title     = {RapidNet: Multi-Level Dilated Convolution Based Mobile Backbone},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {8291-8301}
}
RapidNet: Multi-Level Dilated Convolution Based Mobile Backbone
Abstract
Vision transformers (ViTs) have dominated computer vision in recent years. However, ViTs are computationally expensive and not well suited for mobile devices; this led to the prevalence of convolutional neural network (CNN) and ViT-based hybrid models for mobile vision applications. Recently, Vision GNN (ViG) and CNN hybrid models have also been proposed for mobile vision tasks. However, all of these methods remain slower compared to pure CNN-based models. In this work, we propose Multi-Level Dilated Convolutions to devise a purely CNN-based mobile backbone. Using Multi-Level Dilated Convolutions allows for a larger theoretical receptive field than standard convolutions. Different levels of dilation also allow for interactions between the short-range and long-range features in an image. Experiments show that our proposed model outperforms state-of-the-art (SOTA) mobile CNN, ViT, ViG, and hybrid architectures in terms of accuracy and/or speed on image classification, object detection, instance segmentation, and semantic segmentation. Our fastest model, RapidNet-Ti, achieves 76.3% top-1 accuracy on ImageNet-1K with 0.9 ms inference latency on an iPhone 13 mini NPU, which is faster and more accurate than MobileNetV2x1.4 (74.7% top-1 with 1.0 ms latency). Our work shows that pure CNN architectures can beat SOTA hybrid and ViT models in terms of accuracy and speed when designed properly.
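The receptive-field claim above follows from standard convolution arithmetic: a dilated convolution with kernel size k and dilation d has an effective kernel of d(k-1)+1, so stacking stride-1 convolutions with increasing dilations grows the receptive field faster than stacking standard ones. A minimal sketch of this arithmetic follows; the specific dilation rates (1, 2, 3) are illustrative assumptions, not the paper's actual configuration.

```python
def receptive_field(kernel_size, dilations):
    """Theoretical receptive field of stacked stride-1 dilated convolutions.

    Each layer with dilation d adds d * (kernel_size - 1) to the
    receptive field (effective kernel of a dilated conv: d*(k-1) + 1).
    """
    rf = 1
    for d in dilations:
        rf += d * (kernel_size - 1)
    return rf

# Three standard 3x3 convs (all dilation 1) vs. three with multi-level
# dilations (rates here are an assumption for illustration only).
print(receptive_field(3, [1, 1, 1]))  # -> 7
print(receptive_field(3, [1, 2, 3]))  # -> 13
```

With the same parameter count and compute, the multi-level variant covers nearly twice the spatial extent, which is the intuition behind mixing dilation levels to couple short-range and long-range features.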