RepViT: Revisiting Mobile CNN From ViT Perspective

Ao Wang, Hui Chen, Zijia Lin, Jungong Han, Guiguang Ding; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 15909-15920

Abstract


Recently, lightweight Vision Transformers (ViTs) have demonstrated superior performance and lower latency compared with lightweight Convolutional Neural Networks (CNNs) on resource-constrained mobile devices. Researchers have discovered many structural connections between lightweight ViTs and lightweight CNNs. However, the notable architectural disparities between them in block structure and in macro and micro design have not been adequately examined. In this study, we revisit the efficient design of lightweight CNNs from the ViT perspective and emphasize their promising prospects for mobile devices. Specifically, we incrementally enhance the mobile-friendliness of a standard lightweight CNN, i.e., MobileNetV3, by integrating the efficient architectural designs of lightweight ViTs. This ends up with a new family of pure lightweight CNNs, namely RepViT. Extensive experiments show that RepViT outperforms existing state-of-the-art lightweight ViTs and exhibits favorable latency on various vision tasks. Notably, on ImageNet, RepViT achieves over 80% top-1 accuracy with 1.0 ms latency on an iPhone 12, which, to the best of our knowledge, is the first time for a lightweight model. Besides, when RepViT meets SAM, our RepViT-SAM can achieve nearly 10x faster inference than the advanced MobileSAM. Codes and models are available at https://github.com/THU-MIG/RepViT.
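The "Rep" in RepViT refers to structural re-parameterization: a training-time block with a 3x3 (depthwise) convolution plus an identity shortcut collapses into a single 3x3 convolution at inference, since both branches are linear in the input. A minimal single-channel, pure-Python sketch of this merging (toy shapes and function names are illustrative, not the authors' implementation):

```python
# RepVGG-style structural re-parameterization, as used in RepViT's blocks:
# fold a parallel identity shortcut into the 3x3 kernel's centre tap.

def conv3x3(x, k):
    """3x3 convolution with zero padding 1 on a 2D grid (list of lists)."""
    h, w = len(x), len(x[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        s += x[ii][jj] * k[di + 1][dj + 1]
            out[i][j] = s
    return out

def train_branch(x, k):
    """Training-time block: 3x3 conv output plus the identity shortcut."""
    y = conv3x3(x, k)
    return [[y[i][j] + x[i][j] for j in range(len(x[0]))]
            for i in range(len(x))]

def merge_identity(k):
    """Inference-time kernel: the identity adds 1.0 at the centre tap."""
    m = [row[:] for row in k]
    m[1][1] += 1.0
    return m

# Both forms compute the same output (up to float rounding).
x = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
k = [[0.1, 0.2, 0.1], [0.0, 0.5, 0.0], [0.1, 0.2, 0.1]]
ya = train_branch(x, k)
yb = conv3x3(x, merge_identity(x and k))
assert all(abs(ya[i][j] - yb[i][j]) < 1e-9 for i in range(3) for j in range(3))
```

At inference the multi-branch block thus costs exactly one convolution, which is what makes the design mobile-friendly despite its richer training-time topology.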

Related Material


@InProceedings{Wang_2024_CVPR,
    author    = {Wang, Ao and Chen, Hui and Lin, Zijia and Han, Jungong and Ding, Guiguang},
    title     = {RepViT: Revisiting Mobile CNN From ViT Perspective},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {15909-15920}
}