ActNAS: Generating Efficient YOLO Models using Activation NAS
Sudhakar Sah, Ravish Kumar, Darshan C. Ganji, Ehsan Saboori
Abstract
Activation functions introduce non-linearity into neural networks, allowing them to learn complex patterns. Different activation functions affect speed and accuracy differently: ReLU, for instance, is fast but often less accurate, while SiLU offers higher accuracy at the expense of speed. Traditionally, a single activation function is used throughout a model. In this work, we conduct a comprehensive study of mixed activation functions in YOLO-based models, examining their impact on latency, memory usage, and accuracy across CPU, NPU, and GPU edge devices. We propose Activation NAS (ActNAS), a hardware-aware neural architecture search (HA-NAS) method that selects an activation function for each layer to suit specific target hardware. ActNAS-generated models maintain mean Average Precision (mAP) comparable to their baselines while achieving up to 1.67x faster inference and/or 64.15% lower memory usage. We further show that hardware-aware models learn to exploit architectural and compiler-level optimizations, yielding highly efficient performance tailored to each hardware platform.
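To illustrate the per-layer activation idea described in the abstract, the sketch below builds a tiny convolutional backbone in which each block can use a different activation, then exhaustively scores candidate activation assignments under a toy hardware-aware objective. This is a minimal, hypothetical illustration, not the ActNAS implementation: the activation set, the per-op cost table (ACT_COST), the proxy accuracy score, and the latency budget are all placeholder assumptions; a real hardware-aware search would use measured latency and memory on the target CPU/NPU/GPU and evaluate mAP.

```python
# Hypothetical per-layer activation search sketch (not the authors' code).
import itertools
import torch
import torch.nn as nn

ACTIVATIONS = {"relu": nn.ReLU, "silu": nn.SiLU, "hardswish": nn.Hardswish}

# Placeholder relative costs; a real search would profile each activation
# on the target hardware instead of using fixed constants.
ACT_COST = {"relu": 1.0, "silu": 1.8, "hardswish": 1.3}


class ConvBlock(nn.Module):
    """Conv + BN + configurable activation: the unit whose activation is searched."""
    def __init__(self, c_in, c_out, act_name):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = ACTIVATIONS[act_name]()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))


def build_backbone(act_names, channels=(3, 16, 32, 64)):
    """Build a tiny backbone with one activation choice per block."""
    blocks = [ConvBlock(channels[i], channels[i + 1], act_names[i])
              for i in range(len(act_names))]
    return nn.Sequential(*blocks)


def proxy_accuracy(model, x):
    """Stand-in for a real accuracy/mAP proxy (e.g. a zero-cost NAS score)."""
    with torch.no_grad():
        return model(x).abs().mean().item()


def hardware_cost(act_names):
    """Toy latency estimate: sum of per-layer activation costs."""
    return sum(ACT_COST[a] for a in act_names)


def search(num_blocks=3, latency_budget=4.5):
    """Enumerate per-layer activation assignments under a latency budget."""
    x = torch.randn(1, 3, 64, 64)
    best = None
    for combo in itertools.product(ACTIVATIONS, repeat=num_blocks):
        cost = hardware_cost(combo)
        if cost > latency_budget:
            continue
        score = proxy_accuracy(build_backbone(combo), x)
        if best is None or score > best[0]:
            best = (score, cost, combo)
    return best


if __name__ == "__main__":
    score, cost, combo = search()
    print(f"best activations={combo} proxy_score={score:.4f} est_cost={cost:.2f}")
```

Exhaustive enumeration is only feasible here because the toy space has 3^3 assignments; searching per-layer activations over a full YOLO model would require a more scalable strategy and hardware-in-the-loop measurements.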
Related Material

[bibtex]
@InProceedings{Sah_2025_CVPR,
    author    = {Sah, Sudhakar and Kumar, Ravish and Ganji, Darshan C. and Saboori, Ehsan},
    title     = {ActNAS : Generating Efficient YOLO Models using Activation NAS},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {1845-1853}
}