Revisiting Training-Free NAS Metrics: An Efficient Training-Based Method

Taojiannan Yang, Linjie Yang, Xiaojie Jin, Chen Chen; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 4751-4760

Abstract


Recent neural architecture search (NAS) works have proposed training-free metrics to rank networks, which largely reduces the search cost in NAS. In this paper, we revisit these training-free metrics and find that: (1) the number of parameters (#Param), which is the most straightforward training-free metric, is overlooked in previous works but is surprisingly effective; (2) recent training-free metrics largely rely on the #Param information to rank networks. Our experiments show that the performance of recent training-free metrics drops dramatically when the #Param information is not available. Motivated by these observations, we argue that metrics less correlated with #Param are desired to provide additional information for NAS. We propose a lightweight training-based metric which has a weak correlation with #Param while achieving better performance than training-free metrics at a lower search cost. Specifically, on the DARTS search space, our method completes searching directly on ImageNet in only 2.6 GPU hours and achieves a top-1/top-5 error rate of 24.1%/7.1%, which is competitive among state-of-the-art NAS methods.
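The paper's first observation — that #Param alone is an effective ranking metric — can be checked by measuring the rank correlation between parameter counts and final accuracies over a set of architectures. Below is a minimal sketch using Kendall's tau (a standard rank-correlation statistic used in NAS benchmark studies); the architecture data here is invented purely for illustration, not taken from the paper.

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall rank correlation: (concordant - discordant) / total pairs.

    Assumes no ties; values close to 1 mean xs ranks networks almost
    the same way as ys does.
    """
    concordant = discordant = 0
    for i, j in combinations(range(len(xs)), 2):
        sign = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    n_pairs = len(xs) * (len(xs) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical architectures: parameter counts (millions) and
# their final test accuracies (%). One pair is deliberately
# "out of order" (the 2.1M model beats the 2.8M one).
params = [1.2, 2.8, 0.9, 3.5, 2.1]
accs   = [91.0, 92.8, 90.1, 93.9, 93.2]

tau = kendall_tau(params, accs)
print(f"Kendall tau(#Param, accuracy) = {tau:.2f}")  # 0.80
```

A high tau indicates that simply ranking candidates by #Param already recovers most of the accuracy ordering — which is why the paper argues a useful new metric should be *weakly* correlated with #Param, so it adds information beyond model size.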

Related Material


@InProceedings{Yang_2023_WACV,
  author    = {Yang, Taojiannan and Yang, Linjie and Jin, Xiaojie and Chen, Chen},
  title     = {Revisiting Training-Free NAS Metrics: An Efficient Training-Based Method},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2023},
  pages     = {4751-4760}
}