FairNAS: Rethinking Evaluation Fairness of Weight Sharing Neural Architecture Search

Xiangxiang Chu, Bo Zhang, Ruijun Xu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 12239-12248

Abstract

One of the most critical problems in weight-sharing neural architecture search is the evaluation of candidate models within a predefined search space. In practice, a one-shot supernet is trained to serve as an evaluator. A faithful ranking leads to more accurate search results, yet current methods are prone to misjudgments. In this paper, we show that their biased evaluation stems from inherent unfairness in supernet training. In view of this, we propose two levels of constraints: expectation fairness and strict fairness. In particular, strict fairness ensures equal optimization opportunities for all choice blocks throughout training, so that no block's capacity is overestimated or underestimated. We demonstrate that this is crucial for improving the confidence of model rankings. Combining a one-shot supernet trained under the proposed fairness constraints with a multi-objective evolutionary search algorithm, we obtain various state-of-the-art models, e.g., FairNAS-A attains 77.5% top-1 validation accuracy on ImageNet.
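The strict-fairness constraint described above can be sketched as a sampling rule: in every training step, one single-path model is drawn per choice block by permuting the block indices independently in each layer, so that every block in every layer receives exactly one gradient update per step. The following is a minimal illustrative sketch; the function name and structure are our own assumptions, not the authors' reference implementation.

```python
import random

def strict_fairness_paths(num_layers, num_choices, rng=random):
    """Sample `num_choices` single-path models for one training step.

    Each layer draws a fresh random permutation of its choice-block
    indices, so every choice block in every layer is activated exactly
    once per step -- the strict fairness constraint. (Illustrative
    sketch only; not the authors' reference code.)
    """
    # One independent permutation of block indices per layer.
    perms = [rng.sample(range(num_choices), num_choices)
             for _ in range(num_layers)]
    # The k-th sampled model takes the k-th entry of each layer's permutation.
    return [[perms[layer][k] for layer in range(num_layers)]
            for k in range(num_choices)]

# Example: a 4-layer supernet with 3 choice blocks per layer.
paths = strict_fairness_paths(num_layers=4, num_choices=3)
for layer in range(4):
    # Across the 3 sampled paths, each block index appears exactly once per layer.
    assert sorted(p[layer] for p in paths) == [0, 1, 2]
```

In a training loop, the gradients from the sampled paths would be accumulated before a single parameter update, which is what gives every choice block an equal expected and realized number of updates.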

Related Material

@InProceedings{Chu_2021_ICCV,
  author    = {Chu, Xiangxiang and Zhang, Bo and Xu, Ruijun},
  title     = {FairNAS: Rethinking Evaluation Fairness of Weight Sharing Neural Architecture Search},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {12239-12248}
}