Landmark Regularization: Ranking Guided Super-Net Training in Neural Architecture Search

Kaicheng Yu, Rene Ranftl, Mathieu Salzmann; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 13723-13732

Abstract


Weight sharing has become a de facto standard in neural architecture search because it enables the search to be done on commodity hardware. However, recent works have empirically shown a ranking disorder between the performance of stand-alone architectures and that of the corresponding shared-weight networks. This violates the main assumption of weight-sharing NAS algorithms, thus limiting their effectiveness. We tackle this issue by proposing a regularization term that aims to maximize the correlation between the performance rankings of the shared-weight network and that of the stand-alone architectures using a small set of landmark architectures. We incorporate our regularization term into three different NAS algorithms and show that it consistently improves performance across algorithms, search spaces, and tasks.
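The abstract describes a regularizer that aligns the super-net's performance ranking with that of a small set of landmark architectures whose stand-alone accuracies are known. A minimal sketch of one way such a ranking regularizer can be written, as a pairwise hinge loss over landmark pairs, is shown below; this is an illustration of the general idea, not the authors' exact formulation, and all inputs here are hypothetical:

```python
def landmark_ranking_loss(standalone_acc, supernet_acc, margin=0.0):
    """Pairwise hinge ranking loss over landmark architectures.

    standalone_acc: known accuracies of the landmark architectures
        trained in isolation (ground-truth ranking).
    supernet_acc: performance estimates of the same architectures when
        evaluated with shared super-net weights (hypothetical inputs).
    Returns the mean hinge penalty over all ordered pairs; it is zero
    when the super-net ranks every pair in the same order as the
    stand-alone accuracies (with the given margin).
    """
    loss, pairs = 0.0, 0
    n = len(standalone_acc)
    for i in range(n):
        for j in range(n):
            if standalone_acc[i] > standalone_acc[j]:
                # The super-net should also score architecture i above j;
                # penalize the pair when it does not (up to the margin).
                loss += max(0.0, margin - (supernet_acc[i] - supernet_acc[j]))
                pairs += 1
    return loss / max(pairs, 1)
```

In a weight-sharing NAS training loop, a term like this would be added (with some weight) to the usual super-net training loss, so that super-net updates are steered toward preserving the landmark ranking.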

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Yu_2021_CVPR,
    author    = {Yu, Kaicheng and Ranftl, Rene and Salzmann, Mathieu},
    title     = {Landmark Regularization: Ranking Guided Super-Net Training in Neural Architecture Search},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {13723-13732}
}