Building Optimal Neural Architectures using Interpretable Knowledge
Abstract
Neural Architecture Search is a costly practice. Because a search space can span a vast number of design choices, while each architecture evaluation incurs nontrivial overhead, it is hard for an algorithm to sufficiently explore candidate networks. In this paper, we propose AutoBuild, a scheme that learns to align the latent embeddings of operations and architecture modules with the ground-truth performance of the architectures they appear in. By doing so, AutoBuild can assign interpretable importance scores to architecture modules, such as individual operation features and larger macro operation sequences, so that high-performance neural networks can be constructed without any need for search. Through experiments on state-of-the-art image classification, segmentation, and Stable Diffusion models, we show that by mining a relatively small set of evaluated architectures, AutoBuild can learn to build high-quality architectures directly or help reduce the search space to focus on relevant areas, finding better architectures that outperform both the original labeled ones and those found by search baselines. Code is available at https://github.com/Ascend-Research/AutoBuild
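The core idea described above, aligning module embeddings with ground-truth performance so that interpretable importance scores can be read off the embeddings, can be illustrated with a pairwise ranking objective. The sketch below is only an illustration under stated assumptions, not the paper's implementation (which operates on architecture graphs; see the repository linked above): `ModuleScorer`, `EMBED_DIM`, `NUM_OPS`, and the GRU sequence mixer are all hypothetical choices made for this toy example.

```python
import torch
import torch.nn as nn

# Toy sketch (NOT the authors' implementation): score an operation sequence
# by the norm of its learned embedding, and train with a pairwise ranking
# loss so that modules from higher-performing architectures receive
# larger-magnitude embeddings. All names here are illustrative assumptions.

EMBED_DIM = 32
NUM_OPS = 8  # hypothetical size of the operation vocabulary


class ModuleScorer(nn.Module):
    """Embeds an operation sequence and scores it by embedding norm."""

    def __init__(self):
        super().__init__()
        self.op_embed = nn.Embedding(NUM_OPS, EMBED_DIM)
        self.mix = nn.GRU(EMBED_DIM, EMBED_DIM, batch_first=True)

    def forward(self, op_ids):  # op_ids: (batch, seq_len) int tensor
        x = self.op_embed(op_ids)
        _, h = self.mix(x)                 # final hidden state summarizes the module
        return h.squeeze(0).norm(dim=-1)   # importance score = embedding norm


def ranking_step(model, opt, mods_a, mods_b, perf_a, perf_b):
    """One step: embedding-norm scores should preserve the performance order."""
    loss_fn = nn.MarginRankingLoss(margin=0.1)
    score_a, score_b = model(mods_a), model(mods_b)
    # target = +1 where architecture A outperformed B, else -1
    y = torch.where(perf_a > perf_b, 1.0, -1.0)
    loss = loss_fn(score_a, score_b, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ModuleScorer()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Synthetic batch: pairs of modules with (random) relative performance labels.
    mods_a = torch.randint(0, NUM_OPS, (16, 4))
    mods_b = torch.randint(0, NUM_OPS, (16, 4))
    perf_a, perf_b = torch.rand(16), torch.rand(16)
    for step in range(5):
        print(ranking_step(model, opt, mods_a, mods_b, perf_a, perf_b))
```

Once scores of this kind are learned, high-scoring modules can be greedily assembled into a network, which is what lets the abstract claim construction "without any need for search."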
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Mills_2024_CVPR,
  author    = {Mills, Keith G. and Han, Fred X. and Salameh, Mohammad and Lu, Shengyao and Zhou, Chunhua and He, Jiao and Sun, Fengyu and Niu, Di},
  title     = {Building Optimal Neural Architectures using Interpretable Knowledge},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {5726-5735}
}