Exploring the Potential of Neural Dataset Search
Although Neural Architecture Search (NAS), which automatically explores architectures for the best performance, is now well established, the discussion has not yet been extended to datasets. We discuss the potential of Neural Dataset Search (NDS), which explores the appropriate configuration of a pre-training dataset to achieve a better pre-training effect. NDS is trained to find the optimal parameters of the pre-training dataset for a given network architecture and downstream task, which allows it to predict the optimal pre-training parameters for a new, unseen task in one shot. Thus, NDS has the potential to raise the effectiveness of pre-training from the bottom up. This paper therefore focuses on formula-driven supervised learning and, as a first step, verifies the appropriate configuration for Residual Networks (ResNet) and the Fractal DataBase (FractalDB). The experimental results confirm that the FractalDB generation parameters that yield the best pre-training effect differ among ResNet-18, ResNet-50, and ResNet-152. These observations reveal that there is an image representation or dataset structure (e.g., input size, parameters, categories) suited to a particular architecture. We hope these results will encourage further research on NDS that fully exploits pre-training on synthetic images.
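To make the NDS idea concrete, the search over pre-training dataset configurations can be sketched as an exhaustive search over generation parameters, scored by a pre-train/fine-tune evaluation. This is a minimal illustrative sketch only: the parameter names (`categories`, `instances_per_category`, `image_size`) and the surrogate scoring function are assumptions for demonstration, not the paper's actual search space or method, and a real run would replace `proxy_score` with actual pre-training and downstream evaluation.

```python
from itertools import product

# Hypothetical FractalDB-style generation parameters to search over.
# These names and ranges are illustrative assumptions, not the
# paper's exact search space.
SEARCH_SPACE = {
    "categories": [100, 1000],
    "instances_per_category": [100, 1000],
    "image_size": [224, 362],
}

def proxy_score(config):
    """Stand-in for: pre-train on a dataset generated with `config`,
    fine-tune on the downstream task, and return accuracy.
    Here it is a toy surrogate so the sketch is runnable."""
    return (config["categories"] * config["instances_per_category"]) / config["image_size"]

def search_dataset_config(space, score_fn):
    """Exhaustively evaluate every dataset configuration and keep the best."""
    best_cfg, best_score = None, float("-inf")
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = score_fn(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best_cfg, best_score = search_dataset_config(SEARCH_SPACE, proxy_score)
print(best_cfg, best_score)
```

In practice the search space is far too expensive for exhaustive evaluation with real training runs, which is why the abstract emphasizes learning to predict good configurations in one shot rather than searching per task.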