Computing the Testing Error Without a Testing Set

Ciprian A. Corneanu, Sergio Escalera, Aleix M. Martinez; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 2677-2685

Abstract


Deep Neural Networks (DNNs) have revolutionized computer vision. We now have DNNs that achieve top (accuracy) results in many problems, including object recognition, facial expression analysis, and semantic segmentation, to name but a few. The design of the DNNs that achieve top results is, however, non-trivial and mostly done by trial-and-error. That is, typically, researchers will derive many DNN architectures (i.e., topologies) and then test them on multiple datasets. However, there are no guarantees that the selected DNN will perform well in the real world. One can use a testing set to estimate the performance gap between the training and testing sets, but avoiding overfitting to the testing data is then a concern. Using a sequestered testing set may address this problem, but it requires constant updates of the dataset, a very expensive venture. Here, we derive an algorithm to estimate the performance gap between training and testing without the need for a testing dataset. Specifically, we derive a set of persistent topology measures that identify when a DNN is learning to generalize to unseen samples. We provide extensive experimental validation on multiple networks and datasets to demonstrate the feasibility of the proposed approach.
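
To make the idea concrete, the sketch below illustrates one way a persistent-topology summary could be computed from a network's unit activations: build a correlation-distance matrix over units, run Vietoris-Rips persistent homology on it, and collapse the resulting diagrams into a single statistic. This is a minimal illustration only, not the authors' exact algorithm; it assumes the numpy and ripser packages, and the function name, the choice of correlation distance, and the mean-lifetime summary are hypothetical.

```python
# Illustrative sketch (not the paper's exact procedure): summarize the
# persistent topology induced by a network's unit activations.
import numpy as np
from ripser import ripser


def topology_summary(activations: np.ndarray, maxdim: int = 1) -> float:
    """activations: (num_samples, num_units) array of unit responses on
    training data. Returns the mean lifetime of topological features, a
    simple stand-in for a persistent-topology measure tracked during training."""
    # Correlation distance between units: d(i, j) = 1 - |corr(a_i, a_j)|.
    corr = np.corrcoef(activations.T)
    dist = 1.0 - np.abs(corr)
    np.fill_diagonal(dist, 0.0)

    # Vietoris-Rips persistent homology on the precomputed distance matrix.
    dgms = ripser(dist, maxdim=maxdim, distance_matrix=True)["dgms"]

    # Collapse the diagrams into one scalar: average (death - birth) over
    # all finite features in dimensions 0..maxdim.
    lifetimes = np.concatenate(
        [d[np.isfinite(d[:, 1]), 1] - d[np.isfinite(d[:, 1]), 0] for d in dgms]
    )
    return float(lifetimes.mean()) if lifetimes.size else 0.0
```

In the spirit of the abstract, such a summary could be tracked over training and calibrated against the observed train-test gap on networks where a testing set is available, then used to estimate the gap where it is not.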

Related Material


[bibtex]
@InProceedings{Corneanu_2020_CVPR,
  author    = {Corneanu, Ciprian A. and Escalera, Sergio and Martinez, Aleix M.},
  title     = {Computing the Testing Error Without a Testing Set},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2020},
  pages     = {2677-2685}
}