Improved Noise2Noise Denoising With Limited Data
Deep learning methods have proven to be very effective for the task of image denoising even when clean reference images are not available. In particular, Noise2Noise, which requires pairs of noisy images during the training phase, has been shown to yield results as good as those of approaches using pairs of noisy and clean images (Noise2Clean). However, the performance of Noise2Noise drops when the amount of training data is reduced, limiting its usefulness in practical scenarios. In this work, an analysis of the Noise2Noise learning strategy is performed using real-noise and synthetic datasets. This paper demonstrates, across diverse network architectures and loss functions, that the duplication of information in the noisy pairs can be exploited to achieve improved denoising performance with Noise2Noise. Additionally, the issue of overfitting in Noise2Noise is analyzed, given its relevance when training with limited data, and an interpretable early-termination criterion is proposed.
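As a minimal illustration of the idea that each noisy pair carries duplicated information, the sketch below trains a toy linear denoiser on a pair of noisy 1-D signals, optionally using the swapped pair (y, x) in addition to (x, y) as training data. All names and the toy denoiser are hypothetical and only illustrate the Noise2Noise principle; the paper's actual networks and losses are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): a clean 1-D signal observed twice
# under independent additive Gaussian noise, as in the Noise2Noise setting.
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
x = clean + rng.normal(0.0, 0.3, clean.shape)  # first noisy realization
y = clean + rng.normal(0.0, 0.3, clean.shape)  # second noisy realization

def make_pairs(x, y, swap=True):
    # Exploiting the duplicated information: the pair is symmetric, so both
    # (x -> y) and (y -> x) are valid Noise2Noise training examples.
    pairs = [(x, y)]
    if swap:
        pairs.append((y, x))
    return pairs

def fit_kernel(pairs, k=9):
    # Toy "denoiser": a linear sliding-window filter fitted by least squares
    # so that each k-sample window of the input predicts the noisy target
    # at the window center (never the clean signal).
    A_rows, b_rows = [], []
    for inp, tgt in pairs:
        for i in range(len(inp) - k + 1):
            A_rows.append(inp[i:i + k])
            b_rows.append(tgt[i + k // 2])
    w, *_ = np.linalg.lstsq(np.array(A_rows), np.array(b_rows), rcond=None)
    return w

w = fit_kernel(make_pairs(x, y, swap=True))
denoised = np.convolve(x, w[::-1], mode="same")

# The clean signal is used only for evaluation, not for training.
mse_noisy = np.mean((x - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

Because the noise in the target is zero-mean and independent of the input, minimizing the error against a noisy target drives the filter toward the same solution as a clean target would, which is the core Noise2Noise argument; using the swapped pair simply doubles the effective training data at no acquisition cost.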