NoiseTransfer: Image Noise Generation with Contrastive Embeddings

Seunghwan Lee, Tae Hyun Kim; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 3569-3585

Abstract


Deep image denoising networks have achieved impressive success with the help of considerably large synthetic training datasets. However, real-world denoising remains a challenging problem due to the dissimilarity between the distributions of real and synthetic noisy datasets. Although several real-world noisy datasets have been presented, the number of training pairs (i.e., pairs of clean and real noisy images) is limited, and acquiring more real noise data is laborious and expensive. To mitigate this problem, numerous attempts have been made to simulate real noise models with generative models. Nevertheless, previous works had to train multiple networks to handle multiple different noise distributions. By contrast, we propose a new generative model that can synthesize noisy images with multiple different noise distributions. Specifically, we adopt recent contrastive learning to learn distinguishable latent features of the noise. Moreover, our model can generate new noisy images by transferring the noise characteristics solely from a single reference noisy image. We demonstrate the accuracy and effectiveness of our noise model for both known and unknown noise removal.
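The contrastive objective mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it is a generic InfoNCE-style loss (a standard contrastive formulation) in NumPy, where each anchor noise embedding is pulled toward the embedding of another patch with the same noise distribution and pushed away from embeddings of other noise distributions in the batch. All names (`info_nce_loss`, `temperature`) are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchor, positive, temperature=0.1):
    """Generic InfoNCE-style contrastive loss (illustrative, not the
    paper's exact objective).

    anchor, positive: (N, D) arrays of embeddings; row i of `positive`
    is assumed to share the noise distribution of row i of `anchor`,
    while all other rows act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    p = positive / np.linalg.norm(positive, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (N, N) similarity matrix

    # Softmax cross-entropy with the diagonal as the correct class.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy usage: embeddings that match their positives score a low loss,
# mismatched (random) pairs score a high one.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
mismatched = rng.normal(size=(8, 16))
print(info_nce_loss(emb, emb), info_nce_loss(emb, mismatched))
```

With such a loss, embeddings of patches drawn from the same noise distribution cluster together, which is what makes the latent noise features "distinguishable" and usable as a conditioning signal for generation.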

Related Material


@InProceedings{Lee_2022_ACCV,
  author    = {Lee, Seunghwan and Kim, Tae Hyun},
  title     = {NoiseTransfer: Image Noise Generation with Contrastive Embeddings},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {December},
  year      = {2022},
  pages     = {3569-3585}
}