@InProceedings{Sehwag_2022_CVPR,
  author    = {Sehwag, Vikash and Hazirbas, Caner and Gordo, Albert and Ozgenel, Firat and Canton, Cristian},
  title     = {Generating High Fidelity Data From Low-Density Regions Using Diffusion Models},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {11492-11501}
}
Generating High Fidelity Data From Low-Density Regions Using Diffusion Models
Abstract
Our work addresses the sample deficiency in low-density regions of the data manifold in common image datasets. We leverage diffusion-based generative models to synthesize novel images from low-density regions. We observe that uniform sampling from diffusion models predominantly draws from high-density regions of the data manifold. We therefore modify the sampling process to guide it toward low-density regions while simultaneously maintaining the fidelity of the synthetic data. We rigorously demonstrate that our process successfully generates novel, high-fidelity samples from low-density regions. We further examine the generated samples and show that the model does not merely memorize low-density data but indeed learns to generate novel samples from low-density regions.
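The general idea of steering a score-guided sampler toward low-density regions can be illustrated with a toy sketch. The example below is NOT the paper's method: it uses a one-dimensional standard normal as a stand-in data density (whose score is known in closed form) and plain Langevin dynamics instead of a diffusion model, and it biases sampling toward low-density regions by the simple device of tempering the score (sampling from p(x)^gamma with gamma < 1), which flattens the target and pushes samples into the low-density tails. The function `langevin_sample` and all its parameters are hypothetical names for this sketch.

```python
# Toy sketch (assumptions: target is N(0,1); Langevin dynamics stands in
# for a diffusion sampler; "low-density guidance" is approximated by
# tempering the score). This is NOT the paper's actual guidance scheme.
import math
import random

def langevin_sample(gamma, n_chains=2000, n_steps=400, eps=0.05, seed=0):
    """Unadjusted Langevin sampling of p(x)**gamma for p = N(0, 1).

    gamma = 1 samples the original density; gamma < 1 samples a flattened
    (tempered) density, so chains drift further into low-density tails.
    """
    rng = random.Random(seed)
    xs = [0.0] * n_chains  # every chain starts at the density peak
    for _ in range(n_steps):
        for i in range(n_chains):
            score = -xs[i]  # score of N(0,1) is d/dx log p(x) = -x
            # Tempering scales the drift by gamma; the noise is unchanged.
            xs[i] += eps * gamma * score + math.sqrt(2 * eps) * rng.gauss(0, 1)
    return xs

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

plain = std(langevin_sample(gamma=1.0))      # stationary std is about 1
tempered = std(langevin_sample(gamma=0.25))  # stationary std is about 1/sqrt(0.25) = 2
```

Here the tempered chains spread roughly twice as wide as the plain ones, i.e. a larger fraction of samples lands in regions where the original density is low; a practical method additionally needs a fidelity constraint so that samples stay on the data manifold, which is the trade-off the abstract emphasizes.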