Learning to Schedule Resistant to Adversarial Attacks in Diffusion Probabilistic Models Under the Threat of Lipschitz Singularities

Sanghwa Hong; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 2957-2966

Abstract


Recently, the field of generative models has advanced significantly with the introduction of Diffusion Probabilistic Models (DPMs). However, the discovery of Lipschitz singularities within DPMs reveals a vulnerability to subtle adversarial attacks, particularly at timesteps close to zero. This paper introduces a novel approach to enhancing the robustness of DPMs against adversarial attacks, specifically addressing the challenge posed by Lipschitz singularities. By implementing a dynamic scheduling strategy for sigma through Reinforcement Learning (RL), we mitigate the adverse effects of adversarial attacks that exploit vulnerabilities linked to Lipschitz singularities. Experimental results demonstrate the effectiveness of our approach in maintaining high-quality image generation.
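The abstract gives no implementation details, but the core idea, tuning a noise schedule sigma(t) with a policy-gradient method so that it stays well-behaved near t = 0, can be illustrated with a minimal toy sketch. Everything below is a hypothetical stand-in for illustration, not the paper's method: the schedule parameterization sigma(t) = exp(theta_0)*t + exp(theta_1), the Lipschitz proxy, the reward, and all hyperparameters are assumptions.

```python
import numpy as np

# Illustrative sketch only: a toy REINFORCE loop that tunes the parameters
# of a noise schedule sigma(t). The parameterization, reward, and
# hyperparameters are hypothetical, not taken from the paper.

rng = np.random.default_rng(0)

def sigma(t, theta):
    """Parametric schedule: sigma(t) = exp(theta[0]) * t + exp(theta[1]).
    The floor exp(theta[1]) keeps sigma bounded away from zero near t = 0,
    the regime where Lipschitz singularities arise."""
    return np.exp(theta[0]) * t + np.exp(theta[1])

def lipschitz_proxy(theta, eps=1e-3):
    """Crude proxy for the Lipschitz constant near t = 0: the score of a
    Gaussian scales like 1/sigma, so its sensitivity to t blows up as
    sigma(t) -> 0. We take the largest |d/dt (1/sigma)| on a grid."""
    t = np.linspace(eps, 1.0, 100)
    score_scale = 1.0 / sigma(t, theta)
    return np.max(np.abs(np.gradient(score_scale, t)))

def reward(theta):
    """Toy reward: penalize a large Lipschitz proxy (robustness term) and
    penalize drifting far from a reference linear schedule (quality term)."""
    t = np.linspace(1e-3, 1.0, 100)
    fidelity = -np.mean((sigma(t, theta) - t) ** 2)
    return fidelity - 1e-3 * lipschitz_proxy(theta)

# REINFORCE with a Gaussian policy over the two schedule parameters.
mu = np.zeros(2)
lr, std = 0.05, 0.1
for step in range(200):
    thetas = mu + std * rng.standard_normal((32, 2))  # sample actions
    rewards = np.array([reward(th) for th in thetas])
    adv = rewards - rewards.mean()                    # baseline-subtracted
    grad = (adv[:, None] * (thetas - mu)).mean(0) / std ** 2
    mu += lr * grad

print("learned schedule params:", mu, "reward:", reward(mu))
```

The design point the sketch tries to capture is that the learnable floor on sigma trades generation fidelity against the blow-up of the score's time-sensitivity near t = 0; a reward that penalizes that blow-up steers the schedule away from the singular region.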

Related Material


[bibtex]
@InProceedings{Hong_2024_CVPR,
    author    = {Hong, Sanghwa},
    title     = {Learning to Schedule Resistant to Adversarial Attacks in Diffusion Probabilistic Models Under the Threat of Lipschitz Singularities},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {2957-2966}
}