Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model

Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Weihan Shen, Xiaolong Zhu, Xiu Li; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 8941-8951

Abstract


Using reinforcement learning with human feedback (RLHF) has shown significant promise in fine-tuning diffusion models. Previous methods start by training a reward model that aligns with human preferences, then leverage RL techniques to fine-tune the underlying models. However, crafting an efficient reward model demands extensive datasets, an optimal architecture, and manual hyperparameter tuning, making the process both time- and cost-intensive. The direct preference optimization (DPO) method, effective in fine-tuning large language models, eliminates the necessity for a reward model. However, the extensive GPU memory requirement of the diffusion model's denoising process hinders the direct application of the DPO method. To address this issue, we introduce the Direct Preference for Denoising Diffusion Policy Optimization (D3PO) method to directly fine-tune diffusion models. The theoretical analysis demonstrates that although D3PO omits training a reward model, it effectively functions as the optimal reward model trained using human feedback data to guide the learning process. This approach requires no training of a reward model, proving to be more direct and cost-effective while minimizing computational overhead. In experiments, our method uses the relative scale of objectives as a proxy for human preference, delivering comparable results to methods using ground-truth rewards. Moreover, D3PO demonstrates the ability to reduce image distortion rates and generate safer images, overcoming challenges where robust reward models are lacking. Our code is publicly available at https://github.com/yk7333/D3PO.
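To make the abstract's core idea concrete, below is a minimal sketch of a DPO-style preference loss applied to pairs of denoising samples, in the spirit of fine-tuning without an explicit reward model as described above. It is not the authors' implementation: the function name, the beta hyperparameter, and the per-sample log-probability interface are illustrative assumptions; see the official repository for the actual D3PO code.

```python
# Hypothetical sketch of a DPO-style preference objective for denoising steps.
# Assumptions: log-probabilities of the sampled denoising actions are available
# for both the trainable model (logp_*) and a frozen reference copy (ref_logp_*);
# 'win' denotes the human-preferred sample, 'lose' the non-preferred one.
import torch
import torch.nn.functional as F

def d3po_style_loss(logp_win, logp_lose, ref_logp_win, ref_logp_lose, beta=0.1):
    # Implicit reward difference between preferred and non-preferred samples,
    # measured as log-probability ratios against the frozen reference model.
    logits = beta * ((logp_win - ref_logp_win) - (logp_lose - ref_logp_lose))
    # Maximize the likelihood that the preferred sample receives the higher
    # implicit reward (logistic loss on the difference), averaged over the batch.
    return -F.logsigmoid(logits).mean()

# Example usage with dummy log-probabilities for a batch of 4 preference pairs.
if __name__ == "__main__":
    lw, ll = torch.randn(4), torch.randn(4)
    rw, rl = torch.randn(4), torch.randn(4)
    print(d3po_style_loss(lw, ll, rw, rl))
```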

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Yang_2024_CVPR,
    author    = {Yang, Kai and Tao, Jian and Lyu, Jiafei and Ge, Chunjiang and Chen, Jiaxin and Shen, Weihan and Zhu, Xiaolong and Li, Xiu},
    title     = {Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {8941-8951}
}