[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Liang_2024_CVPR,
  author    = {Liang, Youwei and He, Junfeng and Li, Gang and Li, Peizhao and Klimovskiy, Arseniy and Carolan, Nicholas and Sun, Jiao and Pont-Tuset, Jordi and Young, Sarah and Yang, Feng and Ke, Junjie and Dvijotham, Krishnamurthy Dj and Collins, Katherine M. and Luo, Yiwen and Li, Yang and Kohlhoff, Kai J and Ramachandran, Deepak and Navalpakkam, Vidhya},
  title     = {Rich Human Feedback for Text-to-Image Generation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {19401-19411}
}
Rich Human Feedback for Text-to-Image Generation
Abstract
Recent Text-to-Image (T2I) generation models such as Stable Diffusion and Imagen have made significant progress in generating high-resolution images based on text descriptions. However, many generated images still suffer from issues such as artifacts/implausibility, misalignment with text descriptions, and low aesthetic quality. Inspired by the success of Reinforcement Learning with Human Feedback (RLHF) for large language models, prior works collected human-provided scores as feedback on generated images and trained a reward model to improve T2I generation. In this paper, we enrich the feedback signal by (i) marking image regions that are implausible or misaligned with the text, and (ii) annotating which words in the text prompt are misrepresented or missing on the image. We collect such rich human feedback on 18K generated images (RichHF-18K) and train a multimodal transformer to predict the rich feedback automatically. We show that the predicted rich human feedback can be leveraged to improve image generation, for example, by selecting high-quality training data to finetune and improve the generative models, or by creating masks with predicted heatmaps to inpaint the problematic regions. Notably, the improvements generalize to models (Muse) beyond those used to generate the images on which human feedback data were collected (Stable Diffusion variants). The RichHF-18K dataset will be released in our GitHub repository: https://github.com/google-research/google-research/tree/master/richhf_18k.
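To make the inpainting use case in the abstract concrete, below is a minimal sketch (not the authors' code) of how a predicted implausibility heatmap could be thresholded into a binary mask and passed, together with the original prompt, to an off-the-shelf inpainting model. The heatmap predictor itself (the paper's multimodal transformer) is assumed to exist separately and supply the `heatmap` array; the inpainting checkpoint name and the threshold value are illustrative choices, not specified by the paper.

import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline


def heatmap_to_mask(heatmap: np.ndarray, threshold: float = 0.5) -> Image.Image:
    """Binarize a [0, 1] heatmap into a PIL mask (white = region to inpaint)."""
    mask = (heatmap >= threshold).astype(np.uint8) * 255
    return Image.fromarray(mask, mode="L")


def repair_image(image: Image.Image, heatmap: np.ndarray, prompt: str) -> Image.Image:
    """Inpaint the regions that a predicted heatmap flags as problematic."""
    # Illustrative inpainting checkpoint; any diffusion inpainting model could be used.
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",
        torch_dtype=torch.float16,
    ).to("cuda")
    mask = heatmap_to_mask(heatmap).resize(image.size)
    # Regenerate only the masked regions, conditioned on the original prompt.
    return pipe(prompt=prompt, image=image, mask_image=mask).images[0]

The other use case mentioned in the abstract, filtering finetuning data, would follow the same pattern: score candidate images with the predicted feedback and keep only those above a chosen quality threshold.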