DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models
Ying Fan*, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu,
Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, Kimin Lee*
*Equal technical contribution
Google Research, University of Wisconsin-Madison, UC Berkeley
[Paper] [Code: coming soon]
Abstract
Learning from human feedback has been shown to improve text-to-image models. These techniques first learn a reward function that captures what humans care about in the task and then improve the models based on the learned reward function. Even though relatively simple approaches (e.g., rejection sampling based on reward scores) have been investigated, fine-tuning text-to-image models with the reward function remains challenging. In this work, we propose using online reinforcement learning (RL) to fine-tune text-to-image models. We focus on diffusion models, defining the fine-tuning task as an RL problem, and updating the pre-trained text-to-image diffusion models using policy gradient to maximize the feedback-trained reward. Our approach, coined DPOK, integrates policy optimization with KL regularization. We conduct an analysis of KL regularization for both RL fine-tuning and supervised fine-tuning. In our experiments, we show that DPOK is generally superior to supervised fine-tuning with respect to both image-text alignment and image quality.
Summary of Contribution
We frame the maximization of the expected reward (with respect to a reward learned from human feedback) of images generated by a diffusion model given text prompts as an online RL problem. Moreover, we present DPOK (Diffusion Policy Optimization with KL regularization), which utilizes KL regularization with respect to the pre-trained text-to-image model as an implicit reward to stabilize RL fine-tuning (a schematic form of this objective appears after this list).
We study incorporating KL regularization into supervised fine-tuning of diffusion models, which can mitigate some failure modes (e.g., generating over-saturated images).
We discuss key differences between supervised fine-tuning and online fine-tuning of text-to-image models.
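As a schematic of the objective above (notation ours, summarizing the high-level description rather than reproducing the paper's exact equations): RL fine-tuning maximizes the learned reward on the final image while penalizing per-step divergence from the pre-trained denoiser, and KL-regularized supervised fine-tuning adds an analogous penalty to the supervised loss.

    % Schematic DPOK objective (notation ours): r is the learned reward on the
    % final image x_0 given prompt z; p_theta is the fine-tuned denoising
    % distribution and p_pre the frozen pre-trained one; beta weights the KL term.
    \max_{\theta}\; \mathbb{E}\big[ r(x_0, z) \big]
      \;-\; \beta \sum_{t=1}^{T}
      \mathbb{E}\Big[ \mathrm{KL}\big( p_\theta(x_{t-1} \mid x_t, z)
        \,\big\|\, p_{\mathrm{pre}}(x_{t-1} \mid x_t, z) \big) \Big]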
Comparison of Supervised Fine-tuning and RL Fine-tuning
We first evaluate the performance of the original and fine-tuned text-to-image models w.r.t. specific capabilities, such as generating objects with specific colors, counts, or locations; and composing multiple objects.
We find that DPOK is generally superior to supervised fine-tuning (SFT) with respect to both image-text alignment and image quality.
Effect of KL Regularization
KL regularization in online RL is effective in attaining both high reward and high aesthetic scores. We observe that the RL model without KL regularization can generate lower-quality images (e.g., over-saturated colors and unnatural shapes). In the case of SFT, we find that KL regularization can mitigate some failure modes of SFT without KL and improve aesthetic scores, but it generally suffers from lower ImageReward scores.
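For concreteness, below is a minimal, self-contained sketch (ours, not the authors' released code) of how a per-step KL penalty enters a policy-gradient update. Everything here is a toy assumption: a single denoising step modeled as a fixed-variance Gaussian policy, a stand-in scalar reward, and hypothetical names such as policy_mean and beta. The reward-weighted log-probability gives the policy gradient, and an analytic KL term to the frozen pre-trained policy is added to the loss.

    # Minimal sketch (ours) of a DPOK-style update on a toy Gaussian "denoising" policy.
    # Assumptions (not from the paper's code): one denoising step, a scalar reward,
    # fixed-variance Gaussians; real diffusion fine-tuning applies this per step.
    import torch
    from torch.distributions import Normal, kl_divergence

    torch.manual_seed(0)
    dim = 16                      # toy latent dimension
    beta = 0.01                   # KL regularization weight (hypothetical value)

    # Frozen pre-trained policy mean and trainable fine-tuned policy mean.
    pre_mean = torch.zeros(dim)
    policy_mean = torch.zeros(dim, requires_grad=True)
    opt = torch.optim.Adam([policy_mean], lr=1e-2)

    def reward(x):
        # Stand-in for a learned reward model (e.g., ImageReward) on the sample.
        return -(x - 1.0).pow(2).mean()

    for step in range(100):
        policy = Normal(policy_mean, 1.0)
        x = policy.sample()                  # sampled denoising "action"
        logp = policy.log_prob(x).sum()      # log-probability under current policy
        r = reward(x)
        # Policy gradient on the reward, plus analytic KL to the frozen policy.
        kl = kl_divergence(policy, Normal(pre_mean, 1.0)).sum()
        loss = -(r.detach() * logp) + beta * kl
        opt.zero_grad()
        loss.backward()
        opt.step()

In the full method, the analogous update is applied across the whole diffusion chain with the reward given on the final image; the collapse to a single step here is purely for illustration.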
Reducing Bias in the Pre-trained Model
The original model, which is trained on large-scale datasets from the web, tends to produce whiskey-related images for the prompt "Four roses," because a whiskey brand bears the same name. In contrast, the RL fine-tuned model with ImageReward generates images associated with the flower "rose".
Fine-tuning on Complex and Long Text Prompts
Bibtex
@article{2023DPOK,
title={DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models},
author={Fan, Ying and Watkins, Olivia and Du, Yuqing and Liu, Hao and Ryu, Moonkyung and Boutilier, Craig and Abbeel, Pieter and Ghavamzadeh, Mohammad and Lee, Kangwook and Lee, Kimin},
journal={arXiv preprint arXiv:2305.16381},
year={2023}
}