Color Enhancement using Deep Reinforcement Learning
Jongchan Park(Lunit Inc), Joon-Young Lee(Adobe Research), Donggeun Yoo(Lunit Inc, KAIST), In So Kweon(KAIST)
Learning-based color enhancement approaches typically learn to map from input images to retouched images. Previous methods do not explicitly model step-by-step human retouching processes and usually require expensive input-retouched image pairs. In this paper, we present a deep reinforcement learning based method for color enhancement. We formulate the color enhancement process as a Markov Decision Process where actions are defined as global color adjustment operations, and we learn the optimal global enhancement sequence using deep reinforcement learning. In addition, we present a ‘distort-and-recover’ training scheme which only requires high-quality reference images for training instead of input-retouched image pairs. Given high-quality reference images, we distort the images’ color distribution to form distorted-reference image pairs for training. Through extensive experiments, we show that our method produces enhancement results comparable to previous methods and that our deep reinforcement learning approach is more suitable for the ‘distort-and-recover’ training scheme than previous supervised learning approaches.
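The MDP formulation above can be sketched as follows. The action set and step sizes here are illustrative placeholders, not the paper's exact operations; the reward follows the idea of rewarding a decrease in L2 distance to the reference image.

```python
import numpy as np

# Hypothetical discrete global adjustment actions. The paper's actual
# action set (and step sizes) may differ; this only illustrates the MDP.
def adjust(img, action):
    """Apply one global color adjustment to an RGB image with values in [0, 1]."""
    if action == 0:    # brightness up
        out = img + 0.05
    elif action == 1:  # brightness down
        out = img - 0.05
    elif action == 2:  # contrast up
        out = (img - 0.5) * 1.1 + 0.5
    else:              # contrast down
        out = (img - 0.5) / 1.1 + 0.5
    return np.clip(out, 0.0, 1.0)

def step(img, action, reference):
    """One MDP transition: the reward is the decrease in mean squared
    color distance to the reference after applying the action."""
    nxt = adjust(img, action)
    reward = np.mean((img - reference) ** 2) - np.mean((nxt - reference) ** 2)
    return nxt, reward
```

An agent trained on this MDP learns to pick, at each step, the global operation that moves the current image closest to the reference, yielding an interpretable sequence of retouching actions.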
- With the 'Distort-and-Recover' training scheme, large databases of human-retouched images can be utilized for color enhancement. A personalized retouching agent can also be trained if a set of user-preferred images is given.
- We use various global color adjustment operations as actions, which avoid introducing noise or resolution restrictions in the enhancement process. Our experiments show that these global operations introduce the least distortion and noise compared to other DL-based approaches.
- We use the widely-used Deep Q-Network (DQN) and its variants as the color enhancement agent.
- We evaluate our method with L2 error, but since L2 error is not an accurate measure of color enhancement quality, we also conduct user studies for a fairer evaluation.
- More details will be presented at CVPR 2018.
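The 'Distort-and-Recover' pair generation can be sketched as below. The specific perturbations and their ranges are illustrative assumptions, not the paper's exact settings; the point is that training pairs are synthesized from reference images alone.

```python
import numpy as np

def distort(reference, rng):
    """Randomly perturb a reference image's global color distribution.
    Perturbation ranges here are illustrative, not the paper's settings."""
    img = reference.astype(np.float64)
    img = img * rng.uniform(0.7, 1.3)                  # random brightness scale
    img = (img - 0.5) * rng.uniform(0.7, 1.3) + 0.5    # random contrast scale
    gray = img.mean(axis=-1, keepdims=True)
    img = gray + (img - gray) * rng.uniform(0.7, 1.3)  # random saturation scale
    return np.clip(img, 0.0, 1.0)

def make_pairs(references, n_per_image=5, seed=0):
    """Build (distorted, reference) training pairs from reference images only,
    so no expensive input-retouched pairs are required."""
    rng = np.random.default_rng(seed)
    return [(distort(ref, rng), ref)
            for ref in references
            for _ in range(n_per_image)]
```

During training, the agent starts from the distorted image and is rewarded for recovering the reference, so any collection of high-quality (e.g., user-preferred) images can serve as supervision.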