Diversity-Sensitive Conditional Generative Adversarial Networks

Dingdong Yang†, Seunghoon Hong†, Yunseok Jang†, Tianchen Zhao†, Honglak Lee†‡

†: University of Michigan, ‡: Google Brain

Abstract:

We propose a simple yet highly effective method that addresses the mode-collapse problem in the Conditional Generative Adversarial Network (cGAN). Although conditional distributions are multi-modal (i.e., having many modes) in practice, most cGAN approaches tend to learn an overly simplified distribution where an input is always mapped to a single output regardless of variations in the latent code. To address this issue, we propose to explicitly regularize the generator to produce diverse outputs depending on latent codes. The proposed regularization is simple, general, and can be easily integrated into most conditional GAN objectives. Additionally, explicit regularization of the generator allows our method to control the balance between visual quality and diversity. We demonstrate the effectiveness of our method on three conditional generation tasks: image-to-image translation, image inpainting, and future video prediction. We show that simply adding our regularization to existing models leads to surprisingly diverse generations, substantially outperforming previous approaches for multi-modal conditional generation that were specifically designed for each individual task.
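
To make the idea concrete, below is a minimal PyTorch-style sketch of one way such a diversity-sensitive regularizer could look: it encourages the generator's output to change when the latent code changes by rewarding the ratio of output distance to latent distance. The function name, the choice of L1 norms, the clipping threshold `tau`, and the weighting are illustrative assumptions; please refer to the paper and released code for the exact formulation.

```python
import torch

def diversity_regularization(generator, x, z_dim, tau=1.0):
    """Hypothetical sketch of a diversity-sensitive term for a conditional
    generator G(x, z): penalize a small ratio of output change to latent
    change, so different latent codes map to different outputs."""
    z1 = torch.randn(x.size(0), z_dim, device=x.device)
    z2 = torch.randn(x.size(0), z_dim, device=x.device)
    out1, out2 = generator(x, z1), generator(x, z2)
    # Per-sample L1 distance between the two outputs (assumed B x C x H x W images).
    num = torch.mean(torch.abs(out1 - out2), dim=[1, 2, 3])
    # Per-sample L1 distance between the two latent codes.
    den = torch.mean(torch.abs(z1 - z2), dim=1) + 1e-8
    # Bound the ratio by tau so the generator is not pushed toward arbitrarily large changes.
    ratio = torch.clamp(num / den, max=tau)
    # Return a loss to minimize (negative diversity), to be added to the generator objective.
    return -ratio.mean()
```

In use, this term would be added to the generator's adversarial loss with a weight, e.g. `g_loss = adv_loss + lambda_ds * diversity_regularization(G, x, z_dim)`, where the weight controls the trade-off between visual quality and diversity described above.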

Paper:

Diversity-Sensitive Conditional Generative Adversarial Networks

Dingdong Yang, Seunghoon Hong, Yunseok Jang, Tianchen Zhao, and Honglak Lee

in ICLR 2019

[PDF] [Code] [arXiv]

Results:

1. Interpolation on Image-to-Image Translation Task

(a) Edge->Image

(b) Label->Image

(c) Map->Image

(d) Cityscapes dataset

2. Interpolation on Image Inpainting Task

3. Comparisons on Stochastic Video Prediction Task

(a) KTH human action dataset

(b) BAIR robot arm dataset