Continuous Conditional GANs with Generator Regularization
Speaker: Yunkai Zhang
Abstract:
Conditional Generative Adversarial Networks are known to be difficult to train, especially when the conditions are continuous and high-dimensional. In this talk, we propose to partially alleviate this difficulty with a simple regularization term on the GAN generator loss in the form of a Lipschitz penalty. The intuition behind this penalty is that when the generator is fed neighboring conditions in the continuous condition space, the regularization term leverages this neighborhood information and pushes the generator toward similar conditional distributions for neighboring conditions. We then analyze the effect of the proposed regularization term and demonstrate its robust performance on a range of synthetic tasks as well as real-world conditional time series generation tasks.
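To make the idea concrete, the penalty described above can be sketched as follows. This is a minimal illustration, not the speaker's implementation: the toy `generator` function, the perturbation scale `eps`, and the specific finite-difference form of the penalty are all assumptions for demonstration purposes.

```python
import random

def generator(z, u, w=2.0):
    # Toy deterministic "generator" mapping noise z and condition u to a
    # sample; stands in for a neural network (w is a hypothetical weight).
    return w * u + 0.1 * z

def lipschitz_penalty(gen, zs, us, eps=0.05):
    """Estimate a Lipschitz penalty on the generator: perturb each condition
    u to a neighboring condition u', then penalize the rate of change of the
    output, |G(z, u) - G(z, u')| / |u - u'|, averaged over samples.
    Added to the usual generator loss, this pushes samples generated under
    neighboring conditions to stay close to each other."""
    total = 0.0
    for z, u in zip(zs, us):
        delta = random.uniform(-eps, eps)   # sample a neighboring condition
        u_near = u + delta
        rate = abs(gen(z, u) - gen(z, u_near)) / (abs(delta) + 1e-12)
        total += rate
    return total / len(zs)

random.seed(0)
zs = [random.gauss(0, 1) for _ in range(100)]
us = [random.uniform(0, 1) for _ in range(100)]
pen = lipschitz_penalty(generator, zs, us)
# For the linear toy generator, the change rate is exactly |w| = 2.
```

In a training loop, this term would typically be weighted by a hyperparameter and added to the generator loss, so that the penalty trades off sample fidelity against smoothness across the continuous condition space.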