GAN Improvement with Adversarial Code Learning

Joint Code Learning in Latent Space [Paper]

We introduce the "adversarial code learning" (ACL) module, which improves the overall image generation performance of several types of deep models. Instead of modeling the posterior distribution in the pixel space of the generator, ACL jointly learns a latent code with an image encoder/inference net, taking prior noise as input. ACL is a portable module that brings much more flexibility and many more possibilities to generative model design. First, it allows non-generative models such as autoencoders and standard classification models to be converted into decent generative models. Second, it enhances the performance of existing GANs by generating meaningful codes and images from any part of the prior.
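The sketch below illustrates our reading of this setup in PyTorch: a code generator maps prior noise to latent codes, an image encoder produces codes from real images, and a code discriminator adversarially aligns the two code distributions. All module names, sizes, and the training step are illustrative assumptions, not the reference implementation.

```python
# Minimal sketch of adversarial learning in code space (assumed design,
# not the authors' reference code). An encoder (any inference net) supplies
# "real" codes; a code generator maps prior noise to "fake" codes; a code
# discriminator aligns the two distributions adversarially.
import torch
import torch.nn as nn

LATENT_DIM, NOISE_DIM = 128, 64  # illustrative sizes

class CodeGenerator(nn.Module):
    """Maps prior noise z to a latent code (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )

    def forward(self, z):
        return self.net(z)

class CodeDiscriminator(nn.Module):
    """Scores latent codes: encoder codes vs. generated codes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, c):
        return self.net(c)

g_code, d_code = CodeGenerator(), CodeDiscriminator()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(g_code.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(d_code.parameters(), lr=2e-4)

def acl_step(real_codes):
    """One adversarial step in code space; real_codes come from the encoder."""
    n = real_codes.size(0)
    z = torch.randn(n, NOISE_DIM)
    fake_codes = g_code(z)
    # Discriminator: encoder codes labeled real (1), generated codes fake (0).
    d_loss = bce(d_code(real_codes), torch.ones(n, 1)) + \
             bce(d_code(fake_codes.detach()), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Code generator: try to fool the code discriminator.
    g_loss = bce(d_code(g_code(z)), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Because the adversarial game is played over low-dimensional codes rather than pixels, the same module can be bolted onto an autoencoder, a classifier's feature extractor, or an existing GAN with little change.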

We have incorporated the ACL module into the frameworks above and run experiments on synthetic data, MNIST, CIFAR-10, and CelebA. Our models achieve significant improvements, demonstrating the generality of the approach for image generation tasks.

Face Manipulation with ACL-GAN

The proposed ACL-GAN not only generates more realistic images, it can also interpolate between two real faces, which is not possible with conventional GANs. This is because the ACL module learns a mapping between the input noise and the embeddings of real images. The aligned latent space has the advantages discussed above. A hedged sketch of this interpolation is shown below.
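The following sketch shows one way such interpolation can be done with the aligned latent space: encode two real faces with the inference net, linearly interpolate their codes, and decode each intermediate code back to an image. Here `encoder` and `decoder` are placeholders for the trained ACL-GAN networks; the function name and signature are our own assumptions.

```python
# Hypothetical face interpolation via the aligned latent space.
# `encoder` / `decoder` stand in for the trained ACL-GAN nets.
import torch

@torch.no_grad()
def interpolate_faces(encoder, decoder, face_a, face_b, steps=8):
    """Returns a batch of images walking from face_a to face_b in code space."""
    c_a = encoder(face_a.unsqueeze(0))          # (1, latent_dim)
    c_b = encoder(face_b.unsqueeze(0))          # (1, latent_dim)
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    codes = (1 - alphas) * c_a + alphas * c_b   # linear blend of the two codes
    return decoder(codes)                       # (steps, C, H, W)
```

Because the encoder's embeddings and the prior-driven codes share one space, every intermediate code decodes to a plausible face rather than falling into a region the generator has never seen.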