Projected GANs Converge Faster

Axel Sauer, Kashyap Chitta, Jens Müller, Andreas Geiger

NeurIPS 2021

[PDF] [Supplementary] [Talk] [Code]

The GIF is generated by a Projected GAN trained on 4 images.

TL;DR: Training GANs in pretrained feature spaces improves image quality, training speed, and sample efficiency.

Generative Adversarial Networks (GANs) produce high-quality images but are challenging to train. They need careful regularization, vast amounts of compute, and expensive hyper-parameter sweeps. We make significant headway on these issues by projecting generated and real samples into a fixed, pretrained feature space. Motivated by the finding that the discriminator cannot fully exploit features from deeper layers of the pretrained model, we propose a more effective strategy that mixes features across channels and resolutions. Our Projected GAN improves image quality, sample efficiency, and convergence speed. It is further compatible with resolutions of up to one Megapixel and advances the state-of-the-art Fréchet Inception Distance (FID) on twenty-two benchmark datasets. Importantly, Projected GANs match the previously lowest FIDs up to 40 times faster, cutting the wall-clock time from 5 days to less than 3 hours given the same computational resources.
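To make the core idea concrete, here is a minimal toy sketch of the projection step described above: real and generated samples are mapped through a fixed feature extractor that is never updated, and the discriminator operates only on those projected features. The random projection, the linear discriminator, and the hinge loss below are illustrative stand-ins, not the paper's actual architecture (which projects through a pretrained network and mixes features across channels and resolutions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" feature projector: a stand-in for a real pretrained
# network. Its weights are fixed and receive no gradient updates.
P = rng.standard_normal((64, 16))  # maps 64-dim samples to 16-dim features

def project(x):
    """Map samples into the fixed feature space (random projection + ReLU)."""
    return np.maximum(x @ P, 0.0)

# A simple linear discriminator that only ever sees projected features.
w = rng.standard_normal(16) * 0.01

def d_logits(feats):
    return feats @ w

real = rng.standard_normal((8, 64))        # stand-in for real samples
fake = rng.standard_normal((8, 64)) * 0.5  # stand-in for generator output

# Hinge losses, a common choice for GAN discriminators: both real and fake
# samples pass through the same frozen projection before being scored.
f_real = d_logits(project(real))
f_fake = d_logits(project(fake))
d_loss = np.mean(np.maximum(0.0, 1.0 - f_real)) \
       + np.mean(np.maximum(0.0, 1.0 + f_fake))
g_loss = -np.mean(f_fake)
```

Because the projector stays fixed, only the (small) discriminator head and the generator are trained, which is the property the abstract credits for the faster, more sample-efficient training.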

Latent Interpolations

Interpolation videos (not shown here) for the following datasets:

- Oxford Flowers (1.3k images)
- Pokemon (833 images)
- Landscapes (4k images)
- LSUN Bedroom (1.3M images)
- AFHQ Dog (5k images)
- AFHQ Wild (5k images)