CVPR 2018 Tutorial on GANs

Salt Lake City, USA

Friday, June 22nd, 2018

Location: Room 355 DEF, Calvin L. Rampton Salt Palace Convention Center in Salt Lake City, Utah

Generative adversarial networks (GANs) have been at the forefront of research on generative models over the last couple of years. GANs have been used for image generation, image processing, image synthesis from captions, image editing, visual domain adaptation, data generation for visual recognition, and many other applications, often leading to state-of-the-art results.

This tutorial aims to provide a broad overview of generative adversarial networks, organized into the following three parts:

  • Theoretical foundations such as basic concepts, mechanisms, and theoretical considerations
  • Best practices of the current state-of-the-art GAN and conditional GAN models, including network architectures, objective functions (the canonical minimax objective is recalled below the list), and other training tricks
  • Computer vision applications including visual domain adaptation, image processing (e.g., restoration, inpainting, super-resolution), image synthesis and manipulation, video prediction and generation, and synthetic data generation for visual recognition
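As a reference point for the first two parts, recall the canonical GAN objective introduced by Goodfellow et al. (2014): a two-player minimax game between a generator G and a discriminator D,

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

Here D is trained to distinguish real samples x from generated samples G(z), while G is trained to fool D. The conditional, image-to-image, and domain-adaptation variants covered in the later talks all build on this basic objective.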

As an example of the power of generative adversarial networks, see the results below, showing generated high-resolution, realistic portraits and dash-cam driving images.

Tutorial recordings

You can find the recordings of the tutorial on YouTube:

  • Part 1: Ian Goodfellow, Phillip Isola, Taesung Park and Jun-Yan Zhu
  • Part 2: Sanjeev Arora and Emily Denton
  • Part 3: Mihaela Rosca, Ming-Yu Liu, and Judy Hoffman
  • Part 4: Xiaolong Wang, Stefano Ermon, Carl Vondrick, and Alexei A. Efros

Schedule

09:00 Introduction to Generative Adversarial Networks [slides] [video]

Ian Goodfellow, Google Brain

09:30 Paired Image-to-Image Translation [keynote] [pdf] [video]

Phillip Isola, MIT

10:00 Unpaired Image-to-Image Translation with CycleGAN [slides] [pdf] [video]

Taesung Park, UC Berkeley, and Jun-Yan Zhu, MIT

10:30 Coffee Break

11:00 Can GANs actually learn the distribution? Some obstacles [slides] [video]

Sanjeev Arora, Princeton

11:45 Learning Disentangled Representations with an Adversarial Loss [slides] [video]

Emily Denton, NYU

12:15 Lunch break

13:30 VAE-GAN Hybrids [slides] [video]

Mihaela Rosca, DeepMind

14:00 Multimodal Unsupervised Image-to-Image Translation [slides] [video]

Ming-Yu Liu, NVIDIA

14:30 Adversarial Domain Adaptation [slides] [video]

Judy Hoffman, UC Berkeley

15:00 Coffee break

15:30 Adversaries for Detection and Action [slides] [pdf] [video]

Xiaolong Wang, CMU

16:00 Generative Adversarial Imitation Learning [slides] [video]

Stefano Ermon, Stanford

16:30 Video Generation and Prediction [slides] [video]

Carl Vondrick, Columbia University

17:00 Reasons to Love GANs [slides] [video]

Alexei A. Efros, UC Berkeley

Organizers

This tutorial was organized by Jun-Yan Zhu, Taesung Park, Mihaela Rosca, Phillip Isola, and Ian Goodfellow.

Contact the organizers at cvpr2018gantutorial@googlegroups.com