CVPR 2018 Tutorial on GANs
Salt Lake City, USA
June 22nd, 2018, Friday
Location: Room 355 DEF, Calvin L. Rampton Salt Palace Convention Center in Salt Lake City, Utah
Generative adversarial networks (GANs) have been at the forefront of research on generative models in the last couple of years. GANs have been used for image generation, image processing, image synthesis from captions, image editing, visual domain adaptation, data generation for visual recognition, and many other applications, often leading to state-of-the-art results.
This tutorial aims to provide a broad overview of generative adversarial networks, organized into three parts:
- Theoretical foundations such as basic concepts, mechanisms, and theoretical considerations
- Best practices of the current state-of-the-art GAN and conditional GAN models, including network architectures, objective functions, and other training tricks
- Computer vision applications including visual domain adaptation, image processing (e.g., restoration, inpainting, super-resolution), image synthesis and manipulation, video prediction and generation, and synthetic data generation for visual recognition
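As background for the theoretical foundations covered in the first part, the original GAN objective from Goodfellow et al. (2014) frames training as a minimax game between a generator G and a discriminator D, where z is noise drawn from a prior p_z:

```latex
\min_G \max_D \; V(D, G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_z(z)}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

The discriminator D is trained to distinguish real samples from generated ones, while the generator G is trained to fool it; at the game's equilibrium, the generator's distribution matches the data distribution.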
For an example of the power of generative adversarial networks, see the results below: generated high-resolution realistic portraits and driving dashcam images.
Tutorial recordings
Schedule
10:00 Unpaired Image-to-Image Translation with CycleGAN [slides] [pdf] [video]
Taesung Park, UC Berkeley and Jun-Yan Zhu, MIT
10:30 Coffee Break
11:00 Can GANs actually learn the distribution? Some obstacles [slides] [video]
Sanjeev Arora, Princeton
11:45 Learning Disentangled Representations with an Adversarial Loss [slides] [video]
Emily Denton, NYU
12:15 Lunch break
15:00 Coffee break
Organizers
This tutorial was organized by Jun-Yan Zhu, Taesung Park, Mihaela Rosca, Phillip Isola, and Ian Goodfellow.
Contact the organizers at cvpr2018gantutorial@googlegroups.com