Do not miss the Facebook Live streaming of the workshop!

Introduction
In adversarial training, a set of machines learns together by pursuing competing goals. For instance, in Generative Adversarial Networks (GANs), a generator function learns to synthesize samples that best resemble some dataset, while a discriminator function learns to distinguish between samples drawn from the dataset and samples synthesized by the generator. GANs have emerged as a promising framework for unsupervised learning: GAN generators are able to produce images of unprecedented visual quality, while GAN discriminators learn features with rich semantics that lead to state-of-the-art semi-supervised learning. From a conceptual perspective, adversarial training is fascinating because it bypasses the need for loss functions in learning, and opens the door to new ways of regularizing (as well as fooling or attacking) learning machines.

In this one-day workshop, we invite scientists and practitioners interested in adversarial training to gather, discuss, and establish new research collaborations. The workshop will feature invited talks, a panel discussion, and contributed spotlights and posters.
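The generator/discriminator game described above can be sketched in a few lines of NumPy. The toy example below (an illustration written for this page, not code from the workshop) pits a linear generator G(z) = a*z + b against a logistic-regression discriminator on one-dimensional Gaussian data, alternating gradient updates and using the non-saturating generator loss; all names and hyperparameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Real data: samples from N(4, 1). Generator: G(z) = a*z + b, z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c), a logistic classifier.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    # --- Discriminator step: ascend E[log D(x)] + E[log(1 - D(G(z)))] ---
    x = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    # d/dw log σ(wx+c) = (1-σ)x; d/dw log(1-σ(wg+c)) = -σg.
    grad_w = np.mean((1 - d_real) * x) - np.mean(d_fake * g)
    grad_c = np.mean(1 - d_real) - np.mean(d_fake)
    w += lr * grad_w
    c += lr * grad_c

    # --- Generator step: ascend E[log D(G(z))] (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b
    d_fake = sigmoid(w * g + c)
    # Chain rule through G: d/da log D(G(z)) = (1 - D(G(z))) * w * z, etc.
    grad_a = np.mean((1 - d_fake) * w * z)
    grad_b = np.mean((1 - d_fake) * w)
    a += lr * grad_a
    b += lr * grad_b

samples = a * rng.normal(0.0, 1.0, 10000) + b
print(samples.mean())  # the generated mean drifts toward the data mean (4.0)
```

Note the two roles in the loop: the discriminator parameters follow the gradient of the two-sample classification objective, while the generator parameters follow the gradient of log D(G(z)) rather than -log(1 - D(G(z))), the standard trick to keep generator gradients alive early in training.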
The workshop will address a broad range of research topics related to adversarial training.
Call for papers
Paper submission is now closed (the deadline has passed).
This workshop is co-located with NIPS 2016 and will take place on Friday, December 9, 2016.

Schedule
09:00-09:30: Set up posters and welcome.
09:30-10:00: Ian Goodfellow (OpenAI) on Introduction to Generative Adversarial Networks
10:00-10:30: Soumith Chintala (Facebook AI Research) on How to train a GAN?
10:30-11:00: Coffee break
11:00-11:30: Arthur Gretton (University College London) on Learning features to compare distributions
11:30-12:00: Sebastian Nowozin (MSR) on Training Generative Neural Samplers using Variational Divergence Minimization
12:00-14:00: Lunch break
14:00-14:30: Aaron Courville (University of Montreal) on Adversarially Learned Inference (ALI) and BiGANs
14:30-15:00: Yann LeCun (Facebook AI Research) on Energy-Based Adversarial Training and Video Prediction
15:00-16:00: Panel Discussion with I. Goodfellow, S. Chintala, A. Gretton, S. Nowozin, A. Courville, Y. LeCun, and E. Denton.
16:00-16:30: Coffee break
16:30-17:30: Contributed spotlights and posters. The schedule of 4-minute spotlights is:
17:30 onwards: Additional posters and open discussion.
References

Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus. Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks. NIPS, 2015.
Jeff Donahue, Philipp Krähenbühl, Trevor Darrell. Adversarial Feature Learning. arXiv, 2016.
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, Aaron Courville. Adversarially Learned Inference. arXiv, 2016.
Gintare Karolina Dziugaite, Daniel M. Roy, Zoubin Ghahramani. Training generative neural networks via Maximum Mean Discrepancy optimization. arXiv, 2015.
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio. Generative Adversarial Networks. NIPS, 2014.
Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy. Explaining and Harnessing Adversarial Examples. ICLR, 2015.
Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, Alexander Smola. A Kernel Two-Sample Test. JMLR, 2012.
Diederik P. Kingma, Max Welling. Auto-Encoding Variational Bayes. ICLR, 2014.
Yujia Li, Kevin Swersky, Richard Zemel. Generative Moment Matching Networks. arXiv, 2015.
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, Brendan Frey. Adversarial Autoencoders. ICLR, 2016.
Michael Mathieu, Camille Couprie, Yann LeCun. Deep multi-scale video prediction beyond mean square error. ICLR, 2016.
Mehdi Mirza, Simon Osindero. Conditional Generative Adversarial Nets. arXiv, 2014.
Sebastian Nowozin, Botond Cseke, Ryota Tomioka. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization. arXiv, 2016.
Alec Radford, Luke Metz, Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. ICLR, 2016.
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee. Generative Adversarial Text to Image Synthesis. ICML, 2016.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen. Improved Techniques for Training GANs. arXiv, 2016.
Although a full discussion of these works is beyond the scope of this page, please note that Generative Adversarial Networks belong to a broader family of works describing ways to achieve unsupervised learning in neural networks. See, for instance, the 1990s works of Ralf Linsker, Terry Sanger, Barak Pearlmutter, Jürgen Schmidhuber, Sue Becker, Rich Zemel, Mike Mozer, Geoff Hinton, Christian Jutten, and Jeanny Herault, among others.
Organizers

David Lopez-Paz (Facebook AI Research)
Alec Radford (OpenAI)
Léon Bottou (Facebook AI Research)