NIPS 2016 Workshop on Adversarial Training

Arm wrestling machines (image ©2014-2016 Ociacia)

In adversarial training, a set of machines learns together by pursuing competing goals. For instance, in Generative Adversarial Networks (GANs) [5], a generator function learns to synthesize samples that best resemble some dataset, while a discriminator function learns to distinguish between samples drawn from the dataset and samples synthesized by the generator. GANs have emerged as a promising framework for unsupervised learning: GAN generators are able to produce images of unprecedented visual quality, while GAN discriminators learn features with rich semantics that lead to state-of-the-art semi-supervised learning [14]. From a conceptual perspective, adversarial training is fascinating because it bypasses the need for loss functions in learning, and opens the door to new ways of regularizing (as well as fooling or attacking) learning machines. In this one-day workshop, we invite scientists and practitioners interested in adversarial training to gather, discuss, and establish new research collaborations. The workshop will feature invited talks, a panel discussion, and contributed spotlights and posters.
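The two-player game described above can be made concrete with a tiny, self-contained sketch (an illustration for this page, not code from any of the cited papers): the "dataset" is draws from a 1D Gaussian N(4, 1), the generator is a linear map g(z) = a·z + b of noise z ~ N(0, 1), and the discriminator is a logistic regressor D(x) = sigmoid(w·x + c). All parameter names, learning rates, and step counts are illustrative assumptions.

```python
# Minimal adversarial training sketch with NumPy only.
# Generator and discriminator alternate gradient steps on competing objectives.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

a, b = 1.0, 0.0          # generator parameters: g(z) = a*z + b
w, c = 0.0, 0.0          # discriminator parameters: D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(5000):
    # --- discriminator step: minimize -log D(real) - log(1 - D(fake)) ---
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    gw = np.mean(-(1 - d_real) * real + d_fake * fake)  # dL_D/dw
    gc = np.mean(-(1 - d_real) + d_fake)                # dL_D/dc
    w, c = w - lr * gw, c - lr * gc

    # --- generator step: non-saturating loss -log D(fake) ---
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    g_out = -(1 - d_fake) * w                # dL_G/d(fake)
    ga, gb = np.mean(g_out * z), np.mean(g_out)  # chain rule through g(z)
    a, b = a - lr * ga, b - lr * gb

samples = a * rng.normal(0.0, 1.0, 10_000) + b
print(round(float(np.mean(samples)), 2))  # generator's output mean drifts toward the data mean of 4
```

With these choices the generator's output mean drifts toward the data mean, but a linear discriminator cannot penalize a variance mismatch, which hints at why the stability and expressiveness of adversarial optimization (one of the workshop topics above) is a research area in its own right.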

Among the research topics to be addressed by the workshop are
  • Novel theoretical insights on adversarial training
  • New methods and stability improvements for adversarial optimization
  • Adversarial training as a proxy to unsupervised learning of representations
  • Regularization and attack schemes based on adversarial perturbations
  • Adversarial model evaluation
  • Adversarial inference models
  • Novel applications of adversarial training

Call for papers

Paper submission is now closed (the deadline has passed).
  • We welcome anonymous, NIPS-formatted submissions.
  • All accepted submissions will be presented as posters.
  • All posters must be portrait 36 x 48 in. (91cm x 122cm).
  • Selected accepted submissions will be awarded a spotlight presentation.
  • Everyone is encouraged to submit!

This workshop is co-located with NIPS 2016 and will take place on Friday, December 9, 2016.

09:00-09:30: Set up posters and welcome.

09:30-10:00: Ian Goodfellow (OpenAI) on Introduction to Generative Adversarial Networks
10:00-10:30: Soumith Chintala (Facebook AI Research) on How to train a GAN?
10:30-11:00: Coffee break
11:00-11:30: Arthur Gretton (University College London) on Learning features to compare distributions
11:30-12:00: Sebastian Nowozin (MSR) on Training Generative Neural Samplers using Variational Divergence Minimization
12:00-14:00: Lunch break
14:00-14:30: Aaron Courville (University of Montreal) on Adversarially Learned Inference (ALI) and BiGANs
14:30-15:00: Yann LeCun (Facebook AI Research) on Energy-Based Adversarial Training and Video Prediction
15:00-16:00: Panel Discussion with I. Goodfellow, S. Chintala, A. Gretton, S. Nowozin, A. Courville, Y. LeCun, and E. Denton.
16:00-16:30: Coffee break
16:30-17:30: Contributed spotlights and posters. The schedule of 4-minute spotlights is:
  1. Pfau and Vinyals. Connecting Generative Adversarial Networks and Actor-Critic Methods
  2. Mohamed and Lakshminarayanan. Learning in Implicit Generative Models
  3. Finn, Christiano, Abbeel and Levine. A Connection Between GANs, Inverse Reinforcement Learning, and Energy-Based Models
  4. Perarnau, Van De Weijer, Raducanu and Álvarez. Invertible Conditional GANs for image editing
  5. Odena, Olah and Shlens. Conditional Image Synthesis with Auxiliary Classifier GANs
  6. Metz, Poole, Pfau and Sohl-Dickstein. Unrolled Generative Adversarial Networks
  7. Luc, Couprie, Chintala and Verbeek. Semantic Segmentation using Adversarial Networks
  8. Arici and Celikyilmaz. Associative Adversarial Networks
  9. Narodytska and Kasiviswanathan. Simple Black-Box Adversarial Perturbations for Deep Networks
  10. Tabacof, Tavares and Valle. Adversarial Images for Variational Autoencoders
  11. Wu, Burda, Salakhutdinov and Grosse. On the Quantitative Analysis of Decoder-Based Generative Models
  12. Miyato, Dai and Goodfellow. Adversarial Training Methods for Semi-Supervised Text Classification
17:30 onwards: Additional posters and open discussion.
    Recent References

    [1] Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus. Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks. NIPS, 2015.
    [2] Jeff Donahue, Philipp Krähenbühl, Trevor Darrell. Adversarial Feature Learning. arXiv, 2016.
    [3] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, Aaron Courville. Adversarially Learned Inference. arXiv, 2016.
    [4] Gintare Karolina Dziugaite, Daniel M. Roy, Zoubin Ghahramani. Training generative neural networks via Maximum Mean Discrepancy optimization. arXiv, 2015.
    [5] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio. Generative Adversarial Networks. NIPS, 2014.
    [6] Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy. Explaining and Harnessing Adversarial Examples. ICLR, 2015.
    [7] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, Alexander Smola. A Kernel Two-Sample Test. JMLR, 2012.
    [8] Diederik P Kingma, Max Welling. Auto-Encoding Variational Bayes. ICLR, 2014.
    [9] Yujia Li, Kevin Swersky, Richard Zemel. Generative Moment Matching Networks. arXiv, 2015.
    [10] Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, Brendan Frey. Adversarial Autoencoders. ICLR, 2016.
    [11] Michael Mathieu, Camille Couprie, Yann LeCun. Deep multi-scale video prediction beyond mean square error. ICLR, 2016.
    [12] Mehdi Mirza, Simon Osindero. Conditional Generative Adversarial Nets. arXiv, 2014.
    [13] Sebastian Nowozin, Botond Cseke, Ryota Tomioka. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization. arXiv, 2016.
    [14] Alec Radford, Luke Metz, Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. ICLR, 2016.
    [15] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, Honglak Lee. Generative Adversarial Text to Image Synthesis. ICML, 2016.
    [16] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen. Improved Techniques for Training GANs. arXiv, 2016.


    Although a full discussion of these works is not possible on this page, please note that Generative Adversarial Networks belong to a broader family of works describing ways to achieve unsupervised learning in neural networks. See, for instance, the 1990s work of Ralf Linsker, Terry Sanger, Barak Pearlmutter, Jürgen Schmidhuber, Sue Becker, Rich Zemel, Mike Mozer, Geoff Hinton, Christian Jutten, and Jeanny Hérault, among others.

    Organizers

    David Lopez-Paz (Facebook AI Research)
    Alec Radford (OpenAI)
    Léon Bottou (Facebook AI Research)
