Science meets Engineering of Deep Learning

Goal of SEDL

We anticipate that the transition from experimental mystery to rigorous resolution will occur in multiple stages, one of which should involve bringing together diverse groups working toward seemingly different goals. While the goals of practitioners and theoreticians may not appear aligned on the surface, collaboration between the two has great potential to further both agendas in the long run. The goal of this workshop is to support the transition toward such collaboration.


Workshop Schedule

The workshop will take place on Saturday, December 14th, 2019, in room West 121 + 122 at Canada Place, Vancouver.


08:00 - 08:15  Welcoming remarks and introduction

08:15 - 09:45  Session 1 - Theory
               08:15-08:35 Surya Ganguli: An analytic theory of generalization dynamics and transfer learning in deep linear networks
               08:35-08:55 Yasaman Bahri: Tractable limits for deep networks: an overview of the large width regime
               08:55-09:15 Florent Krzakala: Learning with "realistic" synthetic data
               09:15-09:45 Theory Panel Discussion: Surya Ganguli, Yasaman Bahri, Florent Krzakala
               Moderator: Lenka Zdeborova
09:45 - 10:30  Coffee break and posters
10:30 - 12:00  Session 2 - Vision
               10:30-10:50 Carl Doersch: Self-supervised visual representation learning: putting patches into context
               10:50-11:10 Raquel Urtasun: Science and Engineering for Self-driving
               11:10-11:30 Sanja Fidler: TBA
               11:30-12:00 Vision Panel Discussion: Raquel Urtasun, Carl Doersch, Sanja Fidler
               Moderator: Natalia Neverova
12:00 - 14:00  Lunch break and posters
14:00 - 15:30  Session 3 - Further Applications
               14:00-14:20 Douwe Kiela: Benchmarking Progress in AI: A New Benchmark for Natural Language Understanding
               14:20-14:40 Audrey Durand: Trading off theory and practice: A bandit perspective
               14:40-15:00 Kamalika Chaudhuri: A Three Sample Test to Detect Data Copying in Generative Models
               15:00-15:30 Further Applications Panel Discussion: Audrey Durand, Douwe Kiela, Kamalika Chaudhuri
               Moderator: Yann Dauphin
15:30 - 16:15  Coffee break and posters
16:15 - 17:10  Panel - The Role of Communication at Large
               Aparna Lakshmiratan, Jason Yosinski, Been Kim, Surya Ganguli, Finale Doshi-Velez
               Moderator: Zack Lipton
17:10 - 18:00  Contributed Session - Spotlight Submissions
               17:10 - 17:20 Non-Gaussian Processes and Neural Networks at Finite Widths, Sho Yaida (Facebook AI Research)
               17:20 - 17:30 Training BatchNorm and Only BatchNorm, Jonathan Frankle (MIT); David J Schwab (ITS, CUNY Graduate Center); Ari S Morcos (Facebook AI Research (FAIR))
               17:30 - 17:40 Asymptotics of Wide Networks from Feynman Diagrams, Guy Gur-Ari (Google); Ethan Dyer (Google)
               17:40 - 17:50 Fantastic Generalization Measures and Where to Find Them, YiDing Jiang (Google); Behnam Neyshabur (Google); Dilip Krishnan (Google); Hossein Mobahi (Google Research); Samy Bengio (Google Research, Brain Team)
               17:50 - 18:00 Complex Transformer: A Framework for Modeling Complex-Valued Sequence, Martin Ma (Carnegie Mellon University); Muqiao Yang (Carnegie Mellon University); Dongyu Li (Carnegie Mellon University); Yao-Hung Tsai (Carnegie Mellon University); Ruslan Salakhutdinov (Carnegie Mellon University)


Contributed Session - Spotlight Submissions


Abstracts for the contributed talks can be found here.


Contributed Posters and Reviewers

A detailed list of contributed posters can be found here.

We would like to thank our reviewers, who helped us select the papers for our workshop.


Advisors

  • Theory Session advisors: Joan Bruna, Adji Bousso Dieng
  • Vision Session advisors: Ilija Radosavovic, Riza Alp Guler
  • Further Applications Session advisors: Dilan Gorur, Orhan Firat
  • Panel advisors: Michela Paganini, Anima Anandkumar