Science meets Engineering of Deep Learning

Goal of SEDL

We anticipate that the transition from experimental mystery to rigorous resolution will occur in multiple stages, one of which should involve bringing together diverse groups working toward seemingly different goals. While the goals of the practitioner and the theoretician may not appear aligned on the surface, a collaboration between the two has great potential to further both agendas in the long run. The goal of this workshop is to support the transition to such collaboration.


Workshop Schedule

The workshop will take place on Saturday, December 14th, 2019, in Vancouver.


08:00 - 08:15  Welcoming remarks and introduction
08:15 - 09:45  Session 1 - Theory
               Florent Krzakala, Yasaman Bahri, Surya Ganguli
               Moderator: Lenka Zdeborova
09:45 - 10:30  Coffee break and posters
10:30 - 12:00  Session 2 - Vision
               Carl Doersch, Raquel Urtasun, Sanja Fidler
               Moderator: Natalia Neverova
12:00 - 14:00  Lunch break and posters
14:00 - 15:30  Session 3 - Further Applications
               Audrey Durand, Douwe Kiela, Kamalika Chaudhuri
               Moderator: Yann Dauphin
15:30 - 16:15  Coffee break and posters
16:15 - 17:10  Panel - The Role of Communication at Large
               Aparna Lakshmiratan, Jason Yosinski, Been Kim, Surya Ganguli, Finale Doshi-Velez
               Moderator: Zack Lipton
17:10 - 18:00  Contributed Session - Spotlight Submissions


Contributed Session - Spotlight Submissions

  • Complex Transformer: A Framework for Modeling Complex-Valued Sequence, Martin Ma (Carnegie Mellon University); Muqiao Yang (Carnegie Mellon University); Dongyu Li (Carnegie Mellon University); Yao-Hung Tsai (Carnegie Mellon University); Ruslan Salakhutdinov (Carnegie Mellon University)
  • Non-Gaussian Processes and Neural Networks at Finite Widths, Sho Yaida (Facebook AI Research)
  • Asymptotics of Wide Networks from Feynman Diagrams, Guy Gur-Ari (Google); Ethan Dyer (Google)
  • Fantastic Generalization Measures and Where to Find Them, YiDing Jiang (Google); Behnam Neyshabur (Google); Dilip Krishnan (Google); Hossein Mobahi (Google Research); Samy Bengio (Google Research, Brain Team)
  • Training Batchnorm and Only Batchnorm, Jonathan Frankle (MIT); David J Schwab (ITS, CUNY Graduate Center); Ari S Morcos (Facebook AI Research (FAIR))

Abstracts for the contributed talks can be found here.


Contributed Posters and Reviewers

A detailed list of contributed posters can be found here.

We would like to thank our reviewers, who helped us select the papers for the workshop.


Advisors

  • Theory Session advisors: Joan Bruna, Adji Bousso Dieng
  • Vision Session advisors: Ilija Radosavovic, Riza Alp Guler
  • Further Applications Session advisors: Dilan Gorur, Orhan Firat
  • Panel advisors: Michela Paganini, Anima Anandkumar