About


Deep learning has led to significant breakthroughs in many applications in computer vision and machine learning. However, little is known about the theory behind this successful paradigm. This workshop will discuss recent advances in the theoretical understanding of deep networks.


ICML 2018

The Theory of Deep Learning 2018 workshop will be held as part of the 35th International Conference on Machine Learning (ICML) at Stockholmsmässan, Stockholm, Sweden. Please check the main conference website for information about registration, schedule, venue, and travel arrangements.


Location

Victoria room at Stockholmsmässan, Stockholm, Sweden  


Invited Speakers



Important dates 



Call for Extended Abstract Submissions

  • We welcome submissions of extended abstracts on recent work in deep learning theory.
  • Accepted works will be presented at the workshop, either in the poster session or as contributed talks.
  • No proceedings will be published as part of this workshop.
  • The submission website is now open.


Submission Instructions

  • Submissions should be non-anonymized short papers up to 2 pages (including references) in PDF format using this template.
  • We strongly recommend including in the abstract a reference to an extended version of the work that contains more details.
  • Submissions are handled through the CMT system. Please note that at least one coauthor of each accepted paper will be expected to attend the workshop in person to present a poster or give a contributed talk.



Accepted Papers

  • An explicit expression for the global minimizer network
  • Deep Neural Networks Learn Non-Smooth Functions Effectively
  • Understanding Deep Neural Networks with Rényi's α-entropy Functional
  • Depth Efficiency of Deep Mixture Models and Sum-Product Networks using Tensor Analysis 
  • Universal approximations of invariant maps by neural networks
  • Information based regularization for deep learning
  • Loss-Calibrated Approximate Inference in Bayesian Neural Networks
  • A Compressed Sensing View of Unsupervised Text Embeddings, Bag-of-n-Grams, and LSTMs
  • Difficulties in Optimising Feedforward Neural Networks
  • Homotopic deep recurrent neural networks for approximating meta-heuristics
  • On the Analysis of Trajectories of Gradient Descent in the Optimization of Deep Neural Networks

Organizers

  • Rene Vidal, Johns Hopkins University
  • Joan Bruna, New York University
  • Raja Giryes, Tel Aviv University