About
ICML 2018
The Theory of Deep Learning 2018 workshop will be held as part of the 35th International Conference on Machine Learning (ICML) at Stockholmsmässan, Stockholm, Sweden. Please check the main conference website for information about registration, schedule, venue, and travel arrangements.
Location
Victoria room at Stockholmsmässan, Stockholm, Sweden
Invited Speakers
- Stefano Soatto, UCLA
- Michael Elad, Technion - IIT
- Gitta Kutyniok, TU-Berlin
- Julien Mairal, Inria Grenoble
- Raman Arora, Johns Hopkins University
- Jason Lee, USC
Important Dates
- Extended abstract submission - May 1, 2018 (23:59 Anywhere on Earth)
- Notification to authors - May 15, 2018
- Workshop date - July 14, 2018
Call for Extended Abstract Submission
- We welcome the submission of extended abstracts on recent work in deep learning theory.
- Accepted works will be presented at the workshop either in the poster session or as contributed talks.
- No proceedings will be published as part of this workshop.
- The submission website is now open.
Submission Instructions
- Submissions should be non-anonymized short papers up to 2 pages (including references) in PDF format using this template.
- Including a reference in the abstract to an extended version of the work with more details is highly recommended.
- Submissions are handled through the CMT system. Please note that at least one coauthor of each accepted paper will be expected to attend the workshop in person to present a poster or give a contributed talk.
Accepted Papers
- An explicit expression for the global minimizer network
- Deep Neural Networks Learn Non-Smooth Functions Effectively
- Understanding Deep Neural Networks with Rényi's α-entropy Functional
- Depth Efficiency of Deep Mixture Models and Sum-Product Networks using Tensor Analysis
- Universal approximations of invariant maps by neural networks
- Information based regularization for deep learning
- Loss-Calibrated Approximate Inference in Bayesian Neural Networks
- A Compressed Sensing View of Unsupervised Text Embeddings, Bag-of-n-Grams, and LSTMs
- Difficulties in Optimising Feedforward Neural Networks
- Homotopic deep recurrent neural networks for approximating meta-heuristics
- On the Analysis of Trajectories of Gradient Descent in the Optimization of Deep Neural Networks
Organizers