Date


November 24, 2016


Location


Room 101B, Taipei International Convention Center (TICC), Taiwan


Description


The past five years have seen a dramatic increase in the performance of recognition systems, due to the introduction of deep neural networks for feature learning and classification. However, the theoretical foundation for this success remains elusive. This tutorial will present some of the theoretical results developed for deep neural networks that aim to provide a mathematical justification for properties such as approximation capability, convergence, global optimality, invariance, stability of the learned representations, and generalization error. In addition, it will discuss the implications of the developed theory for the practical training of neural networks.


The tutorial will start with the theory for neural networks from the early 90s (including the well-known results of Hornik et al. and Cybenko). It will then move to the recent theoretical findings established for deep learning in the past five years. The practical considerations that follow from the theory will also be discussed.


Content 

Tutorial Slides


Schedule 


This will be a three-hour morning tutorial.

8:45 - 9:30       History and introduction to deep learning
9:30 - 10:30     Existing theory for deep learning
10:30 - 11:00   Coffee break
11:00 - 12:30   Data-structure-based theory for deep learning




Speaker


Raja Giryes, Tel Aviv University  



