[Speaker] Takashi Mori ( RIKEN )

[Date] June 22nd, 16:00 - (Japan time)

[Style] Webinar via Zoom

[Title] Theoretical Challenges in Deep Learning

Abstract:

Deep learning has achieved unparalleled success in various tasks of artificial intelligence, such as image classification and speech recognition. Remarkably, in modern machine learning applications, impressive generalization performance has been observed in the overparameterized regime, in which the number of parameters in the network is much larger than the number of training data samples. Contrary to what classical learning theory teaches, an overparameterized network can fit even random labels, and yet the same network generalizes very well on real data without serious overfitting. We do not yet have a general theory that explains why deep learning works so well.
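
The random-labels phenomenon is easy to reproduce in a toy setting. Below is a minimal, hypothetical sketch (not from the talk or [1]): it uses a random ReLU-feature model as a stand-in for a deep network, with illustrative sizes chosen so that the number of parameters far exceeds the number of samples. The minimum-norm least-squares fit interpolates purely random labels, exactly the behavior that classical learning theory would flag as overfitting.

```python
# Minimal sketch of interpolating random labels in the overparameterized regime.
# Assumptions: random ReLU features stand in for a trained deep network;
# all sizes are illustrative, not taken from the talk or reference [1].
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_inputs, n_params = 50, 10, 2000  # n_params >> n_samples

X = rng.standard_normal((n_samples, n_inputs))
y_random = rng.choice([-1.0, 1.0], size=n_samples)  # completely random labels

# Random-features map: phi(x) = relu(W x), a crude proxy for a wide network
W = rng.standard_normal((n_params, n_inputs)) / np.sqrt(n_inputs)
Phi = np.maximum(X @ W.T, 0.0)  # shape (n_samples, n_params)

# lstsq returns the minimum-norm solution for this underdetermined system,
# so the model interpolates the random labels exactly (up to sign).
theta, *_ = np.linalg.lstsq(Phi, y_random, rcond=None)

train_error = np.mean(np.sign(Phi @ theta) != y_random)
print(f"training error on random labels: {train_error:.3f}")  # ~0.0
```

With 2000 features and only 50 samples, the feature matrix almost surely has full row rank, so the training error on the random labels is driven to zero, illustrating that interpolation alone cannot be what distinguishes networks that generalize from those that memorize.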


In my talk, I will review recent theoretical findings and the remaining fundamental issues in deep learning, and then present our attempt [1] to understand why and when deep learning is so powerful.


[1] Takashi Mori and Masahito Ueda, arXiv:2005.12488.