About

Neural networks were originally introduced in 1943 by McCulloch and Pitts as an approach to developing learning algorithms by mimicking the human brain. The key goal at that time was to introduce a theory of artificial intelligence. However, the limited amount of available data and the lack of high-performance computers made the training of deep neural networks, i.e., networks with many layers, infeasible.

Today, massive amounts of training data are available, complemented by tremendously increased computing power, allowing for the first time the application of deep learning algorithms. It is for this reason that deep neural networks have recently seen an impressive comeback. Spectacular applications of deep learning include AlphaGo, which for the first time enabled a computer to beat the world's top players in the game of Go, a game far more complex than chess, and the speech recognition systems available on every smartphone these days, to name just a few. Even more so, we currently witness how algorithms based on deep neural networks are infusing numerous aspects of the public sector, for instance being used to prescreen job applications, and revolutionizing the healthcare industry. In fact, the U.S. Food and Drug Administration (FDA) has already approved the marketing of the first medical device based on such methodologies for detecting diabetic retinopathy.

A similarly strong impact can be observed on science itself. Deep-learning-based approaches have proven very useful in certain problem settings, in particular for solving ill-posed inverse problems, predominantly in imaging, sometimes already leading to state-of-the-art algorithms. Lately, more and more successes in solving partial differential equations have also been reported. Typically, the best performance is observed when model-based approaches are combined with, rather than entirely replaced by, deep learning methods. It is generally believed that we are currently witnessing a substantial paradigm change across mathematical methodologies, in particular in applied mathematics.

However, most of the related research is still empirically driven, and a sound theoretical foundation is largely missing. This is not only a tremendous problem from a scientific viewpoint, but particularly critical for sensitive applications such as those in the healthcare sector. Thus there is a pressing need for a mathematics of deep learning.

Aiming to derive a mathematical foundation of deep learning, this lecture series provides an introduction to the main mathematical questions and concepts of deep neural networks and their training within two realms:

  • Theoretical foundations of deep learning independent of a particular application

  • Theoretical analysis of the potential and the limitations of deep learning for mathematical methodologies, in particular, for inverse problems and partial differential equations.

Organisers

University of Cambridge

University of Cambridge

Local organising committee


Marcello Carioni

University of Cambridge

Subhadip Mukherjee

University of Cambridge

Supporters

This lecture series is part of the London Mathematical Society's Invited Lecture Series, and the Society is also the main supporter of this event.