MaLGa & University of Genoa, Italy
Machine Learning for Inverse Problems
Description: The focus of this course will be the use of machine learning methods for solving inverse problems. In the first part, we will discuss inverse problems, with particular emphasis on imaging problems, as well as the classical regularization-based approaches and their limitations. In the second part, we will show how machine learning, and in particular deep learning, can be used to leverage prior information available through data. Possible approaches include end-to-end reconstructions and learned regularization (typically supervised), generative models (unsupervised), and untrained networks as in the deep image prior. The theoretical discussions will be complemented by a lab session, mostly focusing on the comparison between traditional and deep learning methods.
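To make the classical baseline concrete, here is a minimal NumPy sketch of Tikhonov regularization for a toy linear inverse problem y = Ax + noise; the operator, noise level, and regularization parameter are illustrative assumptions, not the course's actual lab material. Learned approaches can be seen as replacing the hand-crafted penalty ||x||^2 with prior information extracted from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem y = A x + noise, with an ill-conditioned A
# (an illustrative stand-in for, e.g., a blurring operator in imaging).
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)                    # rapidly decaying singular values
A = U @ np.diag(s) @ V.T

x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = A @ x_true + 1e-4 * rng.standard_normal(n)

# Naive inversion amplifies the noise through the small singular values.
x_naive = np.linalg.solve(A, y)

# Tikhonov regularization: minimize ||A x - y||^2 + lam * ||x||^2,
# solved via the normal equations (A^T A + lam I) x = A^T y.
lam = 1e-6
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print("relative error, naive   :", np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true))
print("relative error, Tikhonov:", np.linalg.norm(x_tik - x_true) / np.linalg.norm(x_true))
```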
Norwegian University of Science and Technology (NTNU), Norway
Deep learning from the point of view of numerical analysis
Description: Deep neural networks have recently been interpreted as discretisations of an optimal control problem subject to an ordinary differential equation constraint. Much of the progress in deep learning has been based on heuristic exploration, but there is a growing effort to mathematically understand the structure of existing deep learning methods and to design new approaches that preserve (geometric) structure in neural networks. The (discrete) optimal control point of view on neural networks offers an interpretation of deep learning from a numerical analysis perspective and opens the way to mathematical insight [10, 9, 2]. We discuss a number of interesting directions of current and future research in structure-preserving deep learning [3]. Some deep neural networks can be designed to have desirable properties such as invertibility and group equivariance, or can be adapted to problems with manifold-valued data. Equivariant neural networks are effective in reducing the amount of data needed for solving certain imaging problems [4]. We show how classical stability results for ODEs can be used to construct contractive neural network architectures, so that neural networks can be designed with guaranteed stability properties. This can be used to ensure robustness against adversarial attacks and to obtain converging “Plug-and-Play” algorithms for inverse problems in imaging [3, 7, 12]. We consider extensions of these ideas to the manifold-valued case and discuss B-stability on manifolds [1]. We also consider applications of deep learning to mechanical systems, for learning Hamiltonians on manifolds and from noisy data [6, 11] and for learning PDE solutions [8]. We show how similar ideas can be used to compute optimal parametrisations in shape analysis [5].
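As a minimal illustration of the dynamical-systems viewpoint (the layer width, activation, and step size below are assumptions made for the sketch, not taken from the lectures), a residual network is exactly the forward Euler discretization x_{k+1} = x_k + h f(x_k, theta_k) of the ODE dx/dt = f(x, theta(t)):

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, W, b):
    """Layer vector field f(x, theta) = tanh(W x + b)."""
    return np.tanh(W @ x + b)

# A ResNet with L residual blocks is the forward Euler discretization
# x_{k+1} = x_k + h * f(x_k, theta_k) of the ODE dx/dt = f(x, theta(t)).
def resnet_forward(x0, weights, biases, h):
    x = x0
    for W, b in zip(weights, biases):
        x = x + h * f(x, W, b)   # one Euler step = one residual block
    return x

d, L = 4, 10                     # state dimension, number of layers
weights = [0.1 * rng.standard_normal((d, d)) for _ in range(L)]
biases = [np.zeros(d) for _ in range(L)]

x0 = rng.standard_normal(d)
print(resnet_forward(x0, weights, biases, h=1.0 / L))
```

In this picture, stability properties of the continuous flow, e.g. contractivity when the symmetric part of the Jacobian of f is negative definite, can be inherited by a suitable discretization; this is the route to the 1-Lipschitz architectures discussed in Part 2.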
Part 1: Introduction, deep learning as optimal control, dynamical systems and deep neural networks. Equivariant neural networks.
Part 2: Adversarial attacks, stability of ODEs and applications to 1-Lipschitz networks and converging “Plug-and-Play” algorithms for imaging. B-stability on manifolds and applications.
Part 3: Deep learning of diffeomorphisms for optimal shape reparametrization. Applications of deep learning to mechanical systems.
Part 4: Learning Hamiltonians on manifolds and from noisy data, and learning PDEs from pixel data.
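As a toy version of the last point (the quadratic ansatz and the flat phase space are simplifying assumptions; the lectures treat the manifold-valued case), a Hamiltonian can be recovered from noisy phase-space data by fitting the coefficients of a basis expansion to Hamilton's equations qdot = dH/dp, pdot = -dH/dq by least squares:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy phase-space data from the harmonic oscillator H(q, p) = (q^2 + p^2) / 2,
# whose exact dynamics are qdot = p, pdot = -q.
N = 200
q = rng.uniform(-1, 1, N)
p = rng.uniform(-1, 1, N)
noise = 0.05
qdot = p + noise * rng.standard_normal(N)
pdot = -q + noise * rng.standard_normal(N)

# Quadratic ansatz H = c0*q^2 + c1*p^2 + c2*q*p, so that
#   dH/dp = 2*c1*p + c2*q  and  dH/dq = 2*c0*q + c2*p.
# Hamilton's equations qdot = dH/dp, pdot = -dH/dq are linear in c:
# build the stacked least-squares system and solve for c.
rows_q = np.column_stack([np.zeros(N), 2 * p, q])      # qdot =  dH/dp
rows_p = np.column_stack([-2 * q, np.zeros(N), -p])    # pdot = -dH/dq
M = np.vstack([rows_q, rows_p])
rhs = np.concatenate([qdot, pdot])

c, *_ = np.linalg.lstsq(M, rhs, rcond=None)
print("recovered coefficients (expect ~[0.5, 0.5, 0.0]):", c)
```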
References are available here.
University of Pisa, Italy
Randomized matrix computations: themes and variations
Description: This short course explores several ways in which probability can be used to design algorithms in numerical linear algebra. Each design template is illustrated by its application to several computational problems. We will cover a range of topics, including stochastic trace estimation, linear algebra algorithms with random initialization, and randomized subspace embeddings for dimensionality reduction. Special attention will be given to randomized methods for low-rank approximation of symmetric and non-symmetric matrices, with emphasis on the theoretical guarantees of these algorithms. In the hands-on lab session, participants will implement selected algorithms and explore their performance in practice.
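To fix ideas, here is a minimal NumPy sketch of one representative method, a basic randomized SVD built from a Gaussian sketch and the randomized range finder; the test matrix, target rank, and oversampling parameter are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def randomized_svd(A, rank, oversample=10):
    """Basic randomized SVD via a Gaussian sketch and range finder."""
    m, n = A.shape
    # Sketch the range of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for range(A Omega)
    # Project onto that basis and take an exact SVD of the small matrix.
    B = Q.T @ A
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], s[:rank], Vt[:rank]

# Test on a matrix with rapidly decaying singular values.
m, n, r = 400, 300, 20
A = (rng.standard_normal((m, r)) * np.logspace(0, -3, r)) @ rng.standard_normal((r, n))
U, s, Vt = randomized_svd(A, rank=r)
err = np.linalg.norm(A - U @ (s[:, None] * Vt)) / np.linalg.norm(A)
print("relative approximation error:", err)
```

The extra oversampling columns make the Gaussian sketch capture the target range with high probability; guarantees of this kind are the theoretical emphasis of the course.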
University of Novi Sad, Serbia
First order methods in stochastic optimization
Description: The course will cover some of the basic concepts of stochastic optimization. In the first part of the course, some state-of-the-art stochastic gradient and gradient-like methods will be discussed. The second part of the course will be dedicated to novel contributions in stochastic gradient-like methods, emphasizing the benefits and challenges of introducing spectral coefficients into the stochastic framework. The accompanying exercises will consist of implementing some of the aforementioned methods in MATLAB, testing the influence of different parameters on the methods’ behavior, and discussing the obtained results.
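Although the lab itself uses MATLAB, the following NumPy sketch conveys the flavor of one such method: a stochastic gradient iteration with a spectral, Barzilai-Borwein-type step length on a least-squares problem. The problem, batch size, and step-size safeguards are assumptions made for illustration, not the course's actual exercises.

```python
import numpy as np

rng = np.random.default_rng(4)

# Least-squares problem f(x) = (1/2N) ||A x - b||^2, written as a finite sum.
N, d = 1000, 20
A = rng.standard_normal((N, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(N)

def minibatch_grad(x, idx):
    Ai = A[idx]
    return Ai.T @ (Ai @ x - b[idx]) / len(idx)

x = np.zeros(d)
x_old, g_old = None, None
batch, alpha = 32, 0.01            # mini-batch size, initial step size

for k in range(500):
    idx = rng.choice(N, batch, replace=False)
    g = minibatch_grad(x, idx)
    if g_old is not None:
        # Spectral (Barzilai-Borwein) step alpha = s^T s / s^T y with
        # s = x_k - x_{k-1}, y = g_k - g_{k-1}, safeguarded to stay positive.
        s_vec, y_vec = x - x_old, g - g_old
        sy = s_vec @ y_vec
        if sy > 1e-12:
            alpha = np.clip(s_vec @ s_vec / sy, 1e-4, 1.0)
    x_old, g_old = x.copy(), g.copy()
    x = x - alpha * g

print("final distance to x_true:", np.linalg.norm(x - x_true))
```

The clipping interval safeguards the spectral step against the noise in mini-batch gradients, one of the challenges of carrying spectral coefficients into the stochastic framework.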
Who: This applies only to PhD students who are required by their PhD program to pass a final assessment in order to obtain recognition of this activity (i.e., formal recognition of 6 ECTS). All other participants, i.e., those who are not PhD students and/or do not need formal recognition of credits, will receive a certificate of attendance upon request.
What: The final assessment consists of an individual written report of at most 5 pages (including references and figures). The report should focus on one of the topics presented by the lecturers and discuss how it is relevant, and could potentially contribute, to the research questions of one's PhD project. In particular, it should discuss how the proposed methodology is feasible and appropriate, and the anticipated benefits of attending this event for making progress towards one's PhD project. Taking the final assessment requires (active) attendance of at least 80% of the activities.
When: The report must be submitted as an email attachment sent to the organizers of the event (contact emails). The deadline for returning the report is fixed at 1 week after the end of the event: 31 January at 23:59 CET. The final exam grades will be released at the latest 3 weeks later.