The meeting will host the following four minicourses:

Overview of numerical methods in optimal transport. L. Dieci and D. Omarov

Abstract: In these five lectures, we present the main numerical methods used to solve optimal transport problems.
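To give a flavor of the kind of method the lectures will cover, here is a minimal sketch of one widely used algorithm for entropically regularized optimal transport, the Sinkhorn algorithm. The data below (three points on a line, uniform marginals, squared-distance cost) and all parameter values are illustrative choices, not taken from the course material.

```python
import math

def sinkhorn(C, a, b, eps=1.0, iters=500):
    """Entropic-regularized optimal transport via Sinkhorn iterations.

    C: cost matrix (list of lists); a, b: source/target marginal weights.
    Returns the transport plan P = diag(u) K diag(v) with K = exp(-C/eps).
    """
    n, m = len(a), len(b)
    K = [[math.exp(-C[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        # Alternate diagonal scalings so the marginals of P match a and b.
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Three points at 0, 1, 2 with uniform weights and cost |x_i - x_j|^2.
a = b = [1.0 / 3.0] * 3
C = [[(i - j) ** 2 for j in range(3)] for i in range(3)]
P = sinkhorn(C, a, b)
```

After the final scaling step, the column sums of P match b exactly, and the row sums approach a as the iterations converge.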

Lectures Plan:

Each numbered item below corresponds roughly to a little less than one lecture.

Physics-based and data-driven algorithms for partial differential equations, with applications. A. Quarteroni

Abstract: Effective problem solving begins with precise problem formulation, highlighting the importance of the initial problem-setting phase. Without a clearly defined problem, identifying suitable tools and techniques for resolution becomes arduous and often futile. This transition from problem setting to problem solving is pivotal within the broader framework of knowledge advancement. Despite the remarkable progress of AI tools, they remain reliant on the groundwork laid by human intelligence. Mathematicians, leveraging their adeptness in discerning patterns and relationships within data and variables, play a crucial role during this phase.

This course will introduce fundamental mathematical concepts encompassing both traditional machine learning and scientific machine learning. The latter offers an optimal platform for the harmonious fusion of problem setting and problem solving, bolstered by profound domain expertise. Throughout the course, emphasis will be placed on a reference application centered around the development of a mathematical simulator for cardiac function.

Lectures Plan:

Stochastic optimization for machine learning. S. Bellavia and M. Yousefi

Abstract: We present and discuss gradient-based optimization methods for solving minimization problems arising in machine learning applications. The methods analysed employ stochastic estimators of the objective function's gradient. We will focus on the stochastic gradient method and some of its widely used adaptive variants. We also present the main ingredients of machine learning models, including neural networks, and show the role played by automatic differentiation in computing the stochastic gradient. Finally, we introduce modern adaptive stochastic optimization methods, which have the potential to offer significant computational savings when training large-scale systems.
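As a minimal sketch of the stochastic gradient method described above, the following code minimizes a one-dimensional least-squares objective using gradient estimates computed on random mini-batches. The problem, the hand-derived gradient, and all parameter values are illustrative assumptions (the course works with general models and automatic differentiation).

```python
import random

def sgd_least_squares(xs, ys, lr=0.05, batch=4, epochs=200, seed=0):
    """Minimize f(w) = (1/N) * sum_i (w*x_i - y_i)^2 with mini-batch SGD.

    At each step the full gradient is replaced by a stochastic estimator
    computed on a random mini-batch of the data.
    """
    rng = random.Random(seed)
    data = list(zip(xs, ys))
    w = 0.0
    for _ in range(epochs):
        sample = rng.sample(data, batch)
        # Stochastic gradient: (2/|B|) * sum_{(x,y) in B} (w*x - y) * x
        g = 2.0 / batch * sum((w * x - y) * x for x, y in sample)
        w -= lr * g
    return w

# Noise-free data from the model y = 3*x, so the iterates approach w = 3.
xs = [0.1 * i for i in range(1, 21)]
ys = [3.0 * x for x in xs]
w_hat = sgd_least_squares(xs, ys)
```

In practice the hand-coded gradient above is replaced by automatic differentiation, and the fixed learning rate by the adaptive schemes discussed in the course.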

Lectures Plan:

From finite elements to virtual elements and applications. L. Beirão Da Veiga and A. Russo

Abstract: The Virtual Element Method (VEM) is a technology introduced in 2013 for the discretization of partial differential equations that follows a paradigm similar to that of classical Finite Elements, but with important differences. By avoiding the explicit integration of the discrete shape functions and introducing an innovative construction of the stiffness matrices, the VEM acquires interesting properties and advantages with respect to more standard methods. For instance, the VEM easily allows for general polygonal/polyhedral meshes, even non-conforming ones with non-convex elements.

The present short course will be an introduction to the VEM. In the first part we will introduce Virtual Elements starting from classical Finite Elements, focusing on a simple model problem. We will present the main features of the method: why the discrete space is "virtual", what suitable degrees of freedom are, and the trademark discretization approach of the VEM based on projection plus stabilization. In the second part we will tackle more complex equations and show how more involved VEM spaces can be constructed and used to handle problems in, for instance, electromagnetism or fluid dynamics.
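As a schematic illustration of the projection-plus-stabilization construction (standard in the VEM literature, though the details depend on the particular virtual space), the local discrete bilinear form on a polygonal element $E$ can be written as

```latex
a_h^E(u_h, v_h) \;=\; a^E\big(\Pi^\nabla u_h,\, \Pi^\nabla v_h\big)
\;+\; S^E\big((I - \Pi^\nabla)\, u_h,\; (I - \Pi^\nabla)\, v_h\big),
```

where $\Pi^\nabla$ is a projection onto polynomials that is computable directly from the degrees of freedom (without evaluating the virtual shape functions), and $S^E$ is a stabilizing bilinear form that scales like $a^E$ on the kernel of the projection.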