Numerical Analysis
Workshop

All talks in Room G.205 - Alan Turing Building

Monday 17th June - Invited Talks

15:30 Jonas Latz
University of Manchester
Data-driven approximation of Koopman operators and generators: Convergence rates and error bounds
Global information about dynamical systems can be extracted by analysing associated infinite-dimensional transfer operators, such as Perron-Frobenius and Koopman operators as well as their infinitesimal generators. In practice, these operators typically need to be approximated from data. Popular approximation methods are extended dynamic mode decomposition (EDMD) and generator extended dynamic mode decomposition (gEDMD). We propose a unified framework that leverages Monte Carlo sampling to approximate the operator of interest on a finite-dimensional space spanned by a set of basis functions. Our framework contains EDMD and gEDMD as special cases, but can also be used to approximate more general operators. Our key contributions are proofs of the convergence of the approximating operator and its spectrum under non-restrictive conditions. Moreover, we derive explicit convergence rates and account for the presence of noise in the observations. Whilst all these results are broadly applicable, they also refine previous analyses of EDMD and gEDMD. We verify the analytical results with the aid of several numerical experiments.

Joint work with Liam Llamazares-Elias, Samir Llamazares-Elias, and Stefan Klus. Corresponding preprint: https://arxiv.org/abs/2405.00539
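For orientation, the EDMD special case of such a framework can be sketched in a few lines: a least-squares (Monte Carlo) fit of the operator's action on a dictionary of basis functions, using snapshot pairs from the dynamics. This is a minimal textbook-style illustration, not the paper's general framework; the dictionary and the linear test system are assumptions made for the example.

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Approximate the Koopman matrix on span(dictionary) from snapshot
    pairs (x_i, y_i) with y_i = F(x_i).
    X, Y: (n_samples, d) arrays; dictionary: maps (n, d) -> (n, k)."""
    Psi_X = dictionary(X)
    Psi_Y = dictionary(Y)
    # Least-squares solution of Psi_X K = Psi_Y (Monte Carlo approximation)
    K, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)
    return K

# Sanity check: for a linear map F(x) = A x with the linear dictionary
# psi(x) = x, the Koopman matrix in this convention is exactly A^T.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.5]])
X = rng.standard_normal((500, 2))
Y = X @ A.T
K = edmd(X, Y, lambda Z: Z)
print(np.allclose(K, A.T, atol=1e-10))
```

With richer dictionaries (e.g. polynomials or radial basis functions) the same least-squares fit approximates the Koopman operator of nonlinear systems, which is the setting the convergence analysis addresses.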

16:10 Francesca Arrigo
University of Strathclyde
f(a)bulous networks: an overview of matrix functions in network science
Though seemingly they belong to two different worlds, matrix functions and network science have some degree of overlap thanks to a very simple fact: powers of the adjacency matrix count traversals in the underlying network. This concept in turn allows for the definition of centrality measures in terms of entries (or sums thereof) of functions of the adjacency matrix. In this talk, after reviewing basic definitions, we will give an overview of popular walk-based centrality measures in networks, emphasizing the role of matrix functions and of expressions of the form f(A)b and cᵀf(A)b. We will further discuss nonbacktracking walk-based centralities and describe challenges and open problems.
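A classical instance of an f(A)b expression is Katz centrality, where f(z) = (1 - αz)⁻¹ sums walks of all lengths with a downweighting factor α per step. The small graph below is an assumption made purely for illustration.

```python
import numpy as np

# Katz centrality as f(A)b with f(z) = (1 - alpha z)^{-1}:
# (I - alpha A)^{-1} b = b + alpha A b + alpha^2 A^2 b + ...,
# i.e. a weighted count of walks of every length into each node.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # undirected 4-node graph

# alpha must be below 1/rho(A) for the walk series to converge
alpha = 0.5 / np.max(np.abs(np.linalg.eigvals(A)))
b = np.ones(A.shape[0])
katz = np.linalg.solve(np.eye(4) - alpha * A, b)  # f(A)b via a linear solve
print(katz)
```

Node 2, which has the most connections, receives the highest score. For large sparse networks one evaluates f(A)b iteratively (e.g. by Krylov methods) rather than forming f(A) explicitly.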

16:50 Yuji Nakatsukasa
University of Oxford
SuperPCA: subspace analysis and efficient algorithm for high-dimensional PCA
The spiked covariance model is a classical statistical model that studies the empirical distribution of observed signals that come with noise. We consider the situation where multiple signals are present, and we are interested in detecting one or more of the leading signals. Using perturbation theory for singular vectors we derive bounds (both deterministic and probabilistic) for the angle between the desired subspace and the subspace obtained from the measurement matrix. Our main theoretical finding is that the subspace spanned by the leading eigenvectors of the sample covariance matrix contains significant information about the desired signals, long before the individual eigenvectors start converging to the signals, i.e., the population eigenvectors. Our main algorithmic contribution is the introduction of an algorithm SuperPCA (Subspace sub-sampler PCA), which capitalizes on the theory to find the leading signals, sometimes far more efficiently and accurately than classical PCA in the high-dimensional, multi-signal setting.
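The central quantity here, the angle between the population signal subspace and the leading sample-covariance subspace, is easy to observe numerically. The following is a toy spiked-covariance experiment (all dimensions and spike strengths are assumptions for illustration; it is not the SuperPCA algorithm itself).

```python
import numpy as np

# Spiked covariance model: observations x = signal + isotropic noise,
# with r "spike" directions u_j of strength theta_j.
rng = np.random.default_rng(1)
d, n, r = 200, 300, 2                              # dimension, samples, spikes
U, _ = np.linalg.qr(rng.standard_normal((d, r)))   # orthonormal signal directions
theta = np.array([4.0, 3.5])                       # spike strengths
X = (rng.standard_normal((n, r)) * np.sqrt(theta)) @ U.T \
    + rng.standard_normal((n, d))                  # signals plus noise

S = X.T @ X / n                                    # sample covariance
evals, evecs = np.linalg.eigh(S)
V = evecs[:, -r:]                                  # leading r-dimensional subspace

# Largest principal angle between span(U) and span(V): small when the
# sample subspace captures the signals as a whole, even if the individual
# eigenvectors are still far from the population eigenvectors.
s = np.linalg.svd(U.T @ V, compute_uv=False)
angle = np.degrees(np.arccos(np.clip(s.min(), -1.0, 1.0)))
print(f"largest principal angle: {angle:.1f} degrees")
```

Even in this modest regime (d comparable to n) the subspace angle is well below 90 degrees, illustrating that the leading subspace carries signal information before eigenvector-by-eigenvector convergence.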

Tuesday 18th June - Invited Talks

11:30 Qingna Li
Beijing Institute of Technology
Fast optimization algorithms for machine learning models
Efficient algorithms are a central concern in machine learning. Machine learning models are usually large-scale, nonsmooth and nondifferentiable, which makes traditional optimization methods difficult to apply directly. In other words, machine learning has set the bar very high for the whole field of optimization with regards to the development of numerical methods and the associated convergence analysis theory, as well as the introduction of efficient tools to speed up components such as derivative calculations among other things. In this talk, we will discuss some recent progress in fast optimization algorithms for some specific machine learning models.
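As a concrete example of the nonsmooth setting (an illustration of the problem class, not a method from the talk), the lasso combines a smooth least-squares loss with a nondifferentiable ℓ₁ penalty, and proximal gradient descent handles the nonsmooth term through a cheap soft-thresholding step. The problem sizes and data below are assumptions for the example.

```python
import numpy as np

# Lasso: min_x 0.5 ||A x - b||^2 + lam ||x||_1, solved by proximal
# gradient (ISTA): gradient step on the smooth part, then the prox of
# the l1 term, which is soft-thresholding.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.0, 0.5]          # sparse ground truth
b = A @ x_true                          # noiseless observations
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L the gradient's Lipschitz constant

x = np.zeros(20)
for _ in range(2000):
    g = A.T @ (A @ x - b)               # gradient of the smooth part
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
print(np.round(x[:5], 2))
```

The iteration recovers the sparse support, with the remaining coefficients set exactly to zero by the thresholding step, something a plain gradient method cannot do for a nondifferentiable penalty.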

13:45 Sheehan Olver
Imperial College London
Decomposing Representations of the Symmetric Group, Numerically
Orthogonal representations of a group are maps to orthogonal matrices so that matrix multiplication is consistent with group multiplication. For the symmetric group there are a finite number of irreducible representations, and other representations can be block-diagonalised into irreducible representations, a process akin to computing the eigenvalue decomposition. In this talk we introduce algorithms for computing this decomposition based on classical numerical linear algebra, whereas existing algorithms are based on group-theoretic techniques. By combining these algorithms with the new approach to representation theory for the symmetric group introduced by Okounkov and Vershik in the 90s we can decompose a dimension d representation of the symmetric group Sₙ in polynomial time, in particular O(n² d³) + O(n d⁴) operations.

Applications to sparsification and parallelisation of discretisations of partial differential equations arising from spectral methods and connections to random matrix theory will also be discussed.
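The defining property "matrix multiplication is consistent with group multiplication" can be checked mechanically on a small example. Below, the permutation matrices of S₃ are verified to form an orthogonal representation (a toy check, not the talk's decomposition algorithm).

```python
import numpy as np
from itertools import permutations

def perm_matrix(sigma):
    """Permutation matrix with P e_i = e_{sigma(i)}."""
    n = len(sigma)
    P = np.zeros((n, n))
    P[np.array(sigma), np.arange(n)] = 1.0
    return P

def compose(sigma, tau):
    """(sigma . tau)(i) = sigma(tau(i))."""
    return tuple(sigma[t] for t in tau)

S3 = list(permutations(range(3)))

# Homomorphism property: P(sigma . tau) = P(sigma) P(tau) for all pairs
homomorphism = all(
    np.allclose(perm_matrix(compose(s, t)), perm_matrix(s) @ perm_matrix(t))
    for s in S3 for t in S3
)
# Orthogonality: P^T P = I for every group element
orthogonal = all(
    np.allclose(perm_matrix(s).T @ perm_matrix(s), np.eye(3)) for s in S3
)
print(homomorphism, orthogonal)
```

This 3-dimensional representation is reducible: it splits into the trivial representation (on the all-ones vector) and the 2-dimensional standard representation, which is exactly the kind of block-diagonalisation the talk's algorithms compute at scale.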

14:25 Matt Colbrook
University of Cambridge
Data-Driven Spectral Measures and Generalized Eigenfunctions of Koopman Operators
We introduce the Rigged Dynamic Mode Decomposition (Rigged DMD) algorithm, which computes generalized eigenfunction decompositions of Koopman operators. By considering the evolution of observables, Koopman operators transform complex nonlinear dynamics into a linear framework suitable for spectral analysis. While powerful, traditional Dynamic Mode Decomposition (DMD) techniques often struggle with continuous spectra. Rigged DMD addresses these challenges with a data-driven methodology that approximates the Koopman operator's resolvent and its generalized eigenfunctions using snapshot data from the system's evolution. At its core, Rigged DMD builds wave-packet approximations for generalized Koopman eigenfunctions and modes by integrating Measure-Preserving Extended Dynamic Mode Decomposition with high-order kernels for smoothing. This provides a robust decomposition encompassing both discrete and continuous spectral elements. We derive explicit high-order convergence theorems for generalized eigenfunctions and spectral measures. Additionally, we propose a novel framework for constructing rigged Hilbert spaces using time-delay embedding, significantly extending the algorithm's applicability. We provide examples, including systems with a Lebesgue spectrum, integrable Hamiltonian systems, the Lorenz system, and a high-Reynolds number lid-driven flow in a two-dimensional square cavity, demonstrating Rigged DMD's convergence, efficiency, and versatility. This work paves the way for future research and applications of decompositions with continuous spectra. This talk is based on joint work with Catherine Drysdale (University of Birmingham) and Andrew Horning (MIT).
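For contrast with the rigged construction, standard (exact) DMD can be sketched in a few lines: project the dynamics onto the leading singular subspace of the snapshots and take eigenvalues of the projected matrix. This is the textbook formulation, given here as an assumed baseline, not the Rigged DMD algorithm of the talk.

```python
import numpy as np

def dmd(X, Y, r):
    """Standard exact DMD. X, Y: (d, n) snapshot matrices with Y = F(X)
    columnwise; r: truncation rank. Returns DMD eigenvalues and modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Y @ Vh.conj().T / s   # projected Koopman matrix
    evals, W = np.linalg.eig(Atilde)
    modes = Y @ Vh.conj().T / s @ W / evals     # exact DMD modes (Tu et al.)
    return evals, modes

# Sanity check on a linear system x_{k+1} = A x_k: DMD recovers the
# eigenvalues of A from snapshot data alone.
rng = np.random.default_rng(2)
A = np.diag([0.9, 0.6])
X = rng.standard_normal((2, 100))
Y = A @ X
evals, modes = dmd(X, Y, 2)
print(np.sort(evals.real))
```

For systems with purely discrete spectrum this works well; the continuous-spectrum case, where such eigenvalue lists become meaningless, is precisely what motivates the generalized eigenfunction and spectral-measure machinery of Rigged DMD.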

Wednesday 19th June - Contributed Talks

11:30 Alban Bloor Riley
University of Manchester
Deflation Techniques for Finding Multiple Local Minima of a Nonlinear Least Squares Problem

11:50 Carlos Felipe Jerez Hanckes
Universidad Adolfo Ibáñez and Visiting Professor at the University of Bath
Finite-Element Domain Approximation For Maxwell Variational Problems On Curved Domains

12:10 Zhengbo Zhou
University of Manchester
Computing Accurate Eigenvalues using the Preconditioned Jacobi Algorithm

14:00 Marcus Webb
University of Manchester
Approximation of Wave Packets on the Real Line

14:20 Sabia Asghar
University of Hasselt
A numerical study of point forces in a multi-dimensional elastic model for tumors

14:40 Ritesh Singla
Indian Institute of Technology
Pointwise adaptive non-conforming finite element method for the obstacle problem

Organisers:

Françoise Tisseur
Marcus Webb