09‒10 May 2024

Program

Thursday, 09 May 2024

08:30 Registration and Welcome

Registration starts at 8:30.

Chair: Karl Meerbergen

09:00 Heike Faßbender—Revisiting structure-preserving Krylov subspace methods for the Hamiltonian eigenvalue problem

Hamiltonian eigenvalue problems arise in various fields of science and engineering, playing a pivotal role in quantum mechanics, computational chemistry, and structural dynamics. These problems often involve large-scale matrices with a Hamiltonian structure, demanding specialized numerical methods for efficient and accurate solutions. Structure-preserving Krylov subspace methods have emerged as a promising approach to tackle large-scale Hamiltonian eigenvalue problems. We will review (extended) Krylov subspace methods which generate a suitable symplectic basis that can be used to transform the large-scale Hamiltonian eigenvalue problem to a small-scale one. Besides solving the Hamiltonian eigenvalue problem itself, we will discuss how to use these methods to approximate f(H)b, where f is a function that maps the Hamiltonian matrix H to, e.g., a (skew-)Hamiltonian or symplectic matrix.
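
A minimal sketch of the projection idea in the plain, non-structure-preserving case (standard Arnoldi; the talk's symplectic methods replace this basis by a symplectic one), here with f = exp via SciPy:

    # Plain Arnoldi sketch of f(H) b ~ ||b|| * V_k f(T_k) e_1 (no breakdown
    # handling, and no symplectic structure preservation).
    import numpy as np
    from scipy.linalg import expm

    def arnoldi_fHb(H, b, k):
        n = len(b)
        V = np.zeros((n, k + 1))
        T = np.zeros((k + 1, k))
        beta = np.linalg.norm(b)
        V[:, 0] = b / beta
        for j in range(k):
            w = H @ V[:, j]
            for i in range(j + 1):              # modified Gram-Schmidt
                T[i, j] = V[:, i] @ w
                w = w - T[i, j] * V[:, i]
            T[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / T[j + 1, j]
        return beta * V[:, :k] @ expm(T[:k, :k])[:, 0]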

09:30 Steve Mackey—An Inverse Problem for Quadratic Hermitian Matrix Polynomials

Can there be a canonical form for quadratic Hermitian matrix polynomials that is in any way analogous to the one for Hermitian pencils described by Lancaster and Rodman in SIAM Review (Sept 2005)?  A first step towards answering this question is to characterize what combinations of structural data (finite and infinite elementary divisors, minimal indices, and sign characteristic) are possible for a quadratic Hermitian polynomial, and then to show how to realize any such structural data list by a quadratic Hermitian polynomial, in as transparent (Kronecker-like) a way as possible. 

This talk describes joint work with Francoise Tisseur that addresses these issues.

10:00 Nicola Mastronardi—Fast and reliable computation of the zeros of multiple and Sobolev orthogonal polynomials

The topic of this talk is the computation of the zeros of multiple and Sobolev orthogonal polynomials. They can be computed via the eigendecomposition of a Hessenberg matrix H, built from the coefficients of the recurrence relations associated with the considered polynomials.

Unfortunately, the condition numbers of the eigenvalues of the Hessenberg matrix H are very large, and classical balancing techniques do not reduce them.

Here, we propose a new balancing procedure that drastically reduces the condition numbers of the Hessenberg eigenvalue problem, together with a fast and reliable algorithm that computes the zeros of multiple and Sobolev orthogonal polynomials in floating point arithmetic with only O(n²) computational complexity and O(n) memory.
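
As a simple illustration of the eigenvalue approach (in the classical orthogonal, non-Sobolev case, where the Hessenberg matrix reduces to the symmetric tridiagonal Jacobi matrix and no balancing difficulty arises):

    # Zeros of the degree-n Legendre polynomial as eigenvalues of the Jacobi
    # matrix built from the three-term recurrence (Golub-Welsch); the multiple
    # and Sobolev cases of the talk lead instead to a genuinely nonsymmetric
    # Hessenberg matrix H.
    import numpy as np

    def legendre_zeros(n):
        k = np.arange(1, n)
        beta = k / np.sqrt(4.0 * k**2 - 1.0)    # off-diagonal coefficients
        J = np.diag(beta, 1) + np.diag(beta, -1)
        return np.linalg.eigvalsh(J)

    print(legendre_zeros(5))                     # symmetric about 0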

10:30 Coffee Break

Coffee will be served in XXXX.

11:00 Thijs Steel—The QR-SVD algorithm, combining speed and accuracy

The Singular Value Decomposition (SVD) is an important tool in scientific computing, so much so that I don't think it needs an introduction for the audience of this workshop. There are several popular algorithms to calculate an SVD, each with its advantages and disadvantages. Choosing which algorithm to use is mostly a trade-off between accuracy and speed. The Jacobi iteration provably achieves high relative accuracy, but is usually the slowest, especially for large matrices. The divide and conquer (D&C) algorithm is much faster, but cannot guarantee high relative accuracy. The QR iteration sits somewhere between the Jacobi iteration and D&C. If carefully implemented, it can achieve high accuracy, and it is faster than the Jacobi iteration but slower than D&C. During this talk, we will introduce an optimized version of the QR iteration. We use pipelined small reflectors to achieve high flop rates during the sweeping stage and use Aggressive Early Deflation (AED) to speed up convergence. Together, these techniques make the QR algorithm almost as fast as D&C, while it remains almost as accurate as the Jacobi iteration.
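
The trade-off can be observed directly in LAPACK via SciPy, whose 'gesvd' driver is based on QR iteration and 'gesdd' on divide and conquer (the optimized QR-SVD of the talk is not part of LAPACK; this only contrasts the two classical drivers):

    import numpy as np
    from scipy.linalg import svd

    A = np.random.default_rng(0).standard_normal((500, 500))
    s_qr = svd(A, compute_uv=False, lapack_driver='gesvd')   # QR iteration
    s_dc = svd(A, compute_uv=False, lapack_driver='gesdd')   # divide & conquer
    print(np.max(np.abs(s_qr - s_dc) / s_qr))                # tiny discrepancy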

11:30 Thomas Mach—Rotations

Rotations, unitary matrices with determinant 1, are central to many eigenvalue algorithms David has worked on in the last 15 years. In particular, unitary matrices that differ from the identity only in a two-by-two block have proven to be efficient building blocks.

In this talk we discuss some of the well-known and some of the less well-known facts regarding rotations and their products.
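
A minimal concrete instance of such a building block, a Givens rotation acting in one coordinate plane:

    # The 2-by-2 core of a rotation: determinant 1, with c, s chosen to zero
    # the second component of a given vector.
    import numpy as np

    def givens(a, b):
        r = np.hypot(a, b)
        return (1.0, 0.0) if r == 0 else (a / r, b / r)

    c, s = givens(3.0, 4.0)
    G = np.array([[c, s], [-s, c]])
    print(G @ np.array([3.0, 4.0]))   # -> [5, 0]
    print(np.linalg.det(G))           # -> 1.0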

12:00 Group Photo

Before walking to lunch, we will stop in front of the castle for a photo.

12:05 Lunch at Oude Kantien

The lunch will take place in the restaurant across the street and is included in the registration fee.

Chair: Thomas Mach

14:00 Francoise Tisseur—Deflation Strategies for Nonlinear Eigenvalue Problems (cancelled)

Deflation for eigenvalue problems is a standard technique that consists of removing a known eigenvalue or changing it so that the other eigenvalues are easier to find. In this talk we discuss and compare different strategies to deflate eigenvalues of nonlinear eigenvalue problems. We will pay particular attention to the quadratic eigenvalue problem and introduce a deflation strategy based on structure-preserving transformations.

14:00 Leonardo Robol—Backward error analysis for structured polynomial root-finders

Are QR-based methods "stable" for polynomial rootfinding? Our first analysis, almost 10 years ago, showed that the answer in general would be "it depends, but not so much". Later, with the help of a careful anonymous referee, we figured out that our algorithm was actually surprisingly stable, even more so than the unstructured version; this was completely unexpected, and we spent a lot of time figuring out why. It turned out that this stability has a subtle yet very interesting relation with low-rank structures in the problem and in the backward error. In this talk, I will tell you about the outcome of this effort.
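
For context, the unstructured baseline the talk compares against: computing polynomial roots as eigenvalues of the (unitary-plus-rank-one) companion matrix, which is also what numpy.roots does internally. A minimal sketch:

    import numpy as np

    coeffs = [1.0, -6.0, 11.0, -6.0]            # x^3 - 6x^2 + 11x - 6
    n = len(coeffs) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)                  # subdiagonal of ones
    C[:, -1] = -np.array(coeffs[1:][::-1])      # last column: -a_0, -a_1, ...
    print(np.sort(np.linalg.eigvals(C)))        # -> [1, 2, 3]
    print(np.sort(np.roots(coeffs)))            # same roots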

14:30 Julien Langou—The I/O Requirements of Various Numerical Linear Algebra Kernels

When designing an algorithm, one cares about arithmetic/computational complexity, but data movement (I/O) complexity plays an increasingly important role, strongly impacting performance and energy consumption. The objective of I/O complexity analysis is to compute, for a given program, its minimal I/O requirement among all valid schedules. We consider a sequential execution model with two memories: an infinite one, and a small one of size S on which the computations retrieve and produce data. The I/O is the number of reads and writes between the two memories. Within this model, we review various numerical linear algebra kernels of increasing complexity: matrix-matrix multiplication, LU factorization, symmetric rank-k update, Cholesky factorization, Modified Gram-Schmidt, and Householder QR factorization. We will also show practical examples of these results.
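
The flavor of such lower bounds, in the classical case of $n \times n$ matrix-matrix multiplication (Hong and Kung, 1981): any valid schedule on a fast memory of size $S$ requires

$$ Q = \Omega\!\left(\frac{n^3}{\sqrt{S}}\right) $$

reads and writes, a bound matched up to constant factors by blocked algorithms that keep three $b \times b$ blocks resident, with $b \approx \sqrt{S/3}$.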

15:00 Elias Jarlebring—Computation graphs for matrix functions

Many numerical methods for evaluating matrix functions can be naturally viewed as computational graphs. Rephrasing these methods as directed acyclic graphs (DAGs) is a particularly effective approach to study existing techniques, improve them, and eventually derive new ones. The accuracy of these matrix techniques can be characterized by the accuracy of their scalar counterparts, thus designing algorithms for matrix functions can be regarded as a scalar-valued optimization problem. The derivatives needed during the optimization can be calculated automatically by exploiting the structure of the DAG, in a fashion analogous to backpropagation. This talk presents GraphMatFun.jl, a Julia package that offers the means to generate and manipulate computational graphs, optimize their coefficients, and generate Julia, MATLAB, and C code to evaluate them efficiently at a matrix argument. The software also provides tools to estimate the accuracy of a graph-based algorithm and thus obtain numerically reliable methods. For the exponential, for example, using a particular (degree-optimal) form of polynomials produces implementations that in many cases are cheaper, in terms of computational cost, than the Padé-based techniques typically used in mathematical software. The optimized graphs and the corresponding generated code are available online.
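
GraphMatFun.jl itself is a Julia package; as a language-neutral illustration of the kind of multiplication-saving polynomial evaluation such computational graphs encode, here is a Paterson-Stockmeyer-style sketch in Python (illustrative only, not the package's degree-optimal forms):

    # Evaluate p(A) = sum_i coeffs[i] * A^i with about 2*sqrt(d) matrix-matrix
    # multiplications instead of d (Paterson-Stockmeyer).
    import numpy as np

    def ps_polyval(coeffs, A):
        d = len(coeffs) - 1
        s = max(1, int(np.ceil(np.sqrt(d))))
        pows = [np.eye(A.shape[0])]             # A^0 .. A^s
        for _ in range(s):
            pows.append(pows[-1] @ A)
        B = pows[s]
        blocks = [coeffs[i:i + s] for i in range(0, d + 1, s)]
        result = sum(c * pows[j] for j, c in enumerate(blocks[-1]))
        for blk in reversed(blocks[:-1]):       # Horner in B = A^s
            result = result @ B + sum(c * pows[j] for j, c in enumerate(blk))
        return result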

15:30 Coffee Break

Coffee will be served in XXXX.

16:00 Jelena Tomanović—Linear systems in Gauss-type quadratures for variable-sign weight functions

We consider a recently proposed Gauss-type quadrature formula with respect to a weight function that changes sign in the interior of the integration interval. An important step in its construction is to introduce a modifier function used to transform the given integral into a sum of one integral that does not cause a quadrature error and another integral with the property that the points from the interior of the integration interval at which the weight function changes sign are the zeros of its integrand. Determining a modifier function requires solving an associated system of linear equations. For the same integral, different modifier functions can be chosen, and hence different linear systems can be obtained. From a theoretical perspective, it is only necessary that the associated system has a solution, but from a computational perspective, it is also important that the associated system is not too ill-conditioned and that the structure of its matrix is as simple as possible. We analyze the conditions under which one is guaranteed to obtain, for instance, a system with a Vandermonde matrix or a system with an identity matrix. We also give examples where systems with an arbitrary matrix are obtained, both those that have and those that do not have a solution.
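
A purely illustrative sketch of how a Vandermonde system can arise in this kind of construction (the points and prescribed values below are hypothetical, not taken from the talk): if the modifier function is sought as a polynomial matching prescribed values at the sign-change points, the interpolation conditions form a Vandermonde system.

    import numpy as np

    t = np.array([-0.5, 0.1, 0.7])         # hypothetical sign-change points
    vals = np.array([1.0, -2.0, 0.5])      # hypothetical prescribed values
    V = np.vander(t, increasing=True)      # columns 1, t, t^2
    coeffs = np.linalg.solve(V, vals)      # modifier-polynomial coefficients
    print(np.polyval(coeffs[::-1], t))     # reproduces vals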

16:30 Lothar Reichel—A tensor bidiagonalization method for higher-order singular value decomposition with applications 

The need to know a few singular triplets associated with the largest singular values of a third-order tensor arises in data compression and extraction. We describe a new method for their computation using the t-product. Methods for determining a couple of singular triplets associated with the smallest singular values are also presented. The proposed methods generalize available restarted Lanczos bidiagonalization methods for computing a few of the largest or smallest singular triplets of a matrix. The methods use Ritz and harmonic Ritz lateral slices to determine accurate approximations of the largest and smallest singular triplets, respectively. Computed examples show applications to data compression and face recognition.
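
The t-product underlying these methods, in its standard FFT-based form (a textbook construction, not the authors' code):

    # t-product of third-order tensors: FFT along the third mode, facewise
    # matrix products, inverse FFT.
    import numpy as np

    def t_product(A, B):
        Ah = np.fft.fft(A, axis=2)
        Bh = np.fft.fft(B, axis=2)
        Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
        return np.fft.ifft(Ch, axis=2).real     # real for real inputs

    A = np.random.rand(4, 3, 5)
    B = np.random.rand(3, 2, 5)
    print(t_product(A, B).shape)                # (4, 2, 5)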

17:00 David S. Watkins—Some Recollections

18:00 Reception

The reception will be held in the Machinezaal (engine room) of the Thermotechnisch Instituut (Kasteelpark Arenberg 41, 3001 Leuven), just opposite the castle.

Friday, 10 May 2024

Chair: David S. Watkins

09:00 Nick Trefethen—Eigenvalues and condition numbers of random quasimatrices

Every friend of David's likes eigenvalues, and these days, we're all interested in randomness.  Here we explore what happens to eigenvalue phenomena for random matrices when, in one or both directions, the discrete random entries are replaced by smooth random functions.

09:30 Ilse Ipsen—Randomized matrix algorithms involving orthogonalization

We investigate the potential of randomization as a viable alternative in the context of orthogonalization, including the computation of orthonormal bases, matrix decompositions,  low-rank approximations, and the solution of linear systems and of least squares/regression problems.

We consider various sampling and sketching strategies; evaluate matrix concentration inequalities and structural bounds for estimating the error due to randomization; examine numerical stability issues; present empirical results; and identify problem classes where randomization delivers sufficient accuracy and speedup.
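
One concrete instance of the theme, the randomized range finder: sketch A with a Gaussian test matrix, orthogonalize, and use the resulting basis for low-rank approximation (a textbook sketch, not the speaker's code):

    import numpy as np

    rng = np.random.default_rng(1)
    m, n, r = 500, 300, 15
    A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank r
    Omega = rng.standard_normal((n, r + 10))    # Gaussian sketch, oversampled
    Q, _ = np.linalg.qr(A @ Omega)              # orthonormal basis of sketch
    print(np.linalg.norm(A - Q @ (Q.T @ A)) /
          np.linalg.norm(A))                    # ~ machine precision here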

10:00 Daniel Szyld—Convergence of randomized and greedy relaxation schemes for solving nonsingular linear systems of equations

We extend results known for the randomized Gauss-Seidel and the Gauss-Southwell methods for the case of a Hermitian and positive definite matrix to certain classes of non-Hermitian matrices. We obtain convergence results for a whole range of parameters describing the probabilities in the randomized method or the greedy choice strategy in the Gauss-Southwell-type methods. We identify those choices which make our convergence bounds best possible. Our main tool is to use weighted $\ell_1$-norms to measure the residuals. A major result is that the best convergence bounds that we obtain for the expected values in the randomized algorithm are as good as the best ones for the deterministic, but more costly, algorithms of Gauss-Southwell type.
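
A minimal sketch of the Hermitian positive definite baseline that the talk extends (uniform probabilities here; the talk studies general probability choices and greedy Gauss-Southwell variants):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 50
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)                # SPD test matrix
    b = rng.standard_normal(n)
    x = np.zeros(n)
    for _ in range(20 * n):
        i = rng.integers(n)                    # random coordinate
        x[i] += (b[i] - A[i] @ x) / A[i, i]    # solve equation i for x_i
    print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))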

10:30 Coffee Break

Coffee will be served in XXXX.

11:00 Froilán Dopico—Strongly minimal self-conjugate linearizations for polynomial and rational matrices

We present the concept of strongly minimal linearization and prove that we can always construct strongly minimal linearizations of an arbitrary rational matrix from its Laurent expansion around the point at infinity; for polynomial matrices expressed in the monomial basis, this expansion is simply given by their coefficients. If the rational matrix has a particular self-conjugate structure, we show how to construct strongly minimal linearizations that preserve it. The structures that are considered are the Hermitian and skew-Hermitian rational matrices with respect to the real line, and the para-Hermitian and para-skew-Hermitian matrices with respect to the imaginary axis. We pay special attention to the construction of strongly minimal linearizations for the particular case of structured polynomial matrices. The proposed constructions lead to efficient numerical algorithms for constructing strongly minimal linearizations. The fact that they are valid for any rational matrix is an improvement on previous approaches for constructing other classes of structure-preserving linearizations, which are not valid for all structured rational or polynomial matrices. The use of the recent concept of strongly minimal linearization is the key to achieving such generality.
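
For orientation, a classical construction that the talk's structured linearizations refine (it does not preserve Hermitian structure): the first companion linearization of a quadratic polynomial matrix $P(\lambda) = \lambda^2 A_2 + \lambda A_1 + A_0$ is

$$ L(\lambda) = \lambda \begin{bmatrix} A_2 & 0 \\ 0 & I \end{bmatrix} + \begin{bmatrix} A_1 & A_0 \\ -I & 0 \end{bmatrix}; $$

structure-preserving alternatives replace this pencil with one that inherits the (skew-)Hermitian or para-Hermitian structure of $P$.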

This is joint work with María C. Quintana (Aalto University, Finland) and Paul Van Dooren (Université Catholique de Louvain, Belgium).

11:30 Rob Corless—Surprising Companions

I expect that everyone at this conference well knows what a companion matrix is.  I hope that this talk, largely based on my ELA paper with Eunice Chan entitled "A New Kind of Companion Matrix" will show you some new things about them.  But then, I have talked about these before, at an ILAS meeting and at FDM60; so maybe that part won't be so new.  Still, there might be some surprises lurking.

12:00 Sandwich Lunch

Sandwiches (included in the registration fee) will be served in the foyer of the computer science department.

Chair: Roel Van Beeumen

14:00 Karl Meerbergen—Implicit restarting of Krylov and rational Krylov methods

This talk is a combination of results obtained over many years. I'll share my insights into the implicit restarting of Krylov methods, including the choice of shifts and connections with contour integration and subspace iteration. Lyapunov equations also make an appearance.

14:30 Eric de Sturler—A Warm Start Krylov Schur Algorithm

Sequences of eigenvalue problems arise in a range of applications, among others in computing eigenvectors for parameterized PDEs (for example, in acoustics and model reduction), in updating an approximate invariant subspace for Krylov subspace recycling, and in data analytics problems.

Rather than starting from scratch, it may be advantageous to jump-start the Krylov-Schur algorithm with the approximate invariant subspace of a previous matrix. While this may reduce the total number of matrix-vector products, it also brings some complications: the warm-start space is generally not a Krylov space for the new matrix, and we need to adapt the Krylov-Schur algorithm. We compute the approximate Krylov decomposition with the smallest backward error (residual matrix), following [Stewart 2002], followed by a sequence of Arnoldi iterations and a subsequent Krylov-Schur truncation that takes the residual matrix into account. We analyze how a cycle of Arnoldi/Krylov-Schur reduces the norm of the residual and how this relates to convergence to an approximate invariant subspace, as for the standard Krylov-Schur algorithm.
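
In notation of our own choosing, in the spirit of [Stewart 2002]: given an orthonormal warm-start basis $U_{k+1} = [U_k, u_{k+1}]$, the choice $\hat B_k = U_{k+1}^{*} A U_k$ minimizes the norm of the residual matrix $R_k$ in the approximate Krylov decomposition

$$ A U_k = U_{k+1} \hat B_k + R_k, \qquad R_k = (I - U_{k+1} U_{k+1}^{*})\, A U_k, $$

and $R_k$ vanishes exactly when this is an exact Krylov decomposition for the new matrix.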

15:00 Jörg Liesen—Spectral properties of certain nonsymmetric saddle point matrices

Symmetric indefinite saddle point matrices of the form $[A, B^T; B, -C]$ appear in numerous applications. If we multiply the second block row of such a matrix by $-1$, we obtain a nonsymmetric positive (semi)definite matrix ${\cal A}=[A, B^T; -B, C]$. This nonsymmetric variant of a saddle point matrix splits ``naturally'' into its positive (semi)definite symmetric and skew-symmetric parts, a structure that has recently attracted a lot of attention in the context of dissipative Hamiltonian DAE systems. Several authors have studied conditions on ${\cal A}$ so that this matrix is diagonalizable with real and positive eigenvalues. In this case a conjugate gradient method for ${\cal A}$ in a nonstandard inner product exists, which due to the positive definiteness may converge faster than a Krylov subspace method applied to the original symmetric indefinite saddle point matrix. In this talk we will present a general analysis of the spectral properties of ${\cal A}$, and we will explain when and how an inner product in which ${\cal A}$ is self-adjoint can be constructed.

This is joint work with Justus Ramme (TU Berlin).
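
The ``natural'' splitting referred to above, written out:

$$ {\cal A} = \begin{bmatrix} A & B^T \\ -B & C \end{bmatrix} = \begin{bmatrix} A & 0 \\ 0 & C \end{bmatrix} + \begin{bmatrix} 0 & B^T \\ -B & 0 \end{bmatrix}, $$

with the first summand symmetric (and positive (semi)definite when $A$ and $C$ are) and the second skew-symmetric.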

15:30 Coffee Break

Coffee will be served in XXXX.

16:00 Daan Camps—On Block Encodings of Matrices

In recent years, a multitude of quantum linear algebra algorithms have been proposed that offer theoretical advantages over classical alternatives for certain computational tasks. Coherent query access to the data matrix of interest is central to many of these algorithms and is often assumed to be available in the form of a black box oracle which can be used to generate a block encoding of the data matrix. A block encoding is an embedding of a (possibly non-unitary) data matrix into a larger unitary matrix. In this talk, I will present an overview of recent results on grey and white box quantum circuit implementations for block encodings of structured and unstructured data matrices. This forms an important intermediate step to accurately evaluate the resources required for solving linear algebra problems on quantum computers.
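
A common formalization from this literature (standard, not specific to the talk): a unitary $U$ acting on $a$ ancilla qubits plus the system register is an $(\alpha, a, \varepsilon)$-block-encoding of $A$ if

$$ \bigl\| A - \alpha \, (\langle 0|^{\otimes a} \otimes I)\, U\, (|0\rangle^{\otimes a} \otimes I) \bigr\| \le \varepsilon, $$

i.e., up to the scale factor $\alpha$ and error $\varepsilon$, the data matrix sits in the leading block of the unitary,

$$ U = \begin{bmatrix} A/\alpha & \ast \\ \ast & \ast \end{bmatrix}. $$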

16:30 Shmuel Friedland—SDP characterizations of the numerical radius, its dual norm and the Crawford number

We will discuss some known and recent results on semi-definite programming (SDP) characterizations of the numerical radius, its dual norm, and the Crawford number for complex matrices. Generalizations to SDP characterizations of the numerical radius and its dual norm for quaternionic matrices will be given. We will show that the Turing-bit complexity of these quantities, and of the spectral and nuclear norms of real 2 × m × n tensors, is polynomial.
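
For orientation, one classical SDP characterization of this kind (due to Mathias), for $A \in \mathbb{C}^{n \times n}$: the numerical radius satisfies $w(A) \le 1$ if and only if there exists a Hermitian matrix $Z$ with

$$ \begin{bmatrix} I + Z & A \\ A^{*} & I - Z \end{bmatrix} \succeq 0, $$

which turns the computation of $w(A)$ into a semidefinite optimization problem.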

17:00 Fernando De Terán—The influence of David S. Watkins in my teaching and research

I will review how the work by David S. Watkins has influenced my teaching and research activity.

19:00 Conference Banquet

The conference banquet will be held at the Restaurant Voltaire (Jules Vandenbemptlaan 6, 3001 Leuven, Belgium).