Numerical Linear Algebra
This is a graduate course for MSc students of Applied Mathematics in the Department of Mathematics at SUTECH.
Prerequisites:
Basic linear algebra, numerical analysis & MATLAB programming
Tentative list of topics:
1. Preliminaries - a quick review
- the action of a few linear transformations: a geometric view
- four main subspaces (column space, row space, null space, and left null space)
- Strang's diagram (the most fundamental picture in linear algebra)
- fundamental theorem of linear algebra (part I)
- rank one matrices
- orthogonal vectors and matrices
- fundamental theorem of linear algebra (part II)
- matrix-vector multiplication from different viewpoints
- matrix-matrix multiplication
- BLAS (Basic Linear Algebra Subprograms), Strassen's algorithm
- different vector and matrix norms
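To make the "different viewpoints" item concrete, here is a short Python sketch (the course language is MATLAB, but the idea is language-independent; the matrix and vector are made up for the example). It computes y = Ax row-wise, as inner products, and column-wise, as a linear combination of the columns of A:

```python
# Illustrative 2x3 matrix and 3-vector.
A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]
x = [1.0, 0.0, -1.0]

# Row view: y[i] is the inner product of row i of A with x.
y_row = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

# Column view: y is a combination of the columns of A with weights x[j].
y_col = [0.0, 0.0]
for j, xj in enumerate(x):
    for i in range(len(A)):
        y_col[i] += xj * A[i][j]

print(y_row)  # [-2.0, -2.0]
print(y_col)  # the same vector, built column by column
```

Both loops perform the same arithmetic; the column view is the one that explains why Ax always lies in the column space of A.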
2. The singular value decomposition (SVD)
- geometric perspective
- reduced and full SVD
- existence and uniqueness
- matrix properties via the SVD, in particular connections to the four main subspaces
- fundamental theorem of linear algebra (part III and part IV)
- a quick look at applications of SVD
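One connection worth previewing: the largest singular value of A is the square root of the largest eigenvalue of A^T A, so it can be estimated by power iteration on A^T A. A Python sketch (the 2x2 matrix is illustrative; for this particular A the answer happens to be the golden ratio):

```python
import math

A = [[1.0, 1.0],
     [0.0, 1.0]]

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

At = transpose(A)
v = [1.0, 1.0]
for _ in range(100):
    w = matvec(At, matvec(A, v))      # apply A^T A
    norm = math.sqrt(sum(wi * wi for wi in w))
    v = [wi / norm for wi in w]       # renormalize each step

# Rayleigh quotient v^T (A^T A) v gives sigma_max^2 for unit v.
w = matvec(At, matvec(A, v))
sigma_max = math.sqrt(sum(vi * wi for vi, wi in zip(v, w)))
print(sigma_max)  # about 1.618 for this A
```

This is only a sketch; practical SVD computation goes through bidiagonalization (covered in topic 6).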
3. QR factorization and least squares
- projectors
- orthogonal projectors
- QR factorization
- classical and modified Gram-Schmidt orthogonalizations
- numerical loss of orthogonality
- Householder triangularization
- three algorithms for solving least squares problems: normal equations, QR factorization and SVD
- two-sided, left, and right inverses, and the Moore-Penrose generalized inverse
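The numerical loss of orthogonality mentioned above is easy to see. In the Python sketch below (the nearly rank-deficient "Lauchli-type" columns and the value of eps are made up for the demonstration), classical Gram-Schmidt projects each column against the original column, while modified Gram-Schmidt projects against the partially orthogonalized vector; only the latter stays orthogonal here:

```python
import math

eps = 1e-8
# Three nearly parallel columns of a 4x3 matrix (each list is one column).
a = [[1.0, eps, 0.0, 0.0],
     [1.0, 0.0, eps, 0.0],
     [1.0, 0.0, 0.0, eps]]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [vi / n for vi in v]

def cgs(cols):
    q = []
    for c in cols:
        v = c[:]
        for qi in q:
            r = dot(qi, c)          # coefficient from the ORIGINAL column
            v = [vj - r * qj for vj, qj in zip(v, qi)]
        q.append(normalize(v))
    return q

def mgs(cols):
    q = []
    for c in cols:
        v = c[:]
        for qi in q:
            r = dot(qi, v)          # coefficient from the UPDATED vector
            v = [vj - r * qj for vj, qj in zip(v, qi)]
        q.append(normalize(v))
    return q

qc = cgs(a)
qm = mgs(a)
print(abs(dot(qc[1], qc[2])))  # about 0.5: classical GS loses orthogonality
print(abs(dot(qm[1], qm[2])))  # near machine precision: modified GS survives
```

The two algorithms are identical in exact arithmetic; the difference is purely a rounding-error phenomenon, which is why this example belongs in a numerical linear algebra course rather than a pure one.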
4. Conditioning and stability
- absolute and relative condition numbers
- condition number of certain problems including solution of linear systems (normwise, mixed, componentwise and structured condition numbers)
- IEEE standard for floating point arithmetic (machine numbers, machine epsilon, etc.)
- accuracy and stability
- backward stability
- examples of stable and unstable algorithms (e.g., Householder triangularization vs. outer-product computation; backward stability of back substitution, which implies backward stability of Horner's rule and of the Clenshaw algorithm for monomial and Chebyshev expansions)
- conditioning of least squares problems
- stability of three different algorithms for solving least squares
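A minimal illustration of the floating-point material above: machine epsilon for IEEE binary64 can be found by halving until 1 + eps/2 rounds back to 1 (a standard classroom exercise, not a production technique):

```python
# Halve eps until 1 + eps/2 is rounded back to 1; the last surviving eps
# is the machine epsilon, 2**-52 for IEEE double precision.
eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps /= 2.0
print(eps)  # 2.220446049250313e-16
```

In Python the same value is available directly as `sys.float_info.epsilon`.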
5. Gaussian elimination
- stability
- different pivoting strategies
- growth factors, the worst-case instability: Kahan matrix
- Cholesky factorization
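The elimination-with-partial-pivoting idea can be sketched in a few lines of Python (the 3x3 system is made up so that the exact solution is x = [1, 1, 1]; a real implementation would return the PA = LU factors rather than work on an augmented matrix):

```python
A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
b = [4.0, 10.0, 24.0]

n = 3
M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]

for k in range(n):
    # Partial pivoting: bring the entry of largest magnitude to the pivot.
    p = max(range(k, n), key=lambda i: abs(M[i][k]))
    M[k], M[p] = M[p], M[k]
    for i in range(k + 1, n):
        m = M[i][k] / M[k][k]      # multiplier; |m| <= 1 thanks to pivoting
        for j in range(k, n + 1):
            M[i][j] -= m * M[k][j]

# Back substitution on the resulting upper triangular system.
x = [0.0] * n
for i in reversed(range(n)):
    x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
print(x)  # close to [1.0, 1.0, 1.0]
```

Keeping the multipliers bounded by 1 is exactly what controls the growth factor for most matrices; the Kahan-type worst cases in the bullet above show the bound can still be attacked.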
6. Eigenvalues
- overview of similarity transformations, spectral decomposition, defective matrices, and Jordan canonical form
- normwise and componentwise condition numbers of eigenvalues; right & left eigenvectors
- overview of unitary diagonalization, Schur forms, and block diagonalization; Sylvester equation
- why every eigenvalue solver must be iterative
- reduction to Hessenberg or tridiagonal form
- Rayleigh quotient, power iteration, inverse iteration, operations count
- QR algorithm
- backward stability of the two-phase approach for symmetric eigenproblems
- Golub-Kahan bidiagonalization for computing the SVD
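The power iteration and Rayleigh quotient from the list above fit in a few lines of Python (the symmetric 2x2 matrix is illustrative; its eigenvalues are 1 and 3):

```python
import math

A = [[2.0, 1.0],
     [1.0, 2.0]]

v = [1.0, 0.0]
for _ in range(60):
    w = [A[0][0] * v[0] + A[0][1] * v[1],
         A[1][0] * v[0] + A[1][1] * v[1]]
    n = math.sqrt(w[0] ** 2 + w[1] ** 2)
    v = [w[0] / n, w[1] / n]          # keep the iterate at unit norm

# Rayleigh quotient v^T A v estimates the dominant eigenvalue.
Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
lam = v[0] * Av[0] + v[1] * Av[1]
print(lam)  # close to 3, the dominant eigenvalue
```

The error contracts like (1/3)^k here, the ratio of the two eigenvalue magnitudes; inverse iteration and the QR algorithm improve on exactly this slow linear convergence.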
7. Iterative methods for solving linear systems of equations
- a unified framework for constructing classical iterative techniques (Richardson, Jacobi, forward and backward Gauss-Seidel, forward and backward SOR, SSOR, forward and backward AOR) based on the splitting approach; sufficient conditions for convergence
- structure and sparsity
- projection onto Krylov subspaces
- Arnoldi algorithm for partial reduction to Hessenberg form
- Lanczos algorithm for partial reduction to symmetric tridiagonal form
- steepest descent, conjugate directions, conjugate gradients (CG) and preconditioning
- GMRES
- biorthogonalization methods
- basics of multigrid
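As a concrete instance of the splitting approach, here is a Jacobi sketch in Python (the strictly diagonally dominant system is made up so the exact solution is x = [1, 1, 1]):

```python
# Jacobi iteration: x_{k+1} = D^{-1} (b - (L + U) x_k), where A = D + L + U.
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]

x = [0.0, 0.0, 0.0]
for _ in range(100):
    x = [(b[i] - sum(A[i][j] * x[j] for j in range(3) if j != i)) / A[i][i]
         for i in range(3)]
print(x)  # converges to [1, 1, 1]
```

Strict diagonal dominance is one of the sufficient convergence conditions from the bullet above; here the Jacobi iteration matrix has spectral radius well below 1, so 100 sweeps are far more than enough.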
8. Iterative methods for large eigenvalue problems
- how Arnoldi locates eigenvalues
- the Lanczos iteration
- preconditioners
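To preview the Lanczos iteration, the Python sketch below (matrix and starting vector are illustrative) runs the three-term recurrence to completion on a 3x3 symmetric matrix; the alphas and betas form the tridiagonal T = Q^T A Q, whose eigenvalues (Ritz values) are what large-scale solvers extract after only a few steps:

```python
import math

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 4.0]]
n = 3

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

Q = [[1.0 / math.sqrt(3)] * 3]        # normalized starting vector
alpha, beta = [], []
q_prev = [0.0] * n

for k in range(n):
    w = matvec(A, Q[k])
    a = dot(Q[k], w)                  # diagonal entry alpha_k
    alpha.append(a)
    # Three-term recurrence: w = A q_k - alpha_k q_k - beta_{k-1} q_{k-1}
    w = [wi - a * qi - (beta[-1] if beta else 0.0) * pi
         for wi, qi, pi in zip(w, Q[k], q_prev)]
    if k < n - 1:
        bnorm = math.sqrt(dot(w, w))  # off-diagonal entry beta_k
        beta.append(bnorm)
        q_prev = Q[k]
        Q.append([wi / bnorm for wi in w])

# A full run is a similarity transform, so the trace is preserved.
print(sum(alpha), A[0][0] + A[1][1] + A[2][2])
```

In floating-point arithmetic the computed Q loses orthogonality for large problems, which is the central practical difficulty of the Lanczos iteration and part of what this topic addresses.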
Textbook:
The primary reference for the course is
- L. N. Trefethen, D. Bau, Numerical Linear Algebra, SIAM, Philadelphia, 1997.
Here is a list of other main references:
- G. Golub, C. Van Loan, Matrix Computations, 4th ed., The Johns Hopkins University Press, 2013.
- J. W. Demmel, Applied Numerical Linear Algebra, SIAM, Philadelphia, 1997.
- D. S. Watkins, Fundamentals of Matrix Computations, 3rd ed., Wiley, 2010.
- B. N. Datta, Numerical Linear Algebra and Applications, 2nd ed., SIAM, Philadelphia, 2010.
and
- J. R. Shewchuk, An introduction to the conjugate gradient method without the agonizing pain, Technical report, Carnegie Mellon University, 1995.