GAMM Workshop Applied and Numerical Linear Algebra 2020

The 20th Workshop of the GAMM Activity Group on Applied and Numerical Linear Algebra will take place online via Zoom in a reduced format on 24-25 September 2020. There will be plenary talks by

Daniela Calvetti (Case Western Reserve University, USA)

Serge Gratton (ENSEEIHT Toulouse, France)

Catherine Powell (University of Manchester, UK)

and a social session. Please register here to receive the Zoom link for the meeting.

Preliminary Schedule (all times are CEST)

24th September

16:00-16:45 Serge Gratton (University of Toulouse, IRIT, ANITI) PDF of the slides

16:45-18:00 Discussion and Drinks (BYOB)

25th September

13:00-13:45 Catherine Powell (University of Manchester, UK) PDF of the slides

13:45-14:30 Daniela Calvetti (Case Western Reserve University, USA) PDF of the slides

We look forward to welcoming you to Potsdam in person in 2021.

Abstracts

Daniela Calvetti: Bayesian Linear Algebra for Sparse Solutions

Abstract:

The accurate recovery of sparse signals from few noisy data has attracted considerable attention in recent years, motivated in part by remote sensing and dictionary learning applications where the relation between the signal and the data is linear. By recasting the linear algebra problem within a Bayesian framework, the preference for a sparse solution can be encoded as a right preconditioner which is adaptively updated as part of an inner-outer iteration scheme. The probabilistic interpretation provides a natural justification of the sensitivity weighting used, e.g., in geophysics and medical imaging, generalizing the idea to problems where the weighting is less obvious due to the lack of an underlying physical interpretation. For underdetermined problems, non-orthogonal projections defined through the prior covariance automatically enrich the solution with contributions from the null space. Moreover, Krylov subspace methods perform an automatic model reduction and detection of the effective dimensionality of the problem. By choosing the underlying probabilistic model, it is possible to balance the promotion of sparsity with the convexity of the underlying objective function while retaining computational efficiency in the solution scheme.
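The inner-outer scheme can be sketched in a few lines. The following is a simplified illustration of the idea in [1]-[3], not the authors' implementation: LSQR stands in for CGLS, and the diagonal hyperparameter update is an assumed, crude stand-in for the hierarchical models analysed in the references.

```python
# Hedged sketch: right "priorconditioned" Krylov solves inside an outer loop
# that adapts the prior variances; all sizes and the update rule are assumptions.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
m, n = 50, 200                        # underdetermined: few noisy data
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(m)

theta = np.ones(n)                    # prior variances (hyperparameters)
for _ in range(20):                   # outer loop: update the preconditioner
    D = np.sqrt(theta)                # right preconditioner from the prior covariance
    w = lsqr(A * D, b, atol=1e-8, btol=1e-8)[0]  # inner Krylov solve (CGLS-like)
    x = D * w
    theta = x**2 + 1e-6               # crude sparsity-promoting update (assumed form)

print("recovered support:", np.flatnonzero(np.abs(x) > 1e-2))
```

Scaling the columns of A by the current prior standard deviations and shrinking the variances of small components is the IRLS-like mechanism by which the right preconditioner steers the Krylov iterates toward sparse solutions.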

[1] D. Calvetti, F. Pitolli, E. Somersalo, and B. Vantaggi (2018): Bayes meets Krylov: statistically inspired preconditioners for CGLS. SIAM Review 60(2), 429-461.

[2] D. Calvetti, E. Somersalo, and A. Strang (2019): Hierarchical Bayesian models and sparsity: ℓ2-magic. Inverse Problems 35(3), 035003.

[3] D. Calvetti, M. Pragliola, E. Somersalo, and A. Strang (2020): Sparse reconstructions from few noisy data: analysis of hierarchical Bayesian models with generalized gamma hyperpriors. Inverse Problems 36(2), 025010.

[4] D. Calvetti, M. Pragliola, and E. Somersalo (2020): Hybrid solver for hierarchical Bayesian inverse problems. arXiv preprint arXiv:2003.06532.

Catherine Powell: Parameter-Robust Preconditioning for Stochastic Galerkin Mixed Finite Element Systems

Abstract:

Stochastic Galerkin (SG) approximation is a popular approach for performing forward uncertainty quantification (UQ) in PDE models with uncertain inputs. Unlike conventional sampling methods, SG schemes yield approximations which are functions (usually, polynomials) of the random input variables, so that all realisations of the PDE solution are effectively approximated simultaneously. Since they use simple tensor product approximation spaces, standard SG schemes give rise to huge linear systems whose coefficient matrices have a characteristic Kronecker product structure. The number of equations can easily run into the hundreds of millions, even for relatively simple physical models, meaning that the associated coefficient matrices cannot be assembled and stored when working on standard desktop computers. Tailored linear algebra tools are therefore required.
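To make the Kronecker structure concrete: a matrix-vector product with a sum of Kronecker products Σ_k G_k ⊗ K_k never requires forming the huge matrix, thanks to the identity kron(G, K) · vec(X) = vec(G X Kᵀ), with vec taken row-wise as in NumPy's reshape. A minimal sketch with illustrative (tiny, dense) matrices:

```python
# Matrix-free matvec with a Kronecker-structured SG matrix sum_k G_k (x) K_k;
# the matrix names and sizes below are purely illustrative assumptions.
import numpy as np

def sg_matvec(Gs, Ks, x):
    """Apply sum_k kron(G_k, K_k) to x without forming the Kronecker products."""
    nG, nK = Gs[0].shape[0], Ks[0].shape[0]
    X = x.reshape(nG, nK)                        # one row per stochastic mode
    Y = sum(G @ X @ K.T for G, K in zip(Gs, Ks)) # kron(G, K) vec(X) = vec(G X K^T)
    return Y.reshape(-1)

rng = np.random.default_rng(1)
Gs = [np.eye(4), rng.standard_normal((4, 4))]
Ks = [rng.standard_normal((3, 3)) for _ in range(2)]
x = rng.standard_normal(12)
# agreement with the explicitly assembled matrix:
Afull = sum(np.kron(G, K) for G, K in zip(Gs, Ks))
assert np.allclose(sg_matvec(Gs, Ks, x), Afull @ x)
```

In realistic SG computations the K_k are sparse finite element matrices and the G_k small stochastic moment matrices, so the cost per matvec stays proportional to the sparse blocks rather than to the assembled Kronecker system.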

If sparse matrix-vector products can be performed efficiently and sufficient memory is available to store vectors of the appropriate length, then standard preconditioned Krylov methods can still be used. In this case, the key challenge is to construct suitable preconditioners that are cheap to implement and robust not only with respect to the SG discretisation parameters and any important physical model parameters, but also with respect to the number of random inputs and their statistical properties. In this talk, we consider mixed formulations of linear elasticity and poroelasticity problems with an uncertain Young's modulus and hydraulic conductivity field [1], [2], and discuss how the now well-known operator approach to preconditioning [3] can be applied in the stochastic Galerkin setting. In this approach, the preconditioner is chosen to provide a discrete representation of a weighted norm with respect to which the weak formulation of the mixed problem is provably stable. Armed with the right norms for the specific applications considered, we are able to design preconditioners that are, in particular, robust in the incompressible limit.
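As a hedged illustration of the operator preconditioning idea, not the preconditioners of [1], [2]: for a model symmetric saddle-point system [[A, Bᵀ], [B, 0]], MINRES with the block-diagonal preconditioner diag(A, S)⁻¹, where S = B A⁻¹ Bᵀ is the Schur complement, converges in a handful of iterations. The toy matrices and exact inverses below are assumptions chosen for clarity; in practice the blocks are replaced by cheap spectrally equivalent approximations.

```python
# Block-diagonal (norm-based) preconditioning of a symmetric saddle-point
# system with MINRES; a sketch with illustrative dense matrices.
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(2)
n, m = 40, 10
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)                      # SPD (1,1) block
B = rng.standard_normal((m, n))
K = np.block([[A, B.T], [B, np.zeros((m, m))]])  # symmetric indefinite system
S = B @ np.linalg.solve(A, B.T)                  # Schur complement
Ainv, Sinv = np.linalg.inv(A), np.linalg.inv(S)

def apply_P(v):                                  # action of diag(A, S)^{-1}
    return np.concatenate([Ainv @ v[:n], Sinv @ v[n:]])

P = LinearOperator((n + m, n + m), matvec=apply_P)
b = rng.standard_normal(n + m)
x, info = minres(K, b, M=P)
print(info, np.linalg.norm(K @ x - b))
```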

[1] A. Khan, C.E. Powell, and D.J. Silvester. Robust preconditioning for stochastic Galerkin formulations of parameter-dependent nearly incompressible elasticity equations. SIAM Journal on Scientific Computing, 41(1):A402-A421, 2019.

[2] A. Khan and C.E. Powell. Parameter-robust stochastic Galerkin mixed approximation for linear poroelasticity with uncertain inputs. arXiv preprint arXiv:2003.06628.

[3] K.A. Mardal and R. Winther. Preconditioning discretizations of systems of partial differential equations. Numerical Linear Algebra with Applications, 18(1):1-40, 2011.


Serge Gratton: Least Squares, Data Assimilation and Machine Learning under Physical Constraints

Abstract:

In data assimilation, the state of a system is estimated using two models: the observational model, which relates the state to physical observations, and the dynamical model, which propagates the system along the time dimension. Both models are described using random variables that account for observation and model errors. The basic equations of most data assimilation methods follow a Bayesian approach in which prior information is combined with the above statistical models to obtain the probability density of the system state conditioned on past observations. The estimation is done in two steps: the analysis of an incoming observation and the propagation to the next time step.
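In standard Bayesian filtering notation (state x_k, observations y_1:k), these two steps read:

p(x_k | y_1:k) ∝ p(y_k | x_k) p(x_k | y_1:k-1)              (analysis)
p(x_k+1 | y_1:k) = ∫ p(x_k+1 | x_k) p(x_k | y_1:k) dx_k     (propagation)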

Data assimilation algorithms use additional assumptions to obtain closed expressions for the probability densities, so that they can be handled by computers. Historically, in the linear Kalman filter (KF) approach, the statistical models are assumed to be Gaussian and the models linear. Hence, the propagation and analysis steps consist in updating the mean and covariance matrix characterizing the Gaussian density of the state conditioned on the observations, using least-squares techniques. In the Ensemble Kalman Filter (EnKF) approach, these densities are approximated by a (hopefully small) set of sampling vectors, and formulas are available for the analysis and the propagation steps as well.
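A minimal sketch of one KF cycle in this Gaussian, linear setting; the matrices M (dynamics), H (observation), Q and R (model/observation error covariances) below are illustrative assumptions:

```python
# One analysis + propagation step of the linear Kalman filter.
import numpy as np

def kf_step(x, P, y, M, H, Q, R):
    # analysis: update the Gaussian (mean x, covariance P) with observation y
    S = H @ P @ H.T + R                  # innovation covariance
    K = np.linalg.solve(S, H @ P).T      # Kalman gain  P H^T S^{-1}
    xa = x + K @ (y - H @ x)
    Pa = (np.eye(len(x)) - K @ H) @ P
    # propagation: advance the analysis to the next time step
    return M @ xa, M @ Pa @ M.T + Q

n, p = 4, 2
M = 0.9 * np.eye(n)                       # linear dynamical model
H = np.eye(p, n)                          # linear observational model
Q, R = 0.01 * np.eye(n), 0.1 * np.eye(p)  # model / observation error covariances
x, P = np.zeros(n), np.eye(n)
x, P = kf_step(x, P, np.array([1.0, -0.5]), M, H, Q, R)
```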

The above presentation of KF and EnKF suggests a possible generalization of data assimilation algorithms. The bottom line of this approach is to let a learning process decide what the best internal representation for the densities is. This representation is stored in some computer memory M (say, in a hopefully not too long vector whenever model reduction is targeted), and we propose to use machine learning to estimate the analysis and propagation steps acting on it, using recurrent-network-based learning architectures, hence the name Data Assimilation Network (DAN). The data used for learning consist of batches of state trajectories of the (stochastic) dynamical system and the corresponding (noisy) observations. We mathematically prove the optimality of this general algorithm and show that a variant based on the Elman neural network outperforms state-of-the-art ensemble variational techniques in twin experiments with the Lorenz system.
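As a purely illustrative toy, not the authors' DAN architecture: two Elman-style recurrent cells acting on a memory vector, one folding in an observation (analysis) and one advancing in time (propagation). All names, shapes, and the random weights are assumptions; in a DAN the weights would be trained on batches of trajectories.

```python
# Toy of the memory-based analysis/propagation cycle; untrained, for shape only.
import numpy as np

rng = np.random.default_rng(3)
d_mem, d_obs = 16, 3
Wa = 0.1 * rng.standard_normal((d_mem, d_mem))  # analysis: memory -> memory
Ua = 0.1 * rng.standard_normal((d_mem, d_obs))  # analysis: observation -> memory
Wp = 0.1 * rng.standard_normal((d_mem, d_mem))  # propagation: memory -> memory

def analysis(mem, obs):
    return np.tanh(Wa @ mem + Ua @ obs)         # fold the observation into the memory

def propagation(mem):
    return np.tanh(Wp @ mem)                    # advance the memory in time

mem = np.zeros(d_mem)                           # the internal representation M
for obs in rng.standard_normal((5, d_obs)):     # a short observation trajectory
    mem = propagation(analysis(mem, obs))       # the analysis/propagation cycle
```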

Joint work with A. Fillion (ANITI), S. Gurol (Cerfacs), and P. Boudier (NVIDIA).

Local Organisation, Contact and Sponsor

The 20th Workshop of the GAMM Activity Group on Applied and Numerical Linear Algebra will take place at the University of Potsdam on 24-25 September 2020.

This year, special emphasis will be given to Numerical Linear Algebra in Data Assimilation and Computational Inverse Problems, but all other areas of applied and numerical linear algebra are welcome. We look forward to welcoming all interested scientists to Potsdam.

There is no registration fee.

Important note: In view of the current COVID-19 situation and government advice, it is possible (or even very likely) that the workshop will take place virtually via video conferencing. We will continue to monitor the situation; please check this website regularly and, if you do book hotels or travel, make sure you book refundable options.

Date, Location and Schedule

  • Date: September 24th-25th, 2020 (Thu-Fri), starting at approximately 9:00 am on Thursday and finishing around 3:00 pm on Friday.

  • Location: University of Potsdam, Griebnitzsee Campus (or online)

  • Schedule: Presentations and conference dinner on Thursday, presentations on Friday

Scientific Topics and Contributions

The workshop is devoted to all aspects of applied and numerical linear algebra. The topics include, but are not limited to: eigenvalue problems, matrix functions, linear systems, least squares problems, tensor methods, matrix equations, and high performance computing.

Contributed talks should be 20 minutes long (plus 5 minutes for discussion). A laptop, a data projector, and boards are available.

Registration and Important Dates

The abstracts and the program will be made available both online and as a booklet handed out during the workshop.