In this talk I will present different ways to study model complexity based on partial differential equations (PDEs). We show how this understanding can help in developing fast algorithms and in exploiting the underlying low-dimensional structures.
Spectral methods for solving differential eigenproblems usually follow the ``discretize-then-solve'' paradigm: discretize first, and then solve the matrix eigenproblem. This paradigm can be tricky for differential eigenproblems, as the spectrum of the matrix discretizations may not converge to the spectrum of the differential operator. Moreover, it is impossible to fully capture the continuous part of the spectrum with a finite-sized matrix eigenproblem. In this talk, we will discuss an alternative ``solve-then-discretize'' paradigm for differential eigenproblems. To compute the discrete spectrum, we will discuss a continuous analogue of FEAST that approximates the action of the resolvent operator. For the continuous spectrum, we will use a Cauchy-like integral to calculate a smoothed version of the so-called spectral measure. This is joint work with Matthew Colbrook and Andrew Horning.
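As a pointer to what a Cauchy-like integral of the resolvent computes, one standard way to smooth the spectral measure of a self-adjoint operator combines resolvent evaluations just above and below the real axis. The notation below is ours (introduced for illustration), not taken from the talk:

```latex
% Smoothed spectral measure of a self-adjoint operator \mathcal{L}
% with respect to a vector f, using the resolvent
% \mathcal{R}(z) = (\mathcal{L} - z)^{-1} evaluated off the real axis:
\mu_f^{\varepsilon}(x)
  = \frac{1}{2\pi i}\Bigl[\bigl\langle \mathcal{R}(x - i\varepsilon)f,\, f\bigr\rangle
                        - \bigl\langle \mathcal{R}(x + i\varepsilon)f,\, f\bigr\rangle\Bigr]
  = -\frac{1}{\pi}\,\operatorname{Im}\bigl\langle \mathcal{R}(x + i\varepsilon)f,\, f\bigr\rangle
```

This quantity is the convolution of the true spectral measure $\mu_f$ with a Poisson kernel of width $\varepsilon$, so it captures both the discrete and continuous parts of the spectrum, and letting $\varepsilon \to 0$ recovers the measure.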
The depth-averaged shallow water equations (SWE) are commonly used to model tsunamis, overland flooding, debris flows, storm surges, and so on. Generally, depth-averaged flow models show excellent large-scale agreement with observations; they can thus be reliably used to predict whether a tsunami will reach distant coastlines, and can provide vital information about arrival times. However, for other types of flows, dispersive effects missing from the SWE can play an important role in predicting whether waves will overtop seawalls, or whether a landslide entering a lake will trigger tsunami-like behavior on the opposite shore. To capture these dispersive effects, several models add dispersive corrections to the SWE. One such model is the Serre-Green-Naghdi equations.
We will present our work on solving the Serre-Green-Naghdi (SGN) equations on adaptively refined Cartesian meshes. A common approach to the higher-order derivatives that appear in the SGN equations is to treat these terms implicitly, resulting in an elliptic-like operator for the dispersive terms. As a result, a key component of an SGN solver is a variable-coefficient elliptic solver. We will discuss our current work in developing an elliptic solver for adaptively refined meshes generated by ForestClaw (www.forestclaw.org). The solver is based on multigrid-preconditioned BiCGStab and is implemented in the code ThunderEgg (Scott Aiton, Boise State University). A key feature of ThunderEgg is that it can take advantage of fast solvers, when available, for the local elliptic problems on the Cartesian patches in the adaptive mesh hierarchy. We will show results using this solver on challenging benchmark problems, as well as preliminary results on solving the SGN model equations on adaptive meshes.
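To make the role of the variable-coefficient elliptic solve concrete, here is a minimal 1-D stand-in. The model problem, the coefficient, and the ILU preconditioner (standing in for the multigrid preconditioner) are all our assumptions for illustration; this is not the actual SGN operator or the ThunderEgg implementation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical 1-D model problem mimicking the elliptic-like operator that
# arises when SGN dispersive terms are treated implicitly:
#   u - d/dx( c(x) du/dx ) = b,  u(0) = u(1) = 0,
# with c(x) playing the role of a depth-dependent coefficient.
n = 200                                    # interior unknowns
dx = 1.0 / (n + 1)
faces = (np.arange(n + 1) + 0.5) * dx      # cell-face locations
c = 1.0 + 0.5 * np.sin(2.0 * np.pi * faces)

# Standard conservative finite-difference discretization (tridiagonal).
diag = 1.0 + (c[:-1] + c[1:]) / dx**2
off = -c[1:-1] / dx**2
A = sp.diags([off, diag, off], offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

# ILU factorization used here as a simple stand-in preconditioner.
ilu = spla.spilu(A.tocsc())
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

# Preconditioned BiCGStab solve; info == 0 signals convergence.
u, info = spla.bicgstab(A, b, M=M)
```

In the talk's setting the same Krylov iteration runs over the whole adaptive hierarchy, with fast patch-local solvers replacing the ILU used above.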
Skeletal muscles are living tissues that undergo large deformations due to internal or external forces. Muscle fibres run from one point to another within the whole muscle; such fibres can produce force actively or passively in order to perform different tasks. In this talk we discuss the challenges and directions we have taken towards the modelling of skeletal muscles. In particular, we discuss a fully dynamic and nonlinear system of PDEs describing the large deformation of these tissues, the numerical scheme used to approximate these deformations, and some applications to biomechanics and physiology.
Vulcan Climate Modeling (a small philanthropically-supported Seattle group that I co-lead) and a NOAA climate modeling laboratory in Princeton NJ are collaborating on a pilot project to use machine learning to improve the weather forecasting and climate skill of the US global weather forecast model. We run this model, FV3-GFS, with horizontal grid spacings of 25-200 km that are coarse enough to allow computationally efficient climate simulations of many decades or centuries. Our general strategy is to retain the model's standard flow solver and to use machine learning to bias-correct the model 'physics' (representing clouds, rain, turbulence, surface exchange, radiation, etc.) to follow the evolution of a reference 'truth' model. That reference model can either be taken as the observed time-varying atmospheric state (for weather forecasting), or as a coarse-graining of a nominally more accurate fine-grid version of the same global model (for climate simulation). Sophisticated software engineering has allowed us to marry a highly complex FORTRAN code with Python machine-learning tools on Google Cloud. We have achieved an initial goal of using this type of machine learning to improve 5-10 day global weather forecasts, and we are progressing toward simulations with stable, realistic climates.
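The bias-correction strategy above amounts to a supervised-learning problem: learn the difference between a reference ("truth") tendency and the coarse model's physics tendency as a function of the model state, then add the learned correction at run time. The following sketch uses synthetic data and a plain least-squares fit; the variable names, features, and learning method are our illustrative assumptions, not the project's actual ML pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: in the real workflow the state comes from the
# coarse FV3-GFS model, and the target is the reference ("truth") tendency
# minus the model's physics tendency.
n_samples, n_features = 500, 4
state = rng.normal(size=(n_samples, n_features))
true_coef = np.array([0.5, -0.2, 0.1, 0.0])            # hidden synthetic bias
target_correction = state @ true_coef + 0.01 * rng.normal(size=n_samples)

# Fit the correction as a linear function of the state (least squares).
X = np.column_stack([state, np.ones(n_samples)])        # append intercept
coef, *_ = np.linalg.lstsq(X, target_correction, rcond=None)

def corrected_tendency(physics_tendency, state_vec):
    """Add the learned bias correction after the standard physics step."""
    return physics_tendency + np.append(state_vec, 1.0) @ coef
```

The same pattern (fit offline against a reference, apply online inside the time loop) carries over when the linear model is replaced by a more expressive learner.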
Concurrent multiscale methods are essential for understanding and predicting the behavior of engineering systems when a small-scale event will eventually determine the performance of the entire system. This talk will describe the recently proposed [1,2] domain-decomposition-based Schwarz alternating method as a means for concurrent multiscale coupling in finite-deformation quasistatic and dynamic solid mechanics. The approach is based on the simple idea that if the solution to a partial differential equation is known in two or more regularly shaped domains comprising a more complex domain, these local solutions can be used to iteratively build a solution for the more complex domain. The proposed approach has a number of advantages over competing multiscale coupling methods, most notably its concurrent nature, its ability to couple non-conformal meshes with different element topologies, its non-intrusive implementation into existing codes, and, for the dynamic case, its ability to couple subdomains that each use their own time step or even their own time integrator.
Following an overview of our formulation of the Schwarz alternating method and its convergence properties, we will describe the method’s implementation within two codes (Albany LCM [3], Sierra/SM [4]) and demonstrate the method’s accuracy, convergence, and scalability on a number of numerical examples, including a realistic scenario involving the simulation of a bolted joint subjected to dynamic loading. These examples demonstrate that the method converges to the correct solution and is free of numerical artifacts (e.g., spurious reflection or refraction of waves) for a wide range of quasistatic and dynamic solid mechanics problems. Additionally, our results show that, despite its iterative nature, the Schwarz alternating method can actually lead to a reduction in computational time relative to a single-domain simulation of comparable resolution.
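The iterative idea behind the method can be illustrated on a toy problem. The sketch below applies alternating Schwarz to the 1-D Poisson equation -u'' = 1 on two overlapping subdomains, with each solve using the other subdomain's latest value as a Dirichlet boundary condition; the grid, overlap, and model equation are hypothetical stand-ins for the solid-mechanics setting, not the Albany or Sierra implementations.

```python
import numpy as np

def solve_poisson(n_int, h, ua, ub, f=1.0):
    """Solve -u'' = f on n_int interior points with Dirichlet values ua, ub."""
    A = (2.0 * np.eye(n_int)
         - np.eye(n_int, k=1)
         - np.eye(n_int, k=-1))
    b = f * h**2 * np.ones(n_int)
    b[0] += ua          # left boundary value moves to the right-hand side
    b[-1] += ub         # right boundary value moves to the right-hand side
    return np.linalg.solve(A, b)

# Global problem: -u'' = 1 on (0, 1), u(0) = u(1) = 0; exact u = x(1 - x)/2.
N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
u = np.zeros(N)

# Two overlapping subdomains: [0, x[60]] and [x[40], 1] (overlap width 0.2).
left_end, right_start = 60, 40

for _ in range(30):
    # Subdomain 1 uses the latest subdomain-2 value at x[left_end] as its BC.
    u[1:left_end] = solve_poisson(left_end - 1, h, 0.0, u[left_end])
    # Subdomain 2 uses the freshly updated value at x[right_start] as its BC.
    u[right_start + 1:N - 1] = solve_poisson(N - 2 - right_start, h,
                                             u[right_start], 0.0)

exact = 0.5 * x * (1.0 - x)
```

Because each subdomain solve is independent, the two solves can use different meshes, discretizations, or (in the dynamic case) time integrators, which is the source of the flexibility described above.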
REFERENCES
[1] A. Mota, I. Tezaur, C. Alleman. “The alternating Schwarz method for concurrent multiscale coupling”, Comput. Meth. Appl. Mech. Engng. 319 (2017) 19-51.
[2] A. Mota, I. Tezaur, G. Phlipot. "The Schwarz alternating method for dynamic solid mechanics", Comput. Meth. Appl. Mech. Engng. (under review).
[3] A. Salinger, R. Bartlett, A. Bradley, Q. Chen, I. Demeshko, X. Gao, G. Hansen, A. Mota, R. Muller, E. Nielsen, J. Ostien, R. Pawlowski, M. Perego, E. Phipps, W. Sun, I. Tezaur. "Albany: Using Agile Components to Develop a Flexible, Generic Multiphysics Analysis Code", Int. J. Multiscale Comput. Engng. 14(4) (2016) 415-438.
[4] SIERRA Solid Mechanics Team. Sierra/SolidMechanics 4.48 User’s Guide. Tech. rep. SAND2018-2961. Sandia National Laboratories Report, Oct. 2018.
For coastal regions on the margin of a subduction zone, near-field megathrust earthquakes are the source of the most extreme tsunami hazards, and they are important to handle properly as one aspect of any Probabilistic Tsunami Hazard Assessment (PTHA). Typically, great variability in inundation depth at a coastal location is possible, owing to the extreme variation in the extent and pattern of slip over the fault surface. We use a Karhunen-Loève expansion to express the probability density function for all possible events, with parameters that are geophysically reasonable for the Cascadia Subduction Zone, which runs from Northern California to British Columbia. We use importance sampling techniques to adequately sample the tails of the distribution and to properly re-weight the probability assigned to the resulting realizations. We then use coarse-grid simulation results to group the realizations into a small number of clusters that we believe will give similar inundation patterns in the region of interest, so that only one fine-grid tsunami simulation needs to be computed for a representative member of each cluster. That simulation can be combined with the coarse-grid simulations for the other members of the cluster to obtain, much more cheaply, results very similar to those found using all fine-grid simulations. This talk is based on joint work that is more fully described in https://eartharxiv.org/yreqw/.
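The two sampling ingredients, a Karhunen-Loève expansion of the slip distribution and importance-sampling re-weighting, can be sketched on a toy problem. Everything below (a 1-D fault, an exponential covariance, the mean slip, the shifted proposal, and the exceedance threshold) is an illustrative assumption, not the Cascadia parameterization used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 1-D fault discretization; the actual study uses a 2-D
# Cascadia fault geometry with geophysically constrained parameters.
n = 100
s = np.linspace(0.0, 1.0, n)                     # position along fault
corr_len = 0.2
C = np.exp(-np.abs(s[:, None] - s[None, :]) / corr_len)   # slip covariance

# Karhunen-Loeve expansion: slip = mean + sum_k sqrt(lam_k) z_k phi_k,
# built from the eigendecomposition of the covariance matrix.
lam, phi = np.linalg.eigh(C)
order = np.argsort(lam)[::-1]
lam, phi = lam[order], phi[:, order]
k = 10                                           # retained KL modes
mean_slip = 5.0                                  # illustrative mean slip

def slip_realization(z):
    return mean_slip + phi[:, :k] @ (np.sqrt(lam[:k]) * z)

# Importance sampling: shift the proposal along the leading KL mode to
# over-sample large-slip events, then re-weight each sample by p(z)/q(z).
shift = 2.0
z = rng.normal(size=(2000, k))
z[:, 0] += shift                                 # shifted proposal q
weights = np.exp(-shift * z[:, 0] + 0.5 * shift**2)

# Re-weighted estimate of an exceedance probability (threshold illustrative).
max_slip = np.array([slip_realization(zi).max() for zi in z])
p_exceed = np.mean(weights * (max_slip > 7.5))
```

The re-weighting keeps the exceedance estimate unbiased while concentrating samples in the tail, which is exactly the role importance sampling plays in the study before the coarse-grid clustering step.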