BASCD 2013

List of Speakers

Lex Kemper (LBNL): Numerical modeling of non-equilibrium phenomena and spectroscopy
The field of pump-probe, or non-equilibrium, spectroscopy has grown rapidly over the past few years with the advent of new capabilities such as time-resolved photoemission spectroscopy and terahertz laser sources. In addition, the leading light sources around the world are developing free-electron laser sources, which illuminate materials with short pulses of either electrons or light. These developments have led to many interesting experiments, ranging from ultrafast emission of radiation from thin metal films to pump-induced superconductivity. Theory, however, has until recently lagged behind. We have developed a fully self-consistent algorithm to simulate many-body physical systems out of equilibrium, with the capability of treating complex problems such as superconductors. In addition to previously available equal-time quantities such as time-dependent densities, we can simulate the state-of-the-art spectroscopies being used in the field.
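
As a rough illustration of the kind of calculation involved (a toy sketch only, not the authors' self-consistent many-body algorithm), one can evolve a driven two-level system under a Gaussian pump pulse; all parameter values below are made up for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

gap = 1.0                                   # level splitting (hbar = 1)
E0, t0, sigma, w = 0.5, 10.0, 2.0, 1.0      # pump amplitude, center, width, frequency

def pump(t):
    # Gaussian-envelope pump pulse (hypothetical parameters)
    return E0 * np.exp(-(t - t0) ** 2 / (2 * sigma ** 2)) * np.cos(w * t)

def rhs(t, rho_flat):
    # von Neumann equation: d(rho)/dt = -i [H(t), rho]
    rho = rho_flat.reshape(2, 2)
    H = np.array([[0.0, pump(t)], [pump(t), gap]], dtype=complex)
    return (-1j * (H @ rho - rho @ H)).ravel()

rho0 = np.array([[1, 0], [0, 0]], dtype=complex).ravel()   # start in the ground state
sol = solve_ivp(rhs, (0.0, 40.0), rho0, t_eval=np.linspace(0, 40, 400),
                rtol=1e-8, atol=1e-10)
occupation = sol.y.reshape(2, 2, -1)[1, 1].real            # pump-induced excitation
print(f"max excited-state population: {occupation.max():.3f}")
```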


François-Henry Rouet (LBNL): Using Random Butterfly Transformations in a Sparse Direct Solver
We consider the solution of linear systems using direct methods, more specifically the LU factorization. Unless the matrix is positive definite, the factorization usually needs pivoting to ensure numerical stability. Random Butterfly Transformations are a preconditioning-type technique that transforms the original matrix into a matrix that can be factored without pivoting with probability 1. This approach has been successful for dense matrices; in this work, we investigate the sparse case. In particular, we address the issue of fill-in in the transformed system.
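
For intuition, here is a minimal dense sketch of a depth-1 random butterfly transformation, assuming the two-sided form U^T A V from the dense-RBT literature; the sparse extension and fill-in analysis discussed in the talk are not reproduced:

```python
import numpy as np

def butterfly(n, rng):
    # depth-1 butterfly: B = (1/sqrt(2)) [[R0, R1], [R0, -R1]], R0, R1 random diagonal
    r0 = np.diag(np.exp(rng.uniform(-0.05, 0.05, n // 2)))
    r1 = np.diag(np.exp(rng.uniform(-0.05, 0.05, n // 2)))
    return np.block([[r0, r1], [r0, -r1]]) / np.sqrt(2.0)

def lu_nopivot(M):
    # Doolittle LU with no row exchanges; breaks down on a zero pivot
    M = M.copy()
    n = M.shape[0]
    for k in range(n - 1):
        M[k + 1:, k] /= M[k, k]
        M[k + 1:, k + 1:] -= np.outer(M[k + 1:, k], M[k, k + 1:])
    return np.tril(M, -1) + np.eye(n), np.triu(M)

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
A[0, 0] = 0.0                        # zero pivot: no-pivot LU on A itself would fail
Bl, Br = butterfly(n, rng), butterfly(n, rng)
Ar = Bl.T @ A @ Br                   # transformed matrix; zero pivots now unlikely
L, R = lu_nopivot(Ar)

b = rng.standard_normal(n)           # solve A x = b through the transformed factors:
z = np.linalg.solve(R, np.linalg.solve(L, Bl.T @ b))
x = Br @ z                           # undo the right-side transform
print("residual:", np.linalg.norm(A @ x - b))
```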


Erin Carson (UC Berkeley): Communication-Avoiding Krylov Subspace Methods in Finite Precision
Krylov subspace methods (KSMs) are a class of iterative algorithms commonly used for solving eigenvalue problems and linear systems with a large, sparse matrix A. In classical KSM implementations, each iteration consists of one or more sparse matrix-vector multiplications and inner products. On modern computer architectures, these operations are both communication bound: movement of data, rather than computation, is the limiting factor in performance. Recent efforts have thus focused on communication-avoiding KSMs (CA-KSMs), based on s-step KSM formulations. Under certain assumptions on matrix size and structure, CA-KSMs reduce the communication cost of a fixed number of iterations by a factor of O(s).
Although CA-KSMs and their classical counterparts are equivalent in exact arithmetic, their finite precision behavior can differ significantly. Accumulation of roundoff error can cause CA-KSMs to be both less accurate and slower to converge than the corresponding KSM, potentially negating the performance benefits of the communication-avoiding approach. We present ongoing work in the design of practical techniques to alleviate these problems, including residual replacement, reorthogonalization, and polynomial basis selection. Such approaches can improve accuracy and convergence rate in CA-KSMs while still achieving the desired asymptotic reduction in communication.
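
To make the s-step idea concrete, here is a hedged sketch of the matrix powers kernel and the single block reduction that replaces s separate inner products (monomial basis for clarity; practical CA-KSMs use better-conditioned Newton or Chebyshev bases, as the abstract notes):

```python
import numpy as np
import scipy.sparse as sp

def s_step_basis(A, v, s):
    # matrix powers kernel: columns [v, A v, ..., A^s v]; in a CA-KSM this is
    # computed in one communication phase instead of s separate ones
    V = np.empty((A.shape[0], s + 1))
    V[:, 0] = v
    for j in range(s):
        V[:, j + 1] = A @ V[:, j]
    return V

n, s = 1000, 4
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")   # 1D Laplacian
v = np.random.default_rng(0).standard_normal(n)
V = s_step_basis(A, v / np.linalg.norm(v), s)
G = V.T @ V          # one block reduction supplies all inner products for s steps
# the monomial basis becomes ill-conditioned as s grows: the finite-precision
# issue the talk targets with residual replacement and better basis choices
print("cond(V) =", np.linalg.cond(V))
```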


Cindy Rubio-Gonzalez (UC Berkeley): Precimonious: Tuning Assistant for Floating-Point Precision
Given the variety of numerical errors that can occur, floating-point programs are difficult to write, test, and debug. One common practice employed by developers without an advanced background in numerical analysis is to use the highest available precision. While more robust, this can degrade program performance significantly. In this talk I will present Precimonious, our dynamic program analysis tool to assist developers in tuning the precision of floating-point programs. Precimonious performs a search on the types of the floating-point program variables, trying to lower their precision subject to accuracy constraints and performance goals. Our tool recommends a type instantiation that uses lower precision while producing an accurate enough answer without causing exceptions. We have evaluated Precimonious on several widely used functions from the GNU Scientific Library, the NAS Parallel Benchmarks, and other numerical programs. For most of the programs analyzed, Precimonious reduces precision, which results in performance improvements as high as 41%.
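
A toy sketch of the search idea follows (Precimonious itself operates on compiled programs; the greedy type-demotion loop and the kernel below are illustrative assumptions, not the tool's actual algorithm):

```python
import numpy as np

def kernel(types):
    # hypothetical numerical kernel with three tunable-precision variables
    t_total, t_term, t_scale = types
    total = t_total(0.0)
    scale = t_scale(1e-3)
    for k in range(1, 10001):
        term = t_term(1.0 / (k * k))
        total = t_total(total + scale * term)   # store back at total's precision
    return float(total)

reference = kernel([np.float64] * 3)
threshold = 1e-8                    # user-supplied accuracy constraint
types = [np.float64] * 3
for i in range(len(types)):         # greedy pass: try demoting one variable at a time
    trial = list(types)
    trial[i] = np.float32
    if abs(kernel(trial) - reference) <= threshold:
        types = trial               # keep the demotion: result still accurate enough
print("chosen types:", [t.__name__ for t in types])
```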


Christian Linder (Stanford University): New three dimensional finite elements to model solids at failure
New finite elements with embedded strong discontinuities to model failure in three-dimensional purely mechanical and electromechanically coupled materials will be shown. Following the strong discontinuity approach for plane problems, the boundary value problems are decomposed into continuous global and discontinuous local parts, where strong discontinuities in the displacement field and the electric potential are introduced. These are built into general three-dimensional brick finite elements through nine mechanical separation modes and three new electrical separation modes. All the local enhanced parameters related to those modes can be statically condensed out at the element level, yielding a computationally efficient framework to model failure in purely mechanical and electromechanically coupled materials. A marching-cubes-based crack propagation concept is used to obtain smooth failure surfaces in the three-dimensional problems of interest. Several representative numerical simulations are included and compared with experimental results of solids at failure.
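
The static condensation step is generic linear algebra; a small sketch, assuming a block partition into d global and e local (enhanced) degrees of freedom:

```python
# With d global dofs u and e local enhanced dofs a, the element system
#   [K_dd  K_de] [u]   [f]
#   [K_ed  K_ee] [a] = [0]
# condenses to (K_dd - K_de K_ee^{-1} K_ed) u = f, eliminating a per element.
import numpy as np

rng = np.random.default_rng(0)
d, e = 24, 12                                   # e.g., brick dofs and 9+3 separation modes
K = rng.standard_normal((d + e, d + e))
K = K @ K.T + (d + e) * np.eye(d + e)           # well-posed SPD stand-in matrix
f = rng.standard_normal(d)

K_dd, K_de = K[:d, :d], K[:d, d:]
K_ed, K_ee = K[d:, :d], K[d:, d:]
K_star = K_dd - K_de @ np.linalg.solve(K_ee, K_ed)   # condensed element matrix
u = np.linalg.solve(K_star, f)                       # only global dofs are assembled
a = -np.linalg.solve(K_ee, K_ed @ u)                 # local modes recovered afterwards

# check against the uncondensed solve
full = np.linalg.solve(K, np.concatenate([f, np.zeros(e)]))
print(np.allclose(full, np.concatenate([u, a])))
```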


Ali Mani (Stanford University): Complex effects in micro-scale and nano-scale electrokinetics: significance of computational modeling and overcoming its challenges
Electrokinetic phenomena, e.g. in electrochemical or microfluidic systems, are described by the Poisson-Nernst-Planck and Navier-Stokes equations. However, the mainstream models describing solutions to these PDEs are often based on the method of matched asymptotic expansions and on simplifying assumptions such as flow steadiness and thin charged boundary layers. In this presentation we will demonstrate the need for the development of specialized algorithms for simulation of electrokinetic phenomena, similar to the tools that have traditionally been used for simulation of turbulent flows. As a model problem, we consider ion transport in an aqueous electrolyte near an ion-selective membrane, with applications in electrochemistry and lab-on-a-chip systems. We will show that direct numerical simulation, without asymptotic simplifications, reveals chaotic dynamics with multi-scale flow features consistent with recent experimental observations. These calculations require resolving a wide range of spatio-temporal scales and often need massively parallel computational resources. We will discuss how development of high-fidelity tools can lead to fundamental understanding of complex effects in electrokinetic systems and facilitate their design and optimization.
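
As a hedged sketch of the governing equations, here is a dimensionless 1D Poisson-Nernst-Planck toy without flow (the simulations discussed in the talk couple to Navier-Stokes and resolve far more scales; all parameter values below are illustrative):

```python
import numpy as np

N, eps = 200, 1e-2                   # grid points, dimensionless Debye length
dx = 1.0 / (N - 1)
dt = 0.2 * dx**2                     # explicit diffusive stability limit
c_pos = np.ones(N)                   # cation concentration
c_neg = np.ones(N)                   # anion concentration

# 1D Laplacian with Dirichlet rows for the potential (applied voltage)
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2
A[0, :] = A[-1, :] = 0.0
A[0, 0] = A[-1, -1] = 1.0
Ainv = np.linalg.inv(A)              # factor once; A never changes

def face_flux(c, phi, z):
    # Nernst-Planck flux -(dc/dx + z c dphi/dx), evaluated at cell faces
    cf = 0.5 * (c[1:] + c[:-1])
    return -(np.diff(c) / dx + z * cf * np.diff(phi) / dx)

for step in range(1000):
    rhs = -(c_pos - c_neg) / (2.0 * eps**2)   # Poisson: phi'' = -(c+ - c-)/(2 eps^2)
    rhs[0], rhs[-1] = 1.0, 0.0                # unit applied potential difference
    phi = Ainv @ rhs
    for c, z in ((c_pos, 1.0), (c_neg, -1.0)):
        F = face_flux(c, phi, z)
        c[1:-1] -= dt * np.diff(F) / dx       # interior update; end cells act as reservoirs

print("space charge near the driven wall:", c_pos[1] - c_neg[1])
```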


Jeff Irion (UC Davis): Hierarchical Graph Laplacian Eigen Transforms
We describe a new transform that generates a dictionary of bases for handling data on a graph by combining recursive partitioning of the graph and the Laplacian eigenvectors of each subgraph. Similar to the wavelet packet and local cosine dictionaries for regularly sampled signals, this dictionary of bases on the graph allows one to select an orthonormal basis that is most suitable to one's task at hand using a best-basis type algorithm. We also describe a few related transforms, each of which may be useful in its own right.
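
A minimal sketch of the construction, assuming spectral bipartition via the sign of the Fiedler vector (one common partitioning choice; the talk's recursive partitioning may differ):

```python
import numpy as np

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

def hglet(W, nodes, dictionary):
    # recurse: store this subgraph's Laplacian eigenbasis, then bipartition
    vals, vecs = np.linalg.eigh(laplacian(W))
    dictionary.append((nodes, vecs))          # one orthonormal basis per subgraph
    if len(nodes) <= 2:
        return
    fiedler = vecs[:, 1]                      # eigenvector of 2nd-smallest eigenvalue
    part = fiedler >= 0                       # sign cut = spectral bipartition
    if part.all() or (~part).all():
        return                                # degenerate cut; stop recursing
    for mask in (part, ~part):
        hglet(W[np.ix_(mask, mask)], nodes[mask], dictionary)

# path graph on 8 nodes as a tiny example
n = 8
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
dictionary = []
hglet(W, np.arange(n), dictionary)
for nodes, basis in dictionary:
    print("subgraph", nodes, "-> basis of shape", basis.shape)
```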


Ding Lu (UC Davis): Numerical Solution of the Quadratic Eigenvalue Problem with Low-Rank Damping
A low-rank damping term is common in quadratic eigenvalue problems (QEPs) arising from real physical simulations. To exploit the low-rank property, we propose a Padé Approximate Linearization (PAL) technique. The PAL technique leads to a linear eigenvalue problem of dimension n + r*m, which is substantially smaller than the dimension 2n of the linear eigenvalue problem derived by a standard linearization scheme, where n is the dimension of the QEP, and r and m are the rank of the damping matrix and the order of the Padé approximation, respectively. In addition, we propose a scaling strategy to minimize the backward error of the PAL technique. Several numerical examples will be presented to show the efficiency of the new approach.
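
For background, the standard 2n linearization that the abstract compares against can be sketched as follows (the PAL reduction to dimension n + r*m itself is not reproduced here):

```python
# Background sketch: the standard (first companion) linearization turns the QEP
#   (lam^2 M + lam C + K) x = 0
# into a 2n-dimensional generalized eigenproblem A y = lam B y with
#   A = [[-C, -K], [I, 0]],  B = [[M, 0], [0, I]],  y = [lam*x; x].
# PAL instead exploits low-rank damping to reach dimension n + r*m.
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n, r = 50, 2
M = np.eye(n)
K = rng.standard_normal((n, n)); K = K @ K.T + n * np.eye(n)   # SPD stiffness
E = rng.standard_normal((n, r)); C = E @ E.T                   # rank-r damping

I, Z = np.eye(n), np.zeros((n, n))
A = np.block([[-C, -K], [I, Z]])
B = np.block([[M, Z], [Z, I]])
lam, Y = eig(A, B)                       # 2n eigenpairs of the linearization

x, l0 = Y[n:, 0], lam[0]                 # check the QEP residual for one pair
print(np.linalg.norm((l0**2 * M + l0 * C + K) @ x))
```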


Kevin Carlberg (Sandia): The GNAT method for nonlinear model reduction
Time-critical applications involving dynamical systems, such as control, fast-turnaround design, and uncertainty quantification, often demand the accuracy provided by large-scale computational models, but cannot afford their computational cost. To mitigate this bottleneck, researchers have developed model-reduction techniques that decrease the dimension of the dynamical system while preserving its key features. Such methods are effective when applied to specialized problems such as linear time-invariant systems (e.g., balanced truncation). However, model reduction for nonlinear dynamical systems has been primarily limited to methods based on the proper orthogonal decomposition (POD)-Galerkin approach, which lacks 'discrete optimality' and leads to unstable responses in many cases.
In this talk, I will present the Gauss-Newton with approximated tensors (GNAT) nonlinear model-reduction method. This method is discrete optimal, is equipped with an error bound, and leads to highly accurate responses for practical problems across a wide range of physics. I will also describe the 'sample mesh' concept, which enables a practical, distributed, computationally efficient implementation of GNAT in computational-mechanics codes. Finally, I will present results for the method applied to a validated CFD model (with over 17 million unknowns) of a compressible, turbulent flow problem. Results illustrate GNAT's favorable performance compared with other model-reduction techniques; it achieves speedups exceeding 350 with errors below 1%.
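
For context, a minimal sketch of the POD-Galerkin baseline mentioned above, on a linear toy problem (GNAT's discrete-optimal least-squares formulation and sample-mesh hyper-reduction are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, dt, steps = 400, 10, 1e-3, 500
A = -np.diag(np.arange(1.0, n + 1))           # stable full-order operator: dx/dt = A x
step_op = np.linalg.inv(np.eye(n) - dt * A)   # backward-Euler step (factored once)

x = rng.standard_normal(n)
snapshots = [x.copy()]
for _ in range(steps):                        # full-order model: n-dimensional steps
    x = step_op @ x
    snapshots.append(x.copy())
S = np.column_stack(snapshots)

Phi = np.linalg.svd(S, full_matrices=False)[0][:, :k]   # POD basis, k modes
Ar = Phi.T @ A @ Phi                                    # Galerkin projection
step_r = np.linalg.inv(np.eye(k) - dt * Ar)
xr = Phi.T @ snapshots[0]
for _ in range(steps):                        # reduced-order model: k-dimensional steps
    xr = step_r @ xr
err = np.linalg.norm(Phi @ xr - snapshots[-1]) / np.linalg.norm(snapshots[-1])
print("relative ROM error:", err)
```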


Khachik Sargsyan (Sandia): Bayesian compressive sensing and dimensionality reduction for high-dimensional models
In computationally intensive studies, such as model calibration and uncertainty quantification, surrogate models are usually employed instead of full physical models. However, surrogate construction for high-dimensional models is challenged in two major ways: a) obtaining a sufficient number of training model simulations becomes prohibitively expensive, and b) non-adaptive surrogate basis selection rules lead to excessively large basis sets. To alleviate these difficulties, select state-of-the-art tools are ported from statistical learning to build efficient sparse surrogate representations, with quantified uncertainty, for high-dimensional complex models. Specifically, Bayesian compressive sensing techniques are enhanced by iterative basis growth and weighted regularization. Application to an 80-dimensional climate land model shows promising results, leading to efficient global sensitivity analysis and dimensionality reduction.
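
A related baseline can be sketched with off-the-shelf tools (this uses scikit-learn's ARDRegression, a sparse Bayesian regression in the same family as Bayesian compressive sensing; it is not the authors' iterative basis-growth method, and the target function is invented):

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
d, n_train = 80, 300                  # 80 input dimensions, as in the land-model example
X = rng.uniform(-1, 1, (n_train, d))
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 5] ** 2 + 0.01 * rng.standard_normal(n_train)

def features(X):
    # candidate basis: constant, linear, and squared terms (161 functions)
    return np.hstack([np.ones((X.shape[0], 1)), X, X ** 2])

model = ARDRegression().fit(features(X), y)   # sparsity via automatic relevance determination
kept = np.flatnonzero(np.abs(model.coef_) > 1e-2)
print("candidate basis:", features(X).shape[1], "-> retained terms:", kept.size)
```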


Samuel Skillman (SLAC): Galaxy Clusters: Plasma Physics Laboratories and a Grand Challenge for Computational Astrophysics
Galaxy clusters are unique astrophysical laboratories that contain many thermal and non-thermal phenomena that demand a coordinated effort from theory, numerical simulations, and observations. After discussing a few of the major open questions in galaxy cluster formation and evolution, I'll present our recent work in attempting to model the non-thermal cosmic-ray population present in clusters. Cosmic shocks that propagate through the intracluster medium are thought to form through the process of structure formation, and may be capable of accelerating charged particles through diffusive shock acceleration. These relativistic particles decay and radiate through a variety of mechanisms, and have observational signatures in radio, hard X-ray, and gamma-ray wavelengths. Modeling these dynamics requires a combination of cosmological hydrodynamics coupled with a model to follow the momentum-space distribution of cosmic-ray electrons and protons. We have implemented such a model by combining Enzo (enzo-project.org), an Adaptive Mesh Refinement hydrodynamics + N-body particle-mesh gravity solver, with a numerical library for cosmic ray transport. I will end with a look towards the future of Enzo and yt (yt-project.org, an analysis and visualization tool).


Lixin Ge (SLAC): Shape Determination of Accelerator Cavities
Particle accelerators are complex scientific instruments with applications in energy, environment, industry, discovery science, and other areas. Superconducting technology is used as the accelerating scheme in many existing and future accelerators. In the design of superconducting cavities, the measured physical parameters very often differ from the ideal design values due to cavity deformation during the fabrication process. Using a shape determination algorithm that solves for the unknown deviations from the ideal cavity based on measured data, the actual cavity shape can be reconstructed. The objective function in the algorithm is the weighted summation of the least-squares differences between the numerically computed and experimentally measured cavity data. The constraint is the Maxwell eigenvalue problem. The inversion variables are a set of parameters defining a perturbation from the designed cavity. The algorithm has been implemented in the parallel finite-element electromagnetic code suite ACE3P. Applications of the method to actual accelerator cavities will be presented.
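
The overall inverse-problem structure can be sketched generically (a toy 1D eigenproblem stands in for the Maxwell eigenvalue constraint; nothing below reflects ACE3P's actual implementation):

```python
import numpy as np
from scipy.optimize import least_squares

n = 60                                          # interior grid points

def eigenfreqs(p, n_modes=4):
    # lowest eigenvalues of a 1D 'cavity' operator whose coefficient is
    # perturbed by shape parameters p; a stand-in for the Maxwell eigensolve
    x = np.linspace(0.0, 1.0, n)
    profile = 1.0 + p[0] * np.sin(np.pi * x) + p[1] * np.sin(2 * np.pi * x)
    h = 1.0 / (n + 1)
    main = 2.0 * profile / h**2
    off = -0.5 * (profile[:-1] + profile[1:]) / h**2
    L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.sort(np.linalg.eigvalsh(L))[:n_modes]

p_true = np.array([0.03, -0.02])                # unknown fabrication deformation
measured = eigenfreqs(p_true)                   # plays the role of measured data
weights = 1.0 / measured                        # relative-error weighting

res = least_squares(lambda p: weights * (eigenfreqs(p) - measured),
                    x0=np.zeros(2))             # start from the ideal design
print("recovered deviation parameters:", res.x)
```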