⟨Math┃NCTS┃Phys⟩ HPC School
Parallel Finite Element Method using Supercomputer
Application/Registration
Please register and apply online before January 21, 2022: https://forms.gle/4oMZcu22T1CeF2j37
(The number of participants is limited. You will receive a notification if your application is accepted.)
Registered Participants: https://docs.google.com/spreadsheets/d/1apb6L-m2LA9KSvxuzqMwyWdCwbsIISzTAmMv37hlFPc/edit?usp=sharing
Announcements
TBA
Course Handouts
TBA
Overview
This 5-day intensive online class provides an introduction to large-scale scientific computing using the most advanced massively parallel supercomputers. Topics include:
Finite-Element Method (FEM)
Message Passing Interface (MPI)
Parallel FEM using MPI and OpenMP
Parallel Numerical Algorithms for Iterative Linear Solvers
Several sample programs will be provided, and participants can review the contents of the lectures through hands-on exercises using the Oakbridge-CX system at the University of Tokyo (https://www.cc.u-tokyo.ac.jp/en/supercomputer/obcx/service/).
The Finite-Element Method (FEM) is widely used for solving various types of real-world scientific and engineering problems, such as structural analysis, fluid dynamics, and electromagnetics. This course provides a brief introduction to FEM procedures for 1D/3D steady-state heat conduction problems with iterative linear solvers, and to parallel FEM. The lectures on parallel FEM focus on the design of data structures for distributed local mesh files, which is the key issue for efficient parallel FEM. An introduction to MPI (Message Passing Interface), the de facto standard for parallel programming, is also provided.
Solving large-scale linear systems with sparse coefficient matrices is the most expensive and important part of FEM and of other methods for scientific computing, such as the Finite-Difference Method (FDM) and the Finite-Volume Method (FVM). Families of Krylov subspace iterative solvers are now widely used for this task. In this class, details of the implementation of parallel Krylov iterative methods are presented along with parallel FEM.
Moreover, lectures on programming for multicore architectures will also be given, along with a brief introduction to OpenMP and the OpenMP/MPI hybrid parallel programming model.
Prerequisites
Experience with Unix/Linux (vi or emacs)
➢ List of Unix/Linux Commands (Wikipedia)
✧ https://en.wikipedia.org/wiki/List_of_Unix_commands
➢ Online Manual for Emacs (Screen Editor for Linux/Unix)
✧ https://www.gnu.org/software/emacs/manual/
Experience programming in Fortran or C/C++
Undergraduate-level mathematics and physics (e.g., linear algebra, calculus)
Fundamental numerical algorithms (Gaussian Elimination, LU Factorization, Jacobi/Gauss-Seidel/SOR Iterative Solvers, Conjugate Gradient Method (CG))
Experience with SSH public-key authentication (optional)
Participants are encouraged to read the following material and to understand the fundamental issues of the Method of Weighted Residuals (MWR) before this course.
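As background, the Galerkin form of the MWR for 1D steady-state heat conduction can be summarized as follows (our own notation, not taken from the course handouts; λ is the thermal conductivity and N_i are the shape functions):

```latex
% approximate solution and its residual
T_h(x) = \sum_j T_j N_j(x), \qquad
R(T_h) = \frac{d}{dx}\!\left(\lambda \frac{dT_h}{dx}\right) + f
% MWR: the weighted residual vanishes for each weighting function w_i
\int_0^L w_i \, R(T_h)\, dx = 0
% integration by parts with the Galerkin choice w_i = N_i gives the weak form
\int_0^L \lambda \frac{dN_i}{dx}\frac{dT_h}{dx}\, dx
  = \int_0^L N_i \, f \, dx
  + \left[\lambda \, N_i \frac{dT_h}{dx}\right]_0^L
```

Substituting the expansion of T_h into the weak form yields the element stiffness matrices and load vectors that the FEM lectures assemble into a global linear system.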
Preparation for PC
Windows
➢ Cygwin with gcc/gfortran and OpenSSH
➢ ParaView
MacOS, UNIX/Linux
➢ ParaView
Cygwin: https://www.cygwin.com/
ParaView: http://www.paraview.org
Schedule
February 10, 2022 (Thu 9:10-17:00)
09:10-10:00 Introduction (1/2)
10:10-11:00 Introduction (2/2)
11:10-12:00 FEM (1/6)
13:10-14:00 FEM (2/6)
14:10-15:00 FEM (3/6)
15:10-16:00 FEM (4/6)
16:10-17:00 Exercise (Optional)
February 11, 2022 (Fri 9:10-17:00)
09:10-10:00 FEM (5/6)
10:10-11:00 FEM (6/6)
11:10-12:00 Exercise
13:10-14:00 Parallel FEM
14:10-15:00 Login to OBCX
15:10-16:00 MPI (1/6)
16:10-17:00 Exercise (Optional)
February 12, 2022 (Sat 9:10-17:00)
09:10-10:00 MPI (2/6)
10:10-11:00 MPI (3/6)
11:10-12:00 Exercise
13:10-14:00 MPI Practice (1/3)
14:10-15:00 MPI (4/6)
15:10-16:00 MPI (5/6)
16:10-17:00 Exercise (Optional)
February 19, 2022 (Sat 9:10-17:00)
09:10-10:00 MPI (6/6)
10:10-11:00 Exercise
11:10-12:00 Exercise
13:10-14:00 MPI Practice (2/3)
14:10-15:00 MPI Practice (3/3)
15:10-16:00 Exercise
16:10-17:00 Parallel FEM (1/4)
February 20, 2022 (Sun 9:10-17:00)
09:10-10:00 Parallel FEM (2/4)
10:10-11:00 Parallel FEM (3/4)
11:10-12:00 Parallel FEM (4/4)
13:10-14:00 Exercise
14:10-15:00 OpenMP/MPI Hybrid (1/2)
15:10-16:00 OpenMP/MPI Hybrid (2/2)
16:10-17:00 Exercise (Optional)
Instructor
Materials
http://nkl.cc.u-tokyo.ac.jp/NTU2020/ (Short Course in February 2020 at NTU)
http://nkl.cc.u-tokyo.ac.jp/20w/ (Lectures at the University of Tokyo (on-line))
http://nkl.cc.u-tokyo.ac.jp/files/fem-f.tar (Sample 1D/3D Program in Fortran)
http://nkl.cc.u-tokyo.ac.jp/files/fem-c.tar (Sample 1D/3D Program in C)
Sponsors: MOST, NCTS-Math, NCTS-Phys
Co-Sponsors: Information Technology Center (Univ. Tokyo)
Organizers:
Chen, Pochung 陳柏中 (Department of Physics, NTHU)
Huang, Tsung-Ming 黃聰明 (Department of Mathematics, NTNU)
Kao, Ying-Jer 高英哲 (Department of Physics, NTU)
Wang, Weichung 王偉仲 (Institute of Applied Mathematical Sciences, NTU)
Contact: Ho, Renee 何婷芬 renee@phys.ncts.ntu.edu.tw