To know different parallel programming techniques for writing efficient programs.
To understand the aspects of parallel programming design.
To learn the use of the Message Passing Interface (MPI) in developing parallel programs.
To apply parallel programming constructs and develop efficient applications.
Write parallel programs for a parallel system.
Design, formulate, and implement high-performance versions of standard single-threaded algorithms.
Design and deploy large-scale parallel programs on parallel systems using the message passing paradigm.
Apply parallel programming techniques to develop applications.
Hardware Evolution: Superscalar and Multi-core Architectures, Limitations of Memory, Dichotomy of Parallel Computing Platforms
Software Evolution: Concept of a Serial Program, Concept of Parallelism and Parallel Programs, Significance of HPC; Performance Metrics for Parallel Systems – speedup, execution time, total parallel overhead, response time, efficiency, and cost (see the formula sketch below).
Overview of OpenMP
Applications of HPC: representative applications and a case study
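For reference, the performance metrics listed under Software Evolution are conventionally defined as follows; this is a notational sketch assuming the standard textbook definitions, with T_S the serial execution time, T_P the parallel execution time, and p the number of processing elements.

```latex
S = \frac{T_S}{T_P}, \qquad
E = \frac{S}{p} = \frac{T_S}{p\,T_P}, \qquad
T_o = p\,T_P - T_S, \qquad
\mathrm{Cost} = p\,T_P
```

For example, a program that runs in 100 s serially and 25 s on 8 processing elements has speedup S = 4, efficiency E = 0.5, cost 200 processor-seconds, and total parallel overhead T_o = 100 s.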
Preliminaries, Decomposition Techniques, Characteristics of Tasks and Interactions, Mapping Techniques for Load Balancing (see the block/cyclic mapping sketch below), Parallel Computing Models
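The following is a minimal, hypothetical C sketch of the two basic static mapping techniques for load balancing: block (contiguous chunks) versus cyclic (round-robin) assignment of independent tasks to workers. The task count N and worker count P are illustrative choices, not part of the syllabus.

```c
/* Hypothetical sketch: block vs. cyclic mapping of N independent tasks onto P workers. */
#include <stdio.h>

#define N 10   /* number of independent tasks (illustrative) */
#define P 3    /* number of workers (illustrative)           */

int main(void) {
    int chunk = (N + P - 1) / P;                  /* ceil(N/P) tasks per block */
    for (int task = 0; task < N; task++) {
        int block_owner  = task / chunk;          /* contiguous chunks         */
        int cyclic_owner = task % P;              /* round-robin assignment    */
        printf("task %2d -> block: worker %d, cyclic: worker %d\n",
               task, block_owner, cyclic_owner);
    }
    return 0;
}
```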
Shared-Memory Programming:
The Shared-Memory Model, Parallel for Loops, private variables, the firstprivate and lastprivate clauses, the critical pragma, the parallel pragma, omp_get_num_threads(), the for and single pragmas, the nowait clause.
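A minimal C/OpenMP sketch touching the constructs listed above (parallel, for, single, private, firstprivate, lastprivate, critical, nowait, omp_get_num_threads); the variable names and values are illustrative assumptions.

```c
/* Illustrative OpenMP sketch; compile with: gcc -fopenmp omp_sketch.c */
#include <stdio.h>
#include <omp.h>

int main(void) {
    int n = 8, last = -1, sum = 0, seed = 10, local;

    #pragma omp parallel
    {
        #pragma omp single
        printf("running with %d threads\n", omp_get_num_threads());

        /* private: per-thread copy of 'local'; firstprivate: 'seed' copied in;
         * lastprivate: 'last' copied out from the sequentially last iteration;
         * nowait: no barrier at the end of the loop. */
        #pragma omp for private(local) firstprivate(seed) lastprivate(last) nowait
        for (int i = 0; i < n; i++) {
            local = seed + i;
            last  = local;
            #pragma omp critical
            sum += local;              /* shared accumulator guarded by critical */
        }
    }
    printf("sum = %d, last = %d\n", sum, last);
    return 0;
}
```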
Message-Passing Programming:
The effect of Granularity on Performance, Scalability of Parallel Systems
The MPI Model, the MPI Interface: MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Finalize; Compiling and Running MPI Programs, the Machine File Concept, MPI_Reduce; Benchmarking Parallel Performance with MPI_Wtime, MPI_Wtick, and MPI_Barrier
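A minimal C/MPI sketch touching the calls listed above (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Reduce, MPI_Barrier, MPI_Wtime, MPI_Wtick, MPI_Finalize); the reduction payload and process count are illustrative assumptions.

```c
/* Illustrative MPI sketch.
 * Compile: mpicc mpi_sketch.c -o mpi_sketch
 * Run:     mpirun -np 4 ./mpi_sketch   (optionally with a machine/host file) */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, global_sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Barrier(MPI_COMM_WORLD);          /* synchronize before timing */
    double t0 = MPI_Wtime();              /* wall-clock timestamp      */

    int local = rank + 1;                 /* each process contributes rank+1 */
    MPI_Reduce(&local, &global_sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    double elapsed = MPI_Wtime() - t0;
    if (rank == 0)
        printf("sum over %d processes = %d (%.6f s, timer resolution %.2e s)\n",
               size, global_sum, elapsed, MPI_Wtick());

    MPI_Finalize();
    return 0;
}
```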
Applications of Parallel Programming:
Types of Software Profilers, Gprof, Optimizing Compilers, Optimizing for Specific Processors.
CUDA; use of parallel programs in various fields such as data analytics, network computing, etc.