Message Passing Interface (MPI) is a standardized and portable message-passing interface, designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core set of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran.
There are several well-tested and efficient implementations of MPI, such as MPICH and Open MPI, many of which are open-source or in the public domain.
Note before you start: The MPI standard evolves continuously, with new versions released every few years. The examples and APIs used in some of the lectures below may be based on an older MPI standard rather than the most current one. In addition, some of them are tailored to the specific supercomputers and infrastructure that the authors/video creators focus on. Nevertheless, most MPI concepts remain the same, and the lectures below are a good starting point for someone new to MPI.
The MPI Forum is the standardization forum for the Message Passing Interface (MPI). The website contains the MPI standard documents and information about the activities of the MPI forum.
Software such as MPICH and Open MPI are implementations of the MPI standard.
Title, Presenters: This short video, titled "Introducing MPI", was created by the Software Carpentry group.
Scope: This video gives a short introduction to MPI. It was uploaded several years ago but is still relevant for getting a general idea of MPI.
The "Course Overview" talk gives a summary of the MPI topics covered in the playlist.
This segment deals with MPI concepts
This talk is an introduction to MPI
Creators: There is a series of lectures titled "Message Passing Programming with MPI", created by the ARCHER Service Group. ARCHER is the UK National Supercomputing Service.
Scope: The series spans various MPI concepts. Many aspects of these lectures focus on the Cray supercomputer supported by ARCHER. A playlist of the lectures can be found here. The lectures cover MPI concepts, an introduction to MPI, examples, point-to-point and collective communication, blocking and non-blocking communication, tags and communicators, derived data types, virtual topologies, and more.
NOTE: Please access ARCHER playlist for complete list.
Title, Presenters: This talk, titled "MPI for Scalable Computing", was presented by Dr. Balaji, Dr. Thakur and Dr. Gropp at "Argonne Training Program on Extreme-Scale Computing 2017".
Scope: This talk assumes listeners have some MPI understanding. It focuses more on MPI concepts (along with scalability and performance implications) than on coding. A good talk if you already have some MPI understanding.
Slides for this talk can be found here
Title, Presenters: This talk, titled "MPI for Scalable Computing", was presented by Dr. Gropp at "Argonne Training Program on Extreme-Scale Computing 2016".
Scope: This talk assumes listeners have some MPI understanding. It focuses more on MPI one-sided communication.
Slides for this talk can be found here
Title, Presenters: This talk, titled "MPI for Scalable Computing", was presented by Dr. Thakur, Dr. Gropp and Dr. Balaji at "2018 Argonne Training Program on Extreme-Scale Computing (ATPESC)".
Scope: This talk assumes listeners have some MPI understanding. It focuses more on MPI one-sided communication.
Slides for this talk can be found here
MPICH is one of the MPI implementations, developed at Argonne National Laboratory. This wiki page contains the MPICH Developer Documentation.
The wiki hosts most of the MPICH developer documentation, along with other related content.
Book Title: Using MPI: Portable Parallel Programming with the Message-Passing Interface (Scientific and Engineering Computation), Third Edition.
Authors: William Gropp (Author, Series Editor), Ewing Lusk (Author, Series Editor), Anthony Skjellum (Author)
Scope: This book is a guide to parallel programming with MPI, reflecting the latest specifications, with many detailed examples.