MPI (Message Passing Interface) [1] is a library specification for message passing, standardized by a committee of vendors, implementors, and users.
Copy the following code into a file named hello.c:
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(&argc, &argv);

    // Get the number of processes
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Get the rank of the process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Get the name of the processor
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    // Print off a hello world message
    printf("Hello, World! from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    // Finalize the MPI environment
    MPI_Finalize();
    return 0;
}
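The program above only prints a message; it does not actually pass any data between ranks. As a minimal sketch of point-to-point message passing (not part of the hello example; the file name send_recv.c is only a suggestion, and it is compiled and run the same way as hello.c below, using at least two MPI tasks), rank 0 sends a single integer to rank 1:
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Get the rank of the process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    int number;
    if (world_rank == 0) {
        // Rank 0 sends one integer (message tag 0) to rank 1
        number = 42;
        MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (world_rank == 1) {
        // Rank 1 receives the integer from rank 0
        MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", number);
    }

    // Finalize the MPI environment
    MPI_Finalize();
    return 0;
}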
Load the OpenMPI module:
module load OpenMPI/4.0.3-GCC-9.3.0
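To check that the module loaded correctly and that the MPI compiler wrapper is on your PATH (the exact module name and version may differ on your cluster), you can run:
module list
which mpicc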
Compile the code:
mpicc -o hello hello.c
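mpicc is a wrapper around the underlying GCC compiler that adds the required MPI include and library flags. With Open MPI you can inspect the full command line the wrapper would invoke (an Open MPI-specific option) with:
mpicc --showme -o hello hello.c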
Execute using 4 MPI tasks (e.g. -n 4). For the Markov cluster, you need to use a different partition and a class account (e.g. -A csds438 -p markov_cpu).
srun -n 4 hello
Output:
Hello, World! from processor classct015, rank 0 out of 4 processors
Hello, World! from processor classct015, rank 1 out of 4 processors
Hello, World! from processor classct015, rank 2 out of 4 processors
Hello, World! from processor classct015, rank 3 out of 4 processors
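Instead of running interactively with srun, the same job can be submitted as a batch script. Below is a minimal sketch (job name, time limit, and output file name are placeholders; on the Markov cluster also add the class account and partition shown above), saved for example as hello_mpi.slurm and submitted with sbatch hello_mpi.slurm:
#!/bin/bash
#SBATCH -J hello_mpi          # job name (placeholder)
#SBATCH -n 4                  # 4 MPI tasks
#SBATCH -t 00:05:00           # time limit (placeholder)
#SBATCH -o hello_mpi.%j.out   # output file, %j expands to the job ID
## On the Markov cluster, also add:
## #SBATCH -A csds438
## #SBATCH -p markov_cpu

module load OpenMPI/4.0.3-GCC-9.3.0
srun -n 4 ./hello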
References:
[1] MPI Standard: http://www.mcs.anl.gov/research/projects/mpi/