OpenMPI

The Open MPI Project [1] is an open source MPI-2 implementation that is developed and maintained by a consortium of academic, research, and industry partners.

Important Notes

Installed Versions

All the available versions of OpenMPI can be viewed by issuing the following command (the same approach works for other applications as well):

module avail openmpi

output:

----------------------------------------------- /usr/local/share/modulefiles ------------------------------------------------

openmpi/1.8.8    openmpi/2.0.1 (L,D)

In the output, "(L)" marks a currently loaded module and "(D)" marks the default version. The default version can be loaded as:

module load openmpi

(Note that OpenMPI is the default MPI implementation on the HPC cluster, so you do not need to load the default openmpi module explicitly.)

The other versions of OpenMPI can be loaded as:

module load openmpi/<version>
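For example, to switch to the older version shown in the listing above and confirm it is active (assuming openmpi/1.8.8 is still installed):

module load openmpi/1.8.8
module list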

Job Submission

Copy the example program "my_program.c" and a job file from /usr/local/doc/OPENMPI:

cp /usr/local/doc/OPENMPI/* .

Alternatively, create the files with the following contents:

C code - my_program.c:

//The parallel Hello World program
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* initialize the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */
    printf("Hello World from Node %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

Job script - mpijob.slurm:

#!/bin/bash
# Request two nodes and 4 tasks, with a one-hour time limit
#SBATCH -N 2 -n 4
#SBATCH --time=01:00:00

# Copy the source to the parallel scratch space and work there
cp my_program.c $PFSDIR
cd $PFSDIR

# Compile the C code
mpicc -o my_program my_program.c

# Execute the MPI job
srun ./my_program

# Copy the results back to the submission directory
cp -ru * $SLURM_SUBMIT_DIR

Interactive Job Submission

Request compute nodes (2 nodes and 4 tasks):

srun -N 2 -n 4 --pty bash
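Once the shell starts on a compute node, you can verify the allocation (a quick sanity check using the standard SLURM environment variables):

echo $SLURM_JOB_ID $SLURM_NNODES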

Compiling

Check for the correct mpicc with "which" command:

  which mpicc

You should get output similar to the following (the version and path may differ):

/usr/local/openmpi/1.6.3/bin/mpicc

Compile using "mpicc":

  mpicc -o my_program my_program.c

The example program is a C program. To compile C++ and Fortran programs, use mpicxx and mpif77/mpif90 respectively, as shown below.
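For example, the equivalent compile commands would look as follows (my_program.cpp and my_program.f90 are hypothetical source files used only for illustration):

  mpicxx -o my_program my_program.cpp
  mpif90 -o my_program my_program.f90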

Executing

Execute (the --overlap flag allows this job step to share the resources already held by the interactive shell):

srun --overlap ./my_program

In an interactive session, the output is printed directly to your terminal. The results will be similar to:

  Hello World from Node 3 of 4

  Hello World from Node 2 of 4

  Hello World from Node 0 of 4

  Hello World from Node 1 of 4

Batch Job Submission

Submit the job:

 sbatch mpijob.slurm

Find similar results in slurm-<jobid>.out, the default SLURM output file. A quick way to check on the job and view its output is shown below.
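A minimal sketch for monitoring the job and inspecting the output file once it completes (replace <jobid> with the ID reported by sbatch):

 squeue -u $USER
 cat slurm-<jobid>.out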

References:

[1] Open MPI home: http://www.open-mpi.org/

[2] Sample examples: see MPI Examples