Since I do a lot of development on my Apple laptop, it is necessary to get mpi4py working there as well. These directions are currently for macOS Mojave.
1. Download and install Xcode from the Mac App Store.
2. Install the command line tools with the command
$ xcode-select --install
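After the command line tools are installed, the remaining pieces are an MPI library and the mpi4py package itself. A minimal sketch of how I would finish the installation, assuming Homebrew provides Open MPI (adjust if you use MacPorts or a different MPI):
$ brew install open-mpi
$ pip install mpi4py
$ mpiexec -n 2 python -c "from mpi4py import MPI; print(MPI.COMM_WORLD.Get_rank(), 'of', MPI.COMM_WORLD.Get_size())"
The last command launches two processes which each print their rank, confirming that mpi4py found the Homebrew Open MPI.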
[1/30/2020 - I'm no longer at UF, so these instructions will be kept around for a couple of years, but will not be updated.]
These directions change quite often as the supported compiler toolchains on HiPerGator change. These directions were last updated on 19JAN2019.
Compilation should be done on a development node.
$ module load ufrc
$ srundev --time=04:00:00
It is necessary to maintain a consistent toolchain when using MPI: the compiler and MPI modules that mpi4py is built against must match the ones loaded at run time. For compatibility and consistency with VASP and LAMMPS, I currently prefer the following toolchain:
$ pip uninstall mpi4py
$ module load intel/2018.1.163
$ module load openmpi/3.1.2
$ pip install mpi4py
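If pip reuses a cached build from a previous toolchain, the reinstall above may not actually recompile mpi4py; adding --no-cache-dir forces a rebuild from source. Either way, a quick sanity check that mpi4py is linked against the loaded Open MPI (my own habit, not an official UFRC step):
$ pip install --no-cache-dir mpi4py
$ python -c "from mpi4py import MPI; print(MPI.Get_library_version())"
This should report the Open MPI 3.1.2 library version.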
For the batch submission script on HiPerGator, you need the following lines:
# load the necessary modules
module load intel/2018.1.163
module load openmpi/3.1.2
# changes to the Open MPI environment variables so it can run on HiPerGator
export OMPI_MCA_pml=^ucx
export OMPI_MCA_mpi_warn_on_fork=0
# need the pmi2 libraries because the pmix_v2 libraries don't work for some reason
srun --mpi=pmi2 python mc_iterative_sampler.py
where mc_iterative_sampler.py is a Python program which uses mpi4py. PYPOSPACK depends on forking processes to run LAMMPS, and Open MPI warns whenever a process forks; to suppress these warnings we set the environment variable OMPI_MCA_mpi_warn_on_fork to 0.
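For context, those lines sit inside an ordinary SLURM submission script. A minimal sketch of the surrounding header (the job name and resource numbers are placeholders, not values from my runs), followed by the module load, export, and srun lines above:
#!/bin/bash
#SBATCH --job-name=mc_sampler     # placeholder job name
#SBATCH --ntasks=8                # number of MPI ranks
#SBATCH --mem-per-cpu=2gb         # memory per rank
#SBATCH --time=04:00:00           # wall time limit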
You can check that mpi4py runs as follows (I have not tested whether mpiexec will work on development nodes, but srun clearly does not):
srun -p hpg2-dev -n 8 --mem-per-cpu 6gb -t 60:00 --pty -u bash -i
mpiexec -n 8 python demo/helloworld.py
or
srundev -n 8 --mem-per-cpu 6gb -t 60:00
module load intel/2018.1.163
module load openmpi/3.1.2
mpiexec -n 8 python demo/helloworld.py
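demo/helloworld.py ships with the mpi4py source distribution; if you do not have that checkout handy, an equivalent minimal script (my own stand-in, not the repository file verbatim) is:
# helloworld.py: each MPI rank reports its rank, the communicator size, and its host
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
name = MPI.Get_processor_name()
print("Hello, World! I am process %d of %d on %s." % (rank, size, name))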