Updated configurations and new MPI libraries (24/6/2015):
In order to achieve good performance with multiple MPI libraries, the SLURM configuration has been changed so that nodes are now listed as having 8 sockets with 8 cores per socket. This means that if your job scripts contain:
#SBATCH --sockets-per-node 4
#SBATCH --cores-per-socket 16
They should be changed to:
#SBATCH --sockets-per-node 8
#SBATCH --cores-per-socket 8
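As an illustration, a full job script under the new geometry could look like the following sketch; the task count and the executable name are placeholders, so adjust them to your own job:

```shell
#!/bin/bash
# Sketch of a job script using the new 8x8 node geometry.
# --ntasks and the program name below are placeholders.
#SBATCH --sockets-per-node 8
#SBATCH --cores-per-socket 8
#SBATCH --ntasks 64

srun ./my_mpi_program
```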
If you need to take the distances between the individual NUMA nodes into account, you can query them with the hwloc library.
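For a quick look at the NUMA layout, hwloc's lstopo tool displays the full topology. The raw distance matrix can also be read straight from sysfs on Linux; a minimal sketch, assuming the standard sysfs layout:

```shell
# Print the NUMA distance matrix from sysfs (Linux only).
# Each line lists one node's distances to all nodes, in node order.
for node in /sys/devices/system/node/node[0-9]*; do
  [ -d "$node" ] || continue      # skip if no NUMA info is exposed
  printf '%s: %s\n' "$(basename "$node")" "$(cat "$node/distance")"
done
```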
As a bonus, the MVAPICH2 MPI library is now available in /pack/mvapich2. The currently installed version is compiled to use verbs, but I'm working on getting a PSM version running as well.
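A hypothetical usage sketch, assuming the install follows the usual bin/ layout under /pack/mvapich2 (check the actual directory on the system before copying this):

```shell
# Assumed layout: /pack/mvapich2/bin contains mpicc, mpirun, etc.
export PATH=/pack/mvapich2/bin:$PATH

# Compile and run an MPI program (file and task count are placeholders).
mpicc -O2 -o hello hello_mpi.c
srun ./hello        # inside a SLURM allocation, or: mpirun -np 64 ./hello
```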