Multiple Jobs On One Compute Node

If you have multiple programs to run that do not each utilize all the CPUs on a compute node, you can run them in parallel within a single job so the job only requests the cores it actually needs. Refer to the following examples.

Also refer to the Job Arrays page for related examples.

Single Node

3 single-threaded programs, each running on one core

#!/bin/bash
#SBATCH --ntasks=3

./a.out &
./b.out &
./c.out &

wait

The --ntasks flag defines the maximum number of tasks per job step. The default is 1; if it is not increased to the number of tasks you intend to run concurrently, only a single task will run at a time.

The ampersand after each command runs the program in the background, so the script can continue to the next line without waiting for the program to terminate. The final wait command pauses the script until all background programs have finished; without it, the job would exit immediately and kill them.
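As a minimal, Slurm-independent sketch of this background-and-wait pattern (sleep stands in for the real programs, which are placeholders here):

```shell
#!/bin/bash
# Stand-ins for a.out, b.out, c.out: each "program" just sleeps briefly.
sleep 0.2 &
sleep 0.2 &
sleep 0.2 &

# jobs -p lists the PIDs of the background jobs still attached to this shell.
running=$(jobs -p | wc -l)

# wait blocks until every background job has finished; without it the
# script (and under Slurm, the job) would end while the programs still run.
wait
echo "waited for $running background programs"
```

Running this prints that three background programs were waited on, and the script takes roughly as long as the slowest of them rather than the sum of all three.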

Array Job

4 single-threaded programs, 10 sets of parameters, run as 2 array tasks of 20 cores each

#!/bin/bash
#SBATCH --ntasks=20
#SBATCH --array=1-2

if [ "$SLURM_ARRAY_TASK_ID" -eq 1 ]
then
   PARAMS=(1.0 1.1 1.2 1.3 1.4)
elif [ "$SLURM_ARRAY_TASK_ID" -eq 2 ]
then
   PARAMS=(2.0 2.1 2.2 2.3 2.4)
fi

for param in "${PARAMS[@]}"
do
   ./a.out "$param" &
   ./b.out "$param" &
   ./c.out "$param" &
   ./d.out "$param" &
done

wait

The --array flag defines how many array tasks the sbatch script will be submitted as, not how many nodes it runs on; each array task runs a separate copy of the script.

The environment variable SLURM_ARRAY_TASK_ID, which Slurm exports to each copy of the script, is used to differentiate between the tasks of an array job.
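One way to try the task-ID lookup outside Slurm is the small sketch below, which selects a parameter set by indexing an array with the task ID instead of an if/elif chain. The task ID is set manually here for testing; under sbatch, Slurm exports it automatically.

```shell
#!/bin/bash
# Simulate an array task: default to task 2 if not running under Slurm.
SLURM_ARRAY_TASK_ID=${SLURM_ARRAY_TASK_ID:-2}

# One string of parameters per array task; index is the task ID minus one.
ALL_PARAMS=("1.0 1.1 1.2 1.3 1.4" "2.0 2.1 2.2 2.3 2.4")

# Split the selected string into the PARAMS array used by the loop.
read -r -a PARAMS <<< "${ALL_PARAMS[$((SLURM_ARRAY_TASK_ID - 1))]}"

echo "task $SLURM_ARRAY_TASK_ID uses: ${PARAMS[*]}"
```

This indexed-lookup form scales to many array tasks without adding an elif branch per task, at the cost of keeping all parameter sets in one place.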

For more on Job Arrays, visit the Job Arrays page.