SLURM salloc

Submitting Interactive Jobs

Running Cluster Interactive Jobs Using salloc

You can run interactive processes on a compute node by first requesting an allocation with salloc.

From a Coeus login node, follow this shell example to allocate a GPU node for interactive use for one hour.

[user@login1 ~]$ salloc --time=1:00:00 --partition=gpu --nodes=1

[user@login1 ~]$ squeue

(shows which node and job ID your request was assigned; in this case gpu01 and 28628)

[user@login1 ~]$ ssh gpu01

(once logged into a compute node, you can run commands manually for the duration of your allocation)

[user@gpu01 ~]$ module load General/matlab2018a

[user@gpu01 ~]$ matlab

...

>> quit % or CTRL-D

[user@gpu01 ~]$ exit # or CTRL-D

[user@login1 ~]$ scancel 28628

(cancels the allocation)
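If the cluster is busy, the full squeue listing can be long. A quick way to show only your own jobs, together with the assigned node and remaining walltime for each (a minimal sketch using standard squeue format codes), is:

[user@login1 ~]$ squeue -u $USER -o "%.10i %.12P %.8T %.12N %.10L"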


If you want to specify the GPU model (the Coeus cluster has 10 GPU nodes: 5 with 4x NVIDIA A40 and 5 with 4x NVIDIA RTX 5000), use the "--gres=gpu:<gpu type>:<gpu count>" flag, where <gpu type> is either a40 or rtx and <gpu count> is a number between 1 and 4.

[user@login1 ~]$ salloc --partition=gpu --time=1:00:00 --gres=gpu:rtx:2

salloc: Granted job allocation 5552730

[user@login1 ~]$ squeue

(shows which node was assigned to job 5552730; in this case gpu05)

[user@login1 ~]$ ssh gpu05
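Once logged into the node, nvidia-smi can be used to confirm the GPU model and current utilization (this assumes the NVIDIA driver utilities are installed on the GPU nodes, which is normally the case):

[user@gpu05 ~]$ nvidia-smi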


Simple command-line Matlab example:

$ module load General/matlab2017a

$ salloc --time=1:00:00 --partition=himem --nodes=1 srun matlab

(you will now be at the Matlab CLI prompt)

>>

>> quit

salloc: Relinquishing job allocation 28634

(the allocation is relinquished automatically when you quit Matlab, because the srun command launched by salloc has finished)
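The same one-line pattern can run a Matlab script non-interactively; the script name myscript.m below is only a placeholder for illustration:

$ module load General/matlab2017a

$ salloc --time=1:00:00 --partition=himem --nodes=1 srun matlab -nodisplay -nosplash -r "run('myscript.m'); exit"

(salloc relinquishes the allocation as soon as the script finishes)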


Shorthand options example (these two commands are equivalent):

$ salloc -N 1 -p short srun --pty bash


$ salloc --nodes=1 --partition=short srun --pty bash
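The shorthand flags can be combined with the GPU options described above, for example to get an interactive shell on a GPU node with one A40 for one hour (a sketch; adjust the partition and GPU type to your needs):

$ salloc -N 1 -p gpu -t 1:00:00 --gres=gpu:a40:1 srun --pty bash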


Tmux session example (tmux keeps a long-running interactive program alive if your connection to the node is interrupted):

$ salloc --time=1:00:00 --nodes=1

(use squeue to find the node assigned to your allocation; in this case compute002)

$ ssh -X compute002

$ tmux new -s matlab

(in the tmux session)

$ module load General/matlab2017a

$ matlab &

(detach the tmux session with the key sequence below; see https://gist.github.com/MohamedAlaa/2961058 for a tmux cheat sheet)

CTRL-b d
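To return to the detached session later, ssh back to the same node and reattach with tmux (tmux ls lists the sessions available on that node):

$ ssh -X compute002

$ tmux ls

$ tmux attach -t matlab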

