Cluster Scripts

Here we list the default scripts for the computers that we use. You can copy and paste these onto the computing resource you are on, edit the sections in square brackets, and then submit with qsub < job.script
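
For reference, a typical submit-and-check session on a Torque/PBS machine looks roughly like the following (job.script is whatever filename you saved the script under; the exact qstat output varies from cluster to cluster):

# Submit the job script (qsub reads it from standard input here)
qsub < job.script

# List your jobs in the queue
qstat -u $USER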

MOUNTAINEER - Single job


#!/bin/bash
#PBS -q long
#PBS -l nodes=1:ppn=1
#PBS -l walltime=10:00:00
#PBS -m ae
#PBS -M [EMAIL@ADDRESS]
#PBS -N [NAME]

source ~/.bashrc

WORKDIR=[DIRECTORY]
cd $WORKDIR

# Link in the Fdata
ln -s [FDATA LOCATION] ./Fdata

# Call the executable
/data/lewis/fireball2009/bin/fireball.x > output.log


[EMAIL@ADDRESS] - replace with your email address (without the brackets).
[NAME] - the name of this calculation.
[DIRECTORY] - the directory you are running the calculation in.
[FDATA LOCATION] - the location of the Fdata you are using.

Optional changes:
#PBS -q long - change "long" if you need a different queue.
#PBS -l nodes=1:ppn=1 - the number of nodes and the number of processors per node your job requires.
#PBS -l walltime=10:00:00 - a best guess at the MAXIMUM time the job should take to run.
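
If you run many similar jobs, the square-bracket placeholders can also be filled in from the command line rather than by hand. A minimal sketch, assuming the template above is saved as job.template; the email address, job name, and paths below are only examples, so substitute your own:

# Fill in the placeholders and write a ready-to-submit script
sed -e 's|\[EMAIL@ADDRESS\]|you@example.edu|' \
    -e 's|\[NAME\]|test_run|' \
    -e 's|\[DIRECTORY\]|/data/you/test_run|' \
    -e 's|\[FDATA LOCATION\]|/data/you/Fdata|' \
    job.template > job.script

qsub < job.script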

MOUNTAINEER - Batch job

This example assumes you are running a Delafossite high-throughput calculation with the Primbas suite; however, it can be modified for any purpose:

#!/bin/bash
#PBS -q week
#PBS -l nodes=1:ppn=1
#PBS -l walltime=160:00:00
#PBS -t 1-[# OF CALCULATIONS]
#PBS -m ae
#PBS -M [EMAIL@ADDRESS]
#PBS -N [NAME].${PBS_ARRAYID}

source ~/.bashrc

WORKDIR=[DIRECTORY]
cd $WORKDIR
mkdir ${PBS_ARRAYID}
cd ${PBS_ARRAYID}
cp ../blank/* .

# Link in the Fdata
ln -s [FDATA LOCATION] ./Fdata

# Link to the executable
ln -s /data/lewis/fireball2009/bin/fireball.x ./fireball.x

# Call the Lvs and bas maker
/data/lewis/fireball2009/DelafossiteTools/Primbas_Suite/V2MakePrimBas/V2MakePrimBas.x ${PBS_ARRAYID}

# Call the executable
./fireball.x > runlog.log
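
Before launching the full array it can be worth smoke-testing a few tasks first. On Torque-style schedulers the array range given on the qsub command line normally takes precedence over the #PBS -t line in the script, but this is scheduler-dependent, so treat the following as a sketch:

# Run only the first three array tasks as a test
qsub -t 1-3 job.script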


As before:

[EMAIL@ADDRESS] - replace with your email address (without the brackets).
[NAME] - the name of this calculation.
[DIRECTORY] - the directory you are running the calculation in.
[# OF CALCULATIONS] - the number of calculations you are running.
[FDATA LOCATION] - the location of the Fdata you are using.

Optional changes:
#PBS -q week - change "week" if you need a different queue.
#PBS -l nodes=1:ppn=1 - the number of nodes and the number of processors per node each array task requires.
#PBS -l walltime=160:00:00 - a best guess at the MAXIMUM time each task should take to run.
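
Once the array has finished, each task will have written its output into its own numbered directory (created from ${PBS_ARRAYID} in the script above). A small sketch for spotting tasks whose run log is missing or empty; it assumes the runlog.log name used above:

#!/bin/bash
# Flag any numbered task directory without a non-empty runlog.log
cd [DIRECTORY]
for d in [0-9]*/ ; do
    if [ ! -s "${d}runlog.log" ]; then
        echo "check task directory: ${d}"
    fi
done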



TITAN

This is my current script for Titan; it calls the Pyroblast executable:

#!/bin/bash
#    Begin PBS directives
#PBS -A mat047
#PBS -N [NAME]
#PBS -j oe
#PBS -l walltime=1:00:00,nodes=2
#PBS -q debug
#    End PBS directives and begin shell commands

# set intel environment
module swap PrgEnv-pgi PrgEnv-intel
cd /lustre/widow1/scratch/$USER/[DIRECTORY]
date

aprun -n500 -S4 -d2 -m8G /lustre/widow1/proj/mat047/widow0-20130305/fireball2009/PyroBlast_Readonly/fireball_pyroblast.x


As before:

[NAME] - the name of this calculation.
[DIRECTORY] - directory you are running the code in.


Optional changes:
#PBS -q debug - change "debug" if you need a different queue.
#PBS -l walltime=1:00:00,nodes=2  - a best guess at the MAXIMUM time the job should take to run, followed by the number of nodes your job requires.

It's a good idea to read up on how the NUMA nodes work on Titan. The aprun line above launches 500 MPI processes (-n500), places 4 processes on each NUMA node (-S4), reserves a depth of 2 cores per process to spread them out (-d2), and asks for 8 GB of memory per process (-m8G).
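
As a rough sanity check on the layout, the node count implied by an aprun line can be estimated by counting cores only (the per-process memory from -m is a separate constraint). A minimal sketch, assuming Titan's 16-core compute nodes, with the numbers mirroring the line above; the nodes= request in the PBS header should be at least this large:

#!/bin/bash
# Estimate how many Titan nodes an aprun layout needs (cores only)
NPES=500           # total MPI processes (-n)
DEPTH=2            # cores reserved per process (-d)
CORES_PER_NODE=16  # assumption: Titan XK7 compute node

PES_PER_NODE=$(( CORES_PER_NODE / DEPTH ))
NODES=$(( (NPES + PES_PER_NODE - 1) / PES_PER_NODE ))
echo "need at least ${NODES} nodes for -n${NPES} -d${DEPTH}"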

An explanation of the aprun command-line options is here: