Some deployments of Grid Engine (GE) use the `#$` character sequence at the beginning of a line in the job script to indicate an additional option for submitting the job (the same options that can be passed to `qsub` on the command line).
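For example, the first few lines of a job script typically carry such embedded options. The directives below are only an illustrative sketch (the job name is hypothetical), not taken from the script later on this page:

```
#!/bin/bash
#$ -N water-opt      # job name (hypothetical)
#$ -cwd              # run the job in the directory it was submitted from
#$ -j y              # merge standard error into standard output
```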
By default, the standard output and standard error of a job are written to files named `<job-name>.o<job-id>` and `<job-name>.e<job-id>`, respectively. Sun Grid Engine Queue Configuration, `qconf`, allows the administrator to add, delete, and modify the Grid Engine configuration.
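`qconf` also offers read-only queries that a regular user can run to inspect the current setup. A minimal sketch, assuming a standard SGE installation (the exact set of options depends on the Grid Engine version):

```
$ qconf -sql    # show the list of all cluster queues
$ qconf -sel    # show the list of all execution hosts
```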
Show all parallel environments:
```
$ qconf -spl
impi16ppn
impi28ppn
mpi
mpi-rr
mpi10ppn
mpi12ppn
mpi14ppn
mpi16ppn
mpi18ppn
mpi20ppn
mpi22ppn
mpi24ppn
mpi26ppn
mpi28ppn
mpi2ppn
mpi4ppn
mpi8ppn
mpich
mpifill
mpirr
mvapich
orte
smp
```

Show details of a parallel environment, for example smp:
```
$ qconf -sp smp
pe_name            smp
slots              496
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $pe_slots
control_slaves     FALSE
job_is_first_task  TRUE
urgency_slots      min
accounting_summary TRUE
```

The following example SGE script submits an ORCA calculation that runs in a node-local scratch directory and writes its output back to the submission directory:

```
#!/bin/bash
#$ -S /bin/sh
#$ -cwd
#$ -j y
#$ -m be
#$ -M rangsiman1993@gmail.com
#$ -pe mpifill 28
#$ -l h_vmem=2G
#$ -l hostname=compute-0-0
#$ -V
#$ -o orca-log
#$ -e orca-error

export job="water-dlpno-ccsd-t"

module purge
module load gcc-7.2.0

# Setting OpenMPI
export PATH="/home/rangsiman/.openmpi/bin/":$PATH
export LD_LIBRARY_PATH="/home/rangsiman/.openmpi/lib/":$LD_LIBRARY_PATH
export OMP_NUM_THREADS=1

# Setting ORCA directory path
orcadir="/home/rangsiman/orca_4_1_0_linux_x86-64_openmpi313"
export PATH="$orcadir":$PATH
#ORCA=`which orca`

# Setting communication protocol
export RSH_COMMAND="/usr/bin/ssh -x"

# Creating local scratch folder for the user on the computing node.
# /lustre/$USER/scratch directory must exist.
if [ ! -d /lustre/$USER/scratch ]
then
  mkdir -p /lustre/$USER/scratch
fi
tdir=$(mktemp -d /lustre/$USER/scratch/orcajob__$JOB_ID-XXXX)

# Copy only the necessary stuff in submit directory to scratch directory.
# Add more here if needed.
cp $SGE_O_WORKDIR/${job}.inp $tdir/
cp $SGE_O_WORKDIR/*.gbw $tdir/
cp $SGE_O_WORKDIR/*.xyz $tdir/

# Creating nodefile in scratch
cat $PE_HOSTFILE > $tdir/${job}.nodes

# cd to scratch
cd $tdir

# Copy job and node info to beginning of output file
echo "Job execution start: $(date)" >> $SGE_O_WORKDIR/${job}.out
echo "Shared library path: $LD_LIBRARY_PATH" >> $SGE_O_WORKDIR/${job}.out
echo "SGE Job ID is : ${JOB_ID}" >> $SGE_O_WORKDIR/${job}.out
echo "SGE Job name is : ${JOB_NAME}" >> $SGE_O_WORKDIR/${job}.out
echo "" >> $SGE_O_WORKDIR/${job}.out
cat $PE_HOSTFILE >> $SGE_O_WORKDIR/${job}.out

# Start ORCA job. ORCA is started using the full pathname (necessary for parallel execution).
# Output file is written directly to the submit directory on the front node.
$orcadir/orca $tdir/${job}.inp >> $SGE_O_WORKDIR/${job}.out

# ORCA has finished here. Now copy important stuff back (xyz files, GBW files, etc.).
# Add more here if needed.
cp $tdir/*.gbw $SGE_O_WORKDIR
cp $tdir/*.xyz $SGE_O_WORKDIR
```
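Note that the script copies the important results back but leaves the temporary directory under /lustre/$USER/scratch on the node. If your cluster does not purge old scratch directories automatically, a cleanup step can be appended at the end of the script; this is an optional sketch, not part of the script above:

```
# Optional cleanup: remove the node-local scratch directory once the results
# have been copied back to the submission directory.
cd $SGE_O_WORKDIR
rm -rf $tdir
```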
To submit the job, simply use the `qsub` command:

```
$ qsub your-sge-script.sh
```

Rangsiman Ketkaew