#SBATCH -A mendoza_q
#SBATCH --exclude=amr-163,amr-178,amr-179
This should submit to our queue, exclude our buy-in nodes, and flip the job to run in the general partition, avoiding incurring CPU hours.
If you want an example of all the files needed to run a test calculation, you can download them from here.
Depending on the version of VASP you want to run, you might need to constrain which nodes your job runs on. The newer 2023 install works on all nodes except intel16, while the 2022 version only works on intel16.
For a job script example running VASP (V2023) in parallel, you can use the following script:
#!/bin/bash --login
#SBATCH --ntasks=32
#SBATCH -N 1-2
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=5G
#SBATCH --time=7-00:00:00
#SBATCH -A general # or mendoza_q
#SBATCH --job-name VASP_parallel
#SBATCH --constraint=[amd22|amd24|intel18|amd21|intel21]
export JOB=VASP_parallel
export DIR=$SLURM_SUBMIT_DIR
module purge
module load VASP/6.2.1-nvofbf-2023.07-OpenMP
mpirun -n 32 vasp_std > vasp_new.out
For a job script example running VASP (V2022) in parallel, you can use the following script:
#!/bin/bash --login
#SBATCH --ntasks=32
#SBATCH -N 1-2
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=5G
#SBATCH --time=7-00:00:00
#SBATCH -A general # or mendoza_q
#SBATCH --job-name VASP_parallel
#SBATCH --constraint=[intel16]
export JOB=VASP_parallel
export DIR=$SLURM_SUBMIT_DIR
module purge
module load VASP/6.2.1-nvofbf-2022.07-OpenMP
mpirun -n 32 vasp_std > vasp_new.out
For a job script example running VASP in parallel when you do not care whether V2023 or V2022 is used, you can use the following script:
#!/bin/bash --login
#SBATCH --ntasks=32
#SBATCH -N 1-2
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=5G
#SBATCH --time=7-00:00:00
#SBATCH -A general # or mendoza_q
#SBATCH --job-name VASP_parallel
#SBATCH --constraint=[intel16|amd22|amd24|intel18|amd21|intel21]
export JOB=VASP_parallel
export DIR=$SLURM_SUBMIT_DIR
module purge
if [ "$HPCC_CLUSTER_FLAVOR" = "intel16" ]; then
module load VASP/6.2.1-nvofbf-2022.07-OpenMP
else
module load VASP/6.2.1-nvofbf-2023.07-OpenMP
fi
mpirun -n 32 vasp_std
Besides the memory and the walltime request, you can also change the number of tasks and the number of nodes in the first and second #SBATCH lines, respectively. If you want to run on the mendoza partition, change the account in the #SBATCH -A line from general to mendoza_q.
Note: srun should work, but if you are having issues, try mpirun -n 32 vasp_std > vasp_new.out
Keep in mind that the -n 32 here should match --ntasks.
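To keep the two in sync automatically, you can let SLURM supply the task count instead of hard-coding it (a small sketch; the output file name is arbitrary):
# SLURM sets $SLURM_NTASKS to the value of --ntasks
mpirun -n $SLURM_NTASKS vasp_std > vasp.out
# or let srun pick up the allocation on its own:
# srun vasp_std > vasp.out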
#!/bin/bash --login
#SBATCH --ntasks=32
#SBATCH -N 1-2
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=5G
#SBATCH --time=7-00:00:00
#SBATCH -A general # or mendoza_q
#SBATCH --job-name IrMn3_test
#SBATCH --output=job.out
export JOB=IrMn3_test
export DIR=$SLURM_SUBMIT_DIR
export scratch=$SCRATCH/vasp
module purge
module load VASP/6.2.1-nvofbf-2022.07-OpenMP
echo "scratch job directory: "
echo $scratch/$JOB
mkdir -p $scratch/$JOB
cp $DIR/* $scratch/$JOB/
cd $scratch/$JOB
echo -e "slurm submission directory:\n$SLURM_SUBMIT_DIR" > SUBMITDIR
srun vasp_std > vasp.out
succeed=$?
cp OUTCAR ${DIR}/OUTCAR
cp CONTCAR ${DIR}/CONTCAR
cp vasp.out ${DIR}/vasp.out
if [ $succeed -eq 0 ]; then
cd ${DIR}
rm -rf $scratch/$JOB/WAVECAR
echo "$JOB finished successfully"
fi
To see all the versions installed for VASP in our cluster, you can use:
> module spider vasp
To use it, please use the following command to load the module:
module load VASP/6.2.1-nvofbf-2022.07-OpenMP
After that you can use vasp_gam, vasp_ncl, vasp_std (the default is vasp_std) to run VASP.
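For example, inside a job script, after loading the module (a sketch; srun is the launcher used in the example scripts above, and redirecting the output is optional):
srun vasp_std > vasp.out    # standard build (the default choice)
# srun vasp_gam > vasp.out  # Gamma-point-only build
# srun vasp_ncl > vasp.out  # non-collinear (spin-orbit) build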
The GPU version of VASP/6.2.1 is also installed
[From one of our technicians]
I also compiled VASP/6.2.1 with intel and CUDA compilers. You can also load the version with the command:
module load VASP/6.2.1-nvofbf-2022.07-CUDA-11.7.0
After that you can use vasp_gpu, vasp_gpu_ncl, vasp_gam, vasp_ncl, vasp_std (the default is vasp_std or vasp_gpu) to run VASP.
All of the compilations have the interface to Wannier90/3.1.0.
Our research space is:
/mnt/research/mendozacortes_group
See the wiki page for how to use the research space: https://wiki.hpcc.msu.edu/display/ITH/Research+Space
Update (2024/02/19): ICER's old wiki at wiki.hpcc.msu.edu has moved to https://docs.icer.msu.edu/, so the wiki.hpcc.msu.edu links on this page might need to be checked.
Our scratch space is
/mnt/scratch/jmendoza
See the wiki page for more information about the scratch space: https://wiki.hpcc.msu.edu/display/ITH/Scratch+File+Systems
For the HPCC/ICER, you can start with:
https://wiki.hpcc.msu.edu/display/TEAC/Introduction+to+HPCC
https://icer.msu.edu/sites/default/files/Introductory%20Supercomputing.pdf
Partition names:
https://wiki.hpcc.msu.edu/pages/viewpage.action?pageId=18972892
How to submit jobs (SLURM scheduler):
https://wiki.hpcc.msu.edu/display/ITH/Job+Management+by+SLURM
SLURM commands/dependencies
https://wiki.hpcc.msu.edu/display/TEAC/List+of+Job+Specifications
The following is a list of basic #SBATCH specifications. To see the complete options of SBATCH, please refer to the SLURM sbatch command page.
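A minimal set, drawn from the example job scripts on this page (adjust the values to your job; the job name is just a placeholder):
#SBATCH --job-name=my_job     # job name
#SBATCH -A general            # account/queue (or mendoza_q)
#SBATCH -N 1                  # number of nodes
#SBATCH --ntasks=32           # number of MPI tasks
#SBATCH --cpus-per-task=1     # CPU cores per task
#SBATCH --mem-per-cpu=5G      # memory per CPU core
#SBATCH --time=7-00:00:00     # walltime (days-hours:minutes:seconds)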
To visualize and modify your structures you can use VESTA: https://jp-minerals.org/vesta/en/download.html
For more instructions about VESTA you can use: https://www.youtube.com/channel/UCmOHJtv6B2IFqzGpJakANeg/videos
As explained below, VASP calculations require 4 input files: INCAR, KPOINTS, POSCAR, and POTCAR.
One needs to create a separate directory for each calculation and copy the input files to the directory. A brief introduction to the files:
(1) INCAR: All calculation settings, such as energy and force convergence, single-point energy calculation, optimization or MD, 2D vs 3D calculations, etc., are defined here. Most keywords are explained (commented). If you want to know about a particular keyword, search for it in the VASP wiki documentation; just type the keyword together with "VASP" in Google and it will take you to many online sources explaining it. VASP has been popular among researchers for many years, so it is easy to find explanations from online groups.
(2) POSCAR: We define atomic positions and lattice vectors here. Note that the atomic positions are in crystal (fractional) units. To convert them, I usually open the file in VESTA and export it again as a POSCAR file, but now in angstrom (Cartesian) units. I don't have a script to automate this, but one would be useful when planning to run several calculations in one go (a sketch of such a conversion is given after this list).
(3) KPOINTS: Typically I use gamma-point-centered high-symmetry points in the Brillouin zone (Monkhorst-Pack grids). You may change this according to the problem.
(4) POTCAR: Potential files are distributed along with the VASP software and license. I use GGA-PBE PAW pseudopotentials. This is saved as the default POTCAR file in the VASP pseudopotential directory on the HPCC. Once you open a POTCAR file, you can see the explanation in the first few lines.
If the system has more than one unique element, you need to copy the POTCAR file for each element and concatenate them all in the same order as they appear in the POSCAR file. To highlight the difference for a system with more than one unique element, I have attached a file named POSCAR_2.
Type module show <VASP module> to get the paths of the VASP installation and the pseudopotential (POTCAR) files.
I feel PBE PAW pseudopotentials should be good for most purposes. Test different convergence criteria for your system before starting the production run.
To concatenate POTCAR files, use the command cat ~/pot/Al/POTCAR ~/pot/C/POTCAR ~/pot/H/POTCAR > POTCAR (this also shows that you don't need to copy all POTCAR files into the working directory; instead, you can directly create the final POTCAR file in one go).
(5) I haven't exported the wave function files from VASP for any specific purpose, so I cannot help you with that for now. But you may check online for guidance.
VASP creates a lot of scratch files, so it can fill up your storage very quickly. If you need to run many calculations in one go, plan ahead to delete unnecessary files.
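As mentioned in point (2) above, there is no group script for the Direct-to-Cartesian conversion; below is a minimal sketch using awk. It assumes a VASP 5+ POSCAR with an element-symbol line and no "Selective dynamics" block, and the script name is hypothetical. Verify the result in VESTA before using it.
#!/bin/bash
# direct2cart.sh (hypothetical name): write a Cartesian copy of ./POSCAR
awk 'NR==2 {s=$1}
     NR>=3 && NR<=5 {a[NR-2,1]=$1; a[NR-2,2]=$2; a[NR-2,3]=$3}
     NR<=7 {print; next}
     NR==8 {print "Cartesian"; next}
     NF>=3 {for (j=1; j<=3; j++) x[j]=s*($1*a[1,j]+$2*a[2,j]+$3*a[3,j]);
            printf "%20.16f %20.16f %20.16f\n", x[1], x[2], x[3]; next}
     {print}' POSCAR > POSCAR_cartesian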
VASP needs four input files to run: INCAR, POSCAR, POTCAR, and KPOINTS.
If you want an example of all the files needed to run a test calculation, you can download them from here.
However, below is an example of VASP input files, just in case the link above does not work.
# INCAR file
SYSTEM = graphene
# Hamiltonian
# LDA calculation is the default one?
# GGA calculation
GGA = PE
NCORE = 8 # roughly the number of cores per node, for better calculation speed
# vasp 5.2.11 setting
LREAL = .FALSE.  ! accurate energy calculation
#PREC = Med      ! precision of calculation
NELM = 60        ! max number of SCF cycles
EDIFF = 1E-04    ! electronic stopping criterion
ISMEAR = -5      ! use 0 for big crystals; -5 (used here) for insulators and semiconductors
#SIGMA= 0.005 small for insulator
#ALGO = Very Fast
ISPIN = 2
# run from scratch
ISTART = 0 # starts from scratch,
ICHARG = 1
#INIWAV = 1
# run from checkpoint
#ISTART = 2
# no. of bands
#NBANDS = 944 for primitive cells !
#NUPDOWN = 0
# Electronic relaxation
# usual basis size
ENCUT = 600.00   ! plane-wave cut-off (eV)
ENAUG = 1200.00  ! charge augmentation cut-off (eV)
# Ionic relaxations
NSW = 0          ! max number of ionic steps (movements)
IBRION = 2
ISIF = 3
NELMIN =3
EDIFFG = -2E-03
# Other parameters
ISYM = 2         ! soft symmetrization
NWRITE = 2       ! most of the important information is written
SYMPREC = 0.001
# vdw interactions
IVDW= 12 # Van der Waals dispersion D3 with BJ damping. Available in VASP 5.3 or newer
Graphene POSCAR
1.00000000000000
2.4670806653457333 0.0000000000000000 0.0000000000000000
1.2335403326728667 2.1365545293748203 0.0000000000000000
0.0000000000000000 0.0000000000000000 14.0000000000000000
C
2
Direct
0.3333333333333357 0.3333333333333357 0.5000000000000000
0.0000000000000000 0.0000000000000000 0.5000000000000000
Automatic mesh
0 ! number of k-points = 0 ->automatic generation scheme
Gamma ! fully automatic: MP-set
15 15 1 !
We need to concatenate (cat) the pseudopotentials of all the atom types in the POSCAR into a single POTCAR file.
In the HPCC, the POTCARs will be in:
/mnt/research/mendozacortes_group/VASP6/potpaw_LDA.54.tar_0_2.gz or
/potpaw_PBE.54.tar_0_0.gz
You will need to extract these tarballs
(an already-extracted set may also be available under the install tree, e.g. /opt/software/VASP/6.2.1-intel-2020b/pot/PBE),
or, for example, if the potentials are under your home directory (~):
cat ~/pot/Al/POTCAR ~/pot/C/POTCAR ~/pot/H/POTCAR >POTCAR
or more info at:
https://www.vasp.at/wiki/index.php/POTCAR
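For example, a sketch of extracting the PBE set into your home directory and building a POTCAR from it. The target directory ~/pot, the assumption that the PBE tarball sits next to the LDA one, and the exact extracted layout are all assumptions; adjust to what you actually find.
mkdir -p ~/pot && cd ~/pot
tar -xzf /mnt/research/mendozacortes_group/VASP6/potpaw_PBE.54.tar_0_0.gz
cat Al/POTCAR C/POTCAR H/POTCAR > /path/to/your/calculation/POTCAR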
Please copy these scripts from /mnt/home/djokicma/bin
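For example, a minimal way to copy them (a sketch; the file names are assumed to match the aliases below):
mkdir -p ~/bin
cp /mnt/home/djokicma/bin/POTCAR_create ~/bin/
cp /mnt/home/djokicma/bin/POTCAR_create_from_POSCAR ~/bin/
cp /mnt/home/djokicma/bin/mkdir_VASP ~/bin/
chmod +x ~/bin/POTCAR_create ~/bin/POTCAR_create_from_POSCAR ~/bin/mkdir_VASP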
Once these are all set up, you can also easily access the scripts by creating aliases in your .bashrc:
# Scripts for VASP
alias POTCAR_create_from_POSCAR='~/bin/POTCAR_create_from_POSCAR'
alias mkdir_VASP='~/bin/mkdir_VASP'
alias POTCAR_create='~/bin/POTCAR_create'
This is a script which will automate the creation of the whole input folder, prompting the user for details of the calculation.
In order to use this script, please make sure that scripts 1, 2, and 3 are set up with appropriate paths, including a template folder, which can be copied from /mnt/home/djokicma/VASP/Standard, and a POTCAR folder from /mnt/home/djokicma/VASP/References/POTCARs
#!/bin/bash
any_int='^[0-9]+$'
any_int_range='^[0-9]+-?[0-9]*$'
FinalReminder="Reminders:\n"
UserInputHistory=""
BadInput="I do not know how to interpret this input. Script terminating."
StandardInputFileDirectory="/mnt/home/djokicma/VASP/Standard"
echo -n "Do you want to use your current directory? (1/0) "
read -n 1 UseThisDir ; echo ""
UserInputHistory="${UserInputHistory}${UseThisDir}"
if [ "$UseThisDir" == 1 ]; then
MyDir=$PWD
elif [ "$UseThisDir" == 0 ]; then
# The following lines have unexpected behavior:
# 1) If there are no subdirectories it produces a ls error message
# 2) If you press enter without specifying a new directory, the rest of the script doesn't work
# Update 2021/12/1: I believe I fixed the above unexpected behavior. Delete these comments if no issues appear by 2022/1/1
ChildDirectoryCount=$(find $PWD -mindepth 1 -maxdepth 1 -type d | wc -l)
if [ "$ChildDirectoryCount" -gt 0 ]; then
echo "For reference, the subdirectories in your current directory are: "
ls */ -d
else
echo "There are no subdirectories in your current directory."
fi
echo -n "Specify a new or existing directory: "
while [ -z "${MyDir}" ]; do
read -e MyDir
UserInputHistory="${UserInputHistory}${MyDir}\n"
done
if [ ! -d "$MyDir" ]; then
echo "$MyDir does not exist. Creating now."
mkdir $MyDir
fi
cd $MyDir
else
echo $BadInput
exit 1
fi
echo "The working directory is $MyDir"
parent=$(dirname $PWD); parent="${parent##*/}"
MyDirPretty="${MyDir##*/}"
ParentTitle="$(grep -s "JOB=" ../submission.slurm | cut -f 2 -d = )"
CurrentTitle="$(grep -s "JOB=" submission.slurm | cut -f 2 -d = )"
echo "Title suggestions: ${parent}_${MyDirPretty} ${ParentTitle}_${MyDirPretty} ${CurrentTitle}"
echo -n "What is the title of this calculation? "
read -e Title
UserInputHistory="${UserInputHistory}${Title}\n"
if [ "$Title" == 1 ]; then
Title="${parent}_${MyDirPretty}"
elif [ "$Title" == 2 ]; then
Title="${ParentTitle}_${MyDirPretty}"
elif [ "$Title" == 3 ]; then
Title="${CurrentTitle}"
fi
echo "The Title is $Title"
# POSCAR section
if [ -f "POSCAR" ]; then
echo -n "Do you want to use the POSCAR already in ${MyDir}? (1/0) "
read -n 1 UseThisPOSCAR ; echo ""
UserInputHistory="${UserInputHistory}${UseThisPOSCAR}"
if [ "$UseThisPOSCAR" == 1 ]; then
MyPOSCAR="POSCAR"
elif [ "$UseThisPOSCAR" != 0 ]; then
echo $BadInput
exit 1
fi
else
UseThisPOSCAR=0
fi
if [ "$UseThisPOSCAR" == 0 ]; then
echo -n "Specify a path to a POSCAR: "
read -e MyPOSCAR
UserInputHistory="${UserInputHistory}${MyPOSCAR}\n"
if [ ! -f "$MyPOSCAR" ]; then
echo "Cannot find a POSCAR at ${MyPOSCAR}. Script terminating."
exit 1
fi
cp $MyPOSCAR ./POSCAR
fi
sed -i "s/^M//g" POSCAR
sed -i "1 s/^.*$/${Title}/" POSCAR
# POTCAR section
~/bin/POTCAR_create_from_POSCAR
mv POTCAR_* POTCAR
# INCAR section
if [ -f "INCAR" ]; then
echo -n "Do you want to use the INCAR already in ${MyDir}? (1/0) "
read -n 1 UseThisINCAR ; echo ""
UserInputHistory="${UserInputHistory}${UseThisINCAR}"
if [ "$UseThisINCAR" == 1 ]; then
MyINCAR="INCAR"
elif [ "$UseThisINCAR" != 0 ]; then
echo $BadInput
exit 1
fi
else
UseThisINCAR=0
fi
if [ "$UseThisINCAR" == 0 ]; then
echo "Select a standard INCAR or non-standrd INCAR format."
echo -n "(4:EFIELD / 3:Bader / 2:single-point / 1:standard / 0:non-standard format) "
read -n 1 UseStnINCAR ; echo ""
UserInputHistory="${UserInputHistory}${UseStnINCAR}"
if (( "$UseStnINCAR" >= 1 && "$UseStnINCAR" <= 4 )); then
MyINCAR="${StandardInputFileDirectory}/INCAR_Standard"
elif [ "$UseStnINCAR" == 0 ]; then
echo -n "Specify a path to an INCAR: "
read -e MyINCAR
UserInputHistory="${UserInputHistory}${MyINCAR}\n"
if [ ! -f "$MyINCAR" ]; then
echo "$MyINCAR does not exist. Script terminating."
exit 1
fi
else
echo $BadInput
exit 1
fi
cp $MyINCAR ./INCAR
sed -i "s/^SYSTEM .*/SYSTEM = ${Title}/" INCAR
if [ "$UseStnINCAR" == 4 ]; then
echo -n "Provide the electric field along the slab axis in eV/Å: "
read EFIELD
UserInputHistory="${UserInputHistory}${EFIELD}\n"
sed -i "s/.*EFIELD_HERE.*/EFIELD = $EFIELD/" INCAR
sed -i "s/.*IDIPOL_HERE.*/IDIPOL = 3/" INCAR
sed -i "s/^LDIPOL.*/LDIPOL = \.TRUE\./" INCAR
sed -i "s/^LAECHG.*/LAECHG = \.TRUE\./" INCAR
FinalReminder="${FinalReminder}> You have applied a uniform electric field along lattice vector #3 (aka the z-axis).\nThis selection is only valid if this system is a slab normal to the z axis.\nIf this does apply, you need to change IDIPOL manually.\n"
elif [ "$UseStnINCAR" == 3 ]; then
sed -i "s/^LAECHG.*/LAECHG = \.TRUE\./" INCAR
elif [ "$UseStnINCAR" == 2 ]; then
sed -i "s/^NSW.*/NSW = 0/" INCAR
fi
if (( "$UseStnINCAR" >= 1 && "$UseStnINCAR" <= 4 )); then
echo -n "Keep the precision 'Normal' rather than increasing to 'Accurate'? (1/0) "
read -n 1 NormalAccuracy ; echo ""
UserInputHistory="${UserInputHistory}${NormalAccuracy}"
if [ "$NormalAccuracy" == 0 ]; then
sed -i "s/^PREC.*/PREC = Accurate/" INCAR
echo "Updating INCAR for higher precision."
elif [ "$NormalAccuracy" != 1 ]; then
echo $BadInput
exit 1
fi
echo -n "Keep the normal step size (POTIM=1) and do not reduce step size (to POTIM=0.2)? (1/0) "
read -n 1 NormalStepSize; echo ""
UserInputHistory="${UserInputHistory}${NormalStepSize}"
if [ "$NormalStepSize" == 0 ]; then
sed -i "s/^POTIM.*/POTIM = 0\.2/" INCAR
echo "Updating INCAR for smaller step size."
elif [ "$NormalStepSize" != 1 ]; then
echo $BadInput
exit 1
fi
fi
fi
sed -i "1 s/^.*$/${Title}/" INCAR
# KPOINTS section
if [ -f "KPOINTS" ]; then
echo -n "Do you want to use the KPOINTS already in ${MyDir}? (1/0) "
read -n 1 UseThisKPOINTS ; echo ""
UserInputHistory="${UserInputHistory}${UseThisKPOINTS}"
if [ "$UseThisKPOINTS" == 1 ]; then
MyKPOINTS="KPOINTS"
elif [ "$UseThisKPOINTS" != 0 ]; then
echo $BadInput
exit 1
fi
else
UseThisKPOINTS=0
fi
if [ "$UseThisKPOINTS" == 0 ]; then
echo -n "Do you want to use the standard KPOINTS format? (1/0) "
read -n 1 UseStnKPOINTS ; echo ""
UserInputHistory="${UserInputHistory}${UseStnKPOINTS}"
if [ "$UseStnKPOINTS" == 1 ]; then
MyKPOINTS="${StandardInputFileDirectory}/KPOINTS_Standard"
elif [ "$UseStnKPOINTS" == 0 ]; then
echo -n "Specify a path to a KPOINTS: "
read -e MyKPOINTS
UserInputHistory="${UserInputHistory}${MyKPOINTS}\n"
if [ ! -f "$MyKPOINTS" ]; then
echo "$MyKPOINTS does not exist. Script terminating."
exit 1
fi
else
echo $BadInput
exit 1
fi
cp $MyKPOINTS ./KPOINTS
fi
if grep -q "KA" KPOINTS; then
# The number of kpoints recommended will be chosen such that this number * the lattice constant ~= stand_prod (standard product)
stand_prod=39.0
read -r scale_factor ax ay az bx by bz cx cy cz <<< $(echo `sed '2,5!d' POSCAR | grep -Eo '[0-9\.]+'`)
a=`echo "$scale_factor * sqrt($ax^2+$ay^2+$az^2)" | bc`
b=`echo "$scale_factor * sqrt($bx^2+$by^2+$bz^2)" | bc`
c=`echo "$scale_factor * sqrt($cx^2+$cy^2+$cz^2)" | bc`
# unrounded calculation first
ka0=$(echo "scale=3; $stand_prod/$a" | bc)
kb0=$(echo "scale=3; $stand_prod/$b" | bc)
kc0=$(echo "scale=3; $stand_prod/$c" | bc)
# round with preference towards rounding up (to be safe)
roundingFactor=0.9 # exceed stand_prod in approx this fraction of the domain
ka=$(echo "scale=0; ($ka0+$roundingFactor)/1" | bc)
kb=$(echo "scale=0; ($kb0+$roundingFactor)/1" | bc)
kc=$(echo "scale=0; ($kc0+$roundingFactor)/1" | bc)
ka=$((ka>1 ? ka : 1))
kb=$((kb>1 ? kb : 1))
kc=$((kc>1 ? kc : 1))
echo "You must select number of k-points in each direction."
echo "The following are the recommended number of k-points based on the POSCAR and the goal of L_i*n_k_i = ${stand_prod}:"
echo "$ka $kb $kc"
echo -n "Should we use these recommendations? (1/0) "
read -n 1 UseRecKPOINTS ; echo ""
UserInputHistory="${UserInputHistory}${UseRecKPOINTS}"
if [ "$UseRecKPOINTS" == 0 ]; then
echo -n "Specify n_k_a: "
read ka
UserInputHistory="${UserInputHistory}${ka}\n"
echo -n "Specify n_k_b: "
read kb
UserInputHistory="${UserInputHistory}${kb}\n"
echo -n "Specify n_k_c: "
read kc
UserInputHistory="${UserInputHistory}${kc}\n"
if ! [[ $ka =~ $any_int && $kb =~ $any_int && $kc =~ $any_int ]]; then
echo "I do not recognize your input as numbers. Script terminating."
exit 1
fi
FinalReminder="${FinalReminder}> You manually selected your number(s) of k-points so you may need to edit your lattice vectors.\nE.g. you may want to lengthen c if you have a slab and you chose a smaller than recommended n_k_c.\n"
elif [ "$UseRecKPOINTS" != 1 ]; then
echo $BadInput
exit 1
fi
sed -i "s/KA/$ka/g" KPOINTS
sed -i "s/KB/$kb/g" KPOINTS
sed -i "s/KC/$kc/g" KPOINTS
fi
# submission.slurm section
if [ -f "submission.slurm" ]; then
echo -n "Do you want to use the submission.slurm already in ${MyDir}? (1/0) "
read -n 1 UseThisSS ; echo ""
UserInputHistory="${UserInputHistory}${UseThisSS}"
if [ "$UseThisSS" == 1 ]; then
MySS="submission.slurm"
elif [ "$UseThisSS" != 0 ]; then
echo $BadInput
exit 1
fi
else
UseThisSS=0
fi
if [ "$UseThisSS" == 0 ]; then
echo "Select a standard submission.slurm or non-standard submission.slurm format."
# echo -n "(3:engineering_long / 2:engineering_q / 1:mendoza_q / 0:non-standard format) "
echo -n "(2: general-long / 1:mendoza_q / 0:non-standard format) "
# echo -n "(1:general-long / 0:non-standard format) "
read -n 1 UseSSFormat ; echo ""
UserInputHistory="${UserInputHistory}${UseSSFormat}"
# if [ "$UseSSFormat" == 3 ]; then
# MySS="${StandardInputFileDirectory}/submission.slurm_engineering_long"
if [ "$UseSSFormat" == 2 ]; then
MySS="${StandardInputFileDirectory}/submission.slurm_general_long"
elif [ "$UseSSFormat" == 1 ]; then
MySS="${StandardInputFileDirectory}/submission.slurm_mendoza_q"
elif [ "$UseSSFormat" == 0 ]; then
echo -n "Specify a path to a submission.slurm: "
read -e MySS
UserInputHistory="${UserInputHistory}${MySS}\n"
if [ ! -f "$MySS" ]; then
echo "$MySS does not exist. Script terminating."
exit 1
fi
else
echo $BadInput
exit 1
fi
cp $MySS ./submission.slurm
fi
# The first line below should make the necessary replacement if using the standard submission file
# The following two will be necessary only if using a submission.slurm taken from elsewhere
sed -i "s/TITLE_HERE/${Title}/" submission.slurm
sed -i "s/^#SBATCH --job-name.*/#SBATCH --job-name ${Title}/" submission.slurm
sed -i "s/^export JOB=.*/export JOB=${Title}/" submission.slurm
echo "These are your submission.slurm settings:"
grep "#SBATCH -n " submission.slurm
grep "#SBATCH -N " submission.slurm
grep "#SBATCH --time" submission.slurm
echo -n "Should we use those settings? (1/0) "
read -n 1 UseSSsettings; echo ""
UserInputHistory="${UserInputHistory}${UseSSsettings}"
if [ "$UseSSsettings" == 0 ]; then
echo -n "Specify n_cpus: "
read n_cpus
UserInputHistory="${UserInputHistory}${n_cpus}\n"
if ! [[ $n_cpus =~ $any_int ]]; then
echo "I do not recognize your input as a number. Script terminating."
exit 1
else
read -r previous_n_cpus <<< $(echo `grep '#SBATCH -n' submission.slurm | grep -Eo '[0-9\.]+'`)
sed -i "s/-n ${previous_n_cpus}/-n ${n_cpus}/g" submission.slurm
fi
echo -n "Specify N_nodes: "
read N_nodes
UserInputHistory="${UserInputHistory}${N_nodes}\n"
if ! [[ $N_nodes =~ $any_int_range ]]; then
echo "Removing the node constraint."
sed -i "/#SBATCH -N/d" submission.slurm
else
read -r previous_N_nodes <<< $( grep '#SBATCH *-N' submission.slurm | grep -Eo '[0-9\.]+\-*[0-9\.]+' ) #echo `grep '#SBATCH -N' submission.slurm | grep -Eo '[0-9\.]+'`)
sed -i "s/-N ${previous_N_nodes}/-N ${N_nodes}/g" submission.slurm
fi
echo -n "Specify time (n_days-n_hours): "
read -n 1 n_days
echo -n "-"
read -n 2 n_hours
echo ":00:00"
UserInputHistory="${UserInputHistory}${n_days}${n_hours}"
if ! [[ $n_days =~ $any_int_range ]] || ! [[ $n_hours =~ $any_int_range ]] ; then
echo "I do not recognize your input as a number. Script terminating."
exit 1
else
read -r previous_time <<< $(echo "$(grep '#SBATCH --time' submission.slurm )" )
sed -i "s/^$previous_time/#SBATCH --time=${n_days}\-${n_hours}:00:00/g" submission.slurm
fi
fi
echo -e $FinalReminder
echo -n "Do you want to submit the job? (1/0) "
read -n 1 SubmitJob ; echo ""
UserInputHistory="${UserInputHistory}${SubmitJob}"
echo -e $UserInputHistory > mkdirVASPuserInput
if [ "$SubmitJob" == 1 ]; then
echo "Submitting $Title"
sbatch submission.slurm
elif [ "$SubmitJob" != 0 ]; then
echo $BadInput
exit 1
fi
This script will generate a POTCAR file if given a list of elements to use
#!/bin/bash
MyDirectory="/mnt/home/djokicma/VASP/References/POTCARs"
MyCommand="cat "
MyAtoms=""
for atom in "$@"
do
MyFile="$MyDirectory"/POTCAR_"$atom"
if [ ! -f "$MyFile" ]; then
echo cannot find $MyFile
exit 1
fi
MyAtoms+="$atom"
MyCommand+="$MyFile "
done
MyCommand+="> POTCAR_$MyAtoms"
echo "$MyCommand"
eval "$MyCommand"
echo "POTCAR_$MyAtoms successfully created"
This script reads a POSCAR to automatically determine which elements to feed into script 2
#!/bin/bash
if [ "$1" != "" ]; then
MyFile="${1}"
else
MyFile="./POSCAR"
fi
# The command `sed "6q;d" file` will print the 6th line of file
# The command `echo string | sed 's/ //g'` will return the string with no spaces
MyAtoms=`sed "6q;d" "$MyFile" | sed -z 's/\r//g;s/\n//g'`
MyAtomsNoSpace=`echo -n "$MyAtoms" | sed -z 's/ //g'`
echo Attempting to make POTCAR_"$MyAtomsNoSpace"
MyCommand="~/bin/POTCAR_create "
MyCommand+="${MyAtoms}"
eval "$MyCommand"
To perform band structure calculations, there are several ways:
1. to extract the eigenvalues to plotting software (such as gnuplot),
2. to run p4vasp, or
3. to interface with Wannier90 and produce wannier projection.
Preparing the band structure is simple. First, take your structure and run an SCF cycle. Then, take the CHGCAR (charge density file) and rerun the calculation as an NSCF run (by setting ICHARG to 11 in the INCAR). Also note that you must set ISMEAR = 0 if you have not already; otherwise you get a weird and unintuitive error. Applying symmetry may also cause issues, so to be safe, set ISYM = 0 for the NSCF cycle.
You will also need to set up a k-point path in the KPOINTS file in order to calculate the eigenvalues. Such a file will look as follows:
<Title and general info>
<number> ! number of k-points between each point specified
Line-mode !or 'line' tells VASP which setting to use
reciprocal !or 'cart'
<point 1>
.
.
.
<point n>
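For example, for a hexagonal cell such as the graphene example above, a line-mode KPOINTS file might look like this (a sketch; the Gamma-M-K-Gamma path and 40 points per segment are just common choices, adjust for your system):
Band structure path (Gamma-M-K-Gamma)
40                     ! number of k-points per line segment
Line-mode
reciprocal
  0.000000  0.000000  0.000000   ! Gamma
  0.500000  0.000000  0.000000   ! M

  0.500000  0.000000  0.000000   ! M
  0.333333  0.333333  0.000000   ! K

  0.333333  0.333333  0.000000   ! K
  0.000000  0.000000  0.000000   ! Gamma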
So far this prints the final minimized energy from OUTCAR. I will update it with more functionality in the near future
#!/bin/bash
# My first script
if [ "$1" != "" ]; then
MyFile=$1
else
MyFile="OUTCAR"
fi
OptimizeBool=$(grep -n "reached required accuracy - stopping structural energy minimisation" $MyFile)
if [ "$OptimizeBool" == "" ]; then
echo -e "Energy not minimized. \nFinal lines:"
tail $MyFile
else
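  # Note: "ENERGIE" below is the literal (misspelled) string VASP writes in OUTCAR, so do not "correct" it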
echo -e "$(grep -A 2 -n "FREE ENERGIE OF THE ION-ELECTRON SYSTEM (eV)" $MyFile | tail -3)"
fi
GitHub: https://github.com/henniggroup/VASPsol
Usage: https://github.com/henniggroup/VASPsol/blob/master/docs/USAGE.md
Example input: https://github.com/henniggroup/VASPsol/tree/master/examples/PbS_100/Solvation
2021 JACS Paper that used VASPsol: https://pubs.acs.org/doi/pdf/10.1021/jacs.1c07934
Supporting material with details about VASPsol: https://docs.google.com/viewer?url=https%3A%2F%2Fpubs.acs.org%2Fdoi%2Fsuppl%2F10.1021%2Fjacs.1c07934%2Fsuppl_file%2Fja1c07934_si_001.pdf
If you want to submit to general but not incur CPU hours (as happens when jobs don't go through the buy-in), you can try submitting with the following options in your submit script:
#SBATCH -A mendoza_q
#SBATCH --exclude=amr-163,amr-178,amr-179
This should submit to your queue, exclude your buy-in nodes, and flip the job to run in the general partition, avoiding incurring CPU hours.
Note: The top-of-the-hill structure from NEB calculations in VASP can be exported to CRYSTAL for TS refinement.
The full manual for NEB in VASP and TS search in Crystal is here:
An example of this approach of NEBinVASP vs TS in CRYSTAL can be found in the SI (page S24) of our Nature Materials paper:
Reaction-driven restructuring of defective PtSe2 into ultrastable catalyst for the oxygen reduction reaction (2024-10-07)
https://doi.org/10.1038/s41563-024-02020-w
Roughly speaking:
1. https://theory.cm.utexas.edu/vtsttools/scripts.html
2. https://theory.cm.utexas.edu/vtsttools/neb.html
3. I used the nebmake.pl script to generate the POSCAR files, including the TS, with the command:
./nebmake.pl (POSCAR1) (POSCAR2) (number of images, NI)
It generates the below folders with the guess geometry of the TS:
output: directories [00,NI+1] containing initial NEB POSCAR files
After getting the guess TS POSCAR file, I optimized it in VASP and CRYSTAL17 and then obtained the reaction barrier.
This is the way to run in our cluster:
The installation location is at
/mnt/research/mendozacortes_group/VASP-6.2.1-intel-2020b-VTST
You should be able to access the VASP executables at
/mnt/research/mendozacortes_group/VASP-6.2.1-intel-2020b-VTST/bin
The VTST scripts are stored at
/mnt/research/mendozacortes_group/VASP-6.2.1-intel-2020b-VTST/vtstscripts-1033
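For example, a sketch of generating the initial NEB images with the VTST scripts listed above (the endpoint directory names are hypothetical, and 5 images is just an example):
export PATH=$PATH:/mnt/research/mendozacortes_group/VASP-6.2.1-intel-2020b-VTST/vtstscripts-1033
nebmake.pl initial/CONTCAR final/CONTCAR 5   # creates directories 00 ... 06 with interpolated POSCARs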
Once you have the required input files, the next step is to prepare a submission script that tells the cluster or supercomputer how to allocate the computing resources for your job.
#!/bin/sh
#SBATCH -J pHGF_Li3Stage1
#SBATCH -N 1
#SBATCH -t 48:00:00
#SBATCH -n 16
#SBATCH -p mendoza_q
##SBATCH --output=job.out
export JOB=pHGF_Li3Stage1
### The lines below request the latest version of VASP (vasp.5.4.4). To use version 5.4.1, refer to the script given below
module purge
#module load gnu
#module load gnu-openmpi
module load vasp
which vasp_std
mpirun -np 16 vasp_std > vasp_new.out
#!/bin/sh
#SBATCH -J test
#SBATCH -N 1
#SBATCH -t 48:00:00
#SBATCH -n 16
#SBATCH -p mendoza_q
#SBATCH --output=job.out
export JOB=test
module purge
module load intel-openmpi/2.1.0
module load vasp/5.4.1
mpirun -np 16 /gpfs/research/software/hpc/intel/openmpi/vasp/vasp.5.4.1/vasp_std > vasp.out
#!/bin/bash
#SBATCH -J NEB_Libattery
#SBATCH -o NEB_Libattery
#SBATCH -N 1
#SBATCH --ntasks-per-node=16
#SBATCH -p mendoza_q
#SBATCH -t 24:00:00
#SBATCH --mail-type=ALL
export JOB=NEB_Libattery
export DIR=$SLURM_SUBMIT_DIR
export scratch=/panfs/storage.local/engineering/mendozagroup/scratch/vasp/${USER}
mkdir -p $scratch/$JOB
cp -r $DIR/* $scratch/$JOB/
cd $scratch/$JOB
module load intel-openmpi
mpirun -np 16 ${HOME}/bin/vasp-test >& vasp.out
#If you want to copy back to your home directory, you should only copy back CONTCAR
#preferably do this manually, or uncomment and adapt the line below (with several image directories you may need one cp per directory)
# cp [0-9][0-9]/CONTCAR ${DIR}/[0-9][0-9]/CONTCAR
#PBS -l nodes=1:ppn=8,walltime=24:00:00
#PBS -N vasp
#PBS -S /bin/tcsh
source /share/apps/intel/composer_xe_2013_sp1.3.174/composer_xe_2013_sp1.3.174/bin/compilervars.csh intel64
setenv OPAL_PREFIX /project/source/openmpi/build-1.8.3-ion
setenv PATH ${OPAL_PREFIX}/bin:${PATH}
setenv LD_LIBRARY_PATH ${OPAL_PREFIX}/lib:$LD_LIBRARY_PATH
set MPI=mpirun
set nprocs=`wc -l < $PBS_NODEFILE`
set VASP=/project/source/VASP/vasp.5.3.5/bin/vasp.ion
cd $PBS_O_WORKDIR
$MPI -np $nprocs -machinefile $PBS_NODEFILE $VASP > LOG
#PBS -l nodes=1:ppn=4,walltime=240:00:00
#PBS -N VASP
#PBS -S /bin/tcsh
source /ul/haixiao/Intel/composer_xe_2013/bin/compilervars.csh intel64
setenv OPAL_PREFIX /ul/haixiao/OpenMPI/build_ion
setenv PATH ${OPAL_PREFIX}/bin:${PATH}
setenv LD_LIBRARY_PATH ${OPAL_PREFIX}/lib:$LD_LIBRARY_PATH
set VASP=/ul/haixiao/VASP/bin/vasp.ion
set MPI=${OPAL_PREFIX}/bin/mpirun
cd $PBS_O_WORKDIR
echo $PBS_JOBID > jobid
cat $PBS_NODEFILE > nodelist
# Determine the number of processors we were given
set nprocs = `wc -l < $PBS_NODEFILE`
$MPI -np $nprocs -machinefile $PBS_NODEFILE $VASP > LOG
Save the submission script, e.g. as PBS_script.sh
To submit to the queue in calt, just do
> qsub PBS_script.sh
To check the submitted jobs use:
> qstat | grep jmendoza
#PBS -l nodes=node-4-1:ppn=8,walltime=12:00:00
#PBS -N VASP
#PBS -S /bin/tcsh
module load cuda/8.0
source /ul/haixiao/GCC/gcc-4.9.3-ion.csh
source /ul/haixiao/intel/bin/compilervars.csh intel64
set MPI=mpirun
set VASP=/net/hulk/home1/haixiao/VASP/vasp.5.4.4/vasp.5.4.4/bin/vasp_gpu
cd $PBS_O_WORKDIR
echo $PBS_JOBID > jobid
# Determine the number of processors we were given
set nprocs = `wc -l < $PBS_NODEFILE`
$MPI -np $nprocs -machinefile $PBS_NODEFILE $VASP > LOG
#PBS -l nodes=1:ppn=8:gpus=2,walltime=12:00:00
#PBS -N VASP
#PBS -S /bin/tcsh
module load cuda/8.0
source /ul/haixiao/GCC/gcc-4.9.3-ion.csh
source /ul/haixiao/intel/bin/compilervars.csh intel64
set MPI=mpirun
set VASP=/net/hulk/home1/haixiao/VASP/vasp.5.4.4/vasp.5.4.4/bin/vasp_gpu
cd $PBS_O_WORKDIR
echo $PBS_JOBID > jobid
# Determine the number of processors we were given
set nprocs = `wc -l < $PBS_NODEFILE`
$MPI -np $nprocs -machinefile $PBS_NODEFILE $VASP > LOG
Submission script for Cori, Edison (NERSC)
More info about VASP in NERSC: http://www.nersc.gov/users/software/applications/materials-science/vasp/
To check which partition is best for each job (usually regular accommodates most jobs): http://www.nersc.gov/users/computational-systems/edison/running-jobs/queues-and-policies/
Sample batch scripts to run VASP on Edison/Cori
#!/bin/bash -l
#SBATCH -J test_vasp
#SBATCH -A jcap
#SBATCH -p regular
#SBATCH -N 2
#SBATCH -t 23:00:00
module load vasp
srun -n 48 vasp_std
#!/bin/bash -l
#SBATCH -J test_vasp
#SBATCH -A jcap
#SBATCH -p regular
#SBATCH -N 2
#SBATCH -t 23:00:00
module load vasp
srun -n 64 vasp_std
The potentials are in the directory: /panfs/storage.local/engineering/mendozagroup/vasp/Potential
"I have installed VASP with new patches. I used intel openmpi compiler this time."
For your information, the paths are,
/panfs/storage.local/opt/hpc/intel/openmpi/vasp/gam/vasp
/panfs/storage.local/opt/hpc/intel/openmpi/vasp/ncl/vasp
/panfs/storage.local/opt/hpc/intel/openmpi/vasp/std/vasp
patch < patch.5.4.1.03082016
Following is the makefile.include I used.
# Precompiler options
CPP_OPTIONS= -DMPI -DHOST=\"IFC91_ompi\" -DIFC \
-DCACHE_SIZE=4000 -DPGF90 -Davoidalloc \
-DMPI_BLOCK=8000 -DscaLAPACK -Duse_collective \
-DnoAugXCmeta -Duse_bse_te \
-Duse_shmem -Dtbdyn
CPP = fpp -f_com=no -free -w0 --freestanding $*$(FUFFIX) $*$(SUFFIX) $(CPP_OPTIONS)
FC = mpif90
FCL = mpif90 -mkl
FREE = -free -names lowercase
FFLAGS = -assume byterecl
OFLAG = -O2
OFLAG_IN = $(OFLAG)
DEBUG = -O0
MKL_PATH = $(MKLROOT)/lib/intel64
BLAS =
LAPACK =
BLACS = -lmkl_blacs_openmpi_lp64
# BLACS = libmkl_blacs_openmpi_lp64.a
SCALAPACK = $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)
OBJECTS = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o \
$(MKLROOT)/interfaces/fftw3xf/libfftw3xf_intel.a
INCS =-I$(MKLROOT)/include/fftw
LLIBS = $(SCALAPACK) $(LAPACK) $(BLAS)
OBJECTS_O1 += fft3dfurth.o fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o
# For what used to be vasp.5.lib
CPP_LIB = $(CPP)
FC_LIB = $(FC)
CC_LIB = icc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB = $(FREE)
OBJECTS_LIB= linpack_double.o getshmem.o
# Normally no need to change this
SRCDIR = ../../src
BINDIR = ../../bin