There might be issues on some of the developer nodes, where the Pcrystal executable is not found. It is possible to fix these issues by hard-coding the executable path in your submission script.
This issue will result in error messages that look like this:
HYD_spawn (../../../../../src/pm/i_hydra/libhydra/spawn/intel/hydra_spawn.c:151): execvp error on file Pcrystal (No such file or directory)
It can be fixed by replacing the line mpirun -n $SLURM_NTASKS Pcrystal 2>&1 >& $DIR/${JOB}.out in your submission script with the following:
mpirun -n $SLURM_NTASKS /opt/software-current/2023.06/x86_64/intel/skylake_avx512/software/CRYSTAL/23-intel-2023a/bin/Pcrystal 2>&1 >& $DIR/${JOB}.out
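If you want to double-check the path before hard-coding it, the following quick check (run interactively on a node where the module works, not inside the submission script) should print the full location of the executable:
module load CRYSTAL/23-intel-2023a
which Pcrystal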
Usage: Your input file should be called <filename>.d12 and be in the same directory as the submission script. The script is submitted by running the command sbatch <submission_script_name>
Tips:
The main parameters you will be changing are the job name (note that it is specified twice in the script), run time (currently set to 30 minutes), nodes/cores (currently 2 nodes, 8 cores per node), memory per cpu, and account (general or mendoza_q). These will depend on the type of job you are running and the amount of resources you would like to allocate.
There is one #SBATCH line commented out; uncomment it to submit to the scavenger queue instead of general or mendoza_q.
There are two cp lines commented out; these might come in handy to restart calculations.
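As a minimal sketch of the expected setup (file and script names here are only illustrative), the input and the submission script sit in the same directory and the job is submitted with sbatch:
# Directory contents before submitting:
#   mof_5_mpirun.d12    <- CRYSTAL input; the base name must match JOB in the script
#   submit.slurm        <- the submission script shown below
sbatch submit.slurm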
Submission script for d12s (optimizations, frequency calculations, single point calculations, etc.)
#!/bin/bash
#SBATCH -J mof_5_mpirun
#SBATCH -o mof_5_mpirun-%J.o
#SBATCH --ntasks=4
#SBATCH -N 2
#SBATCH -t 0-04:00:00
#SBATCH --mem-per-cpu=3G
module purge
module load CRYSTAL/23-intel-2023a
export JOB=mof_5_mpirun
export DIR=$SLURM_SUBMIT_DIR
export scratch=/mnt/gs21/scratch/$USER/crys23
rm -fr $scratch/$JOB #NOTE: this line erases any old scratch directories with the same name
mkdir $scratch/$JOB/
cp $DIR/$JOB.d12 $scratch/$JOB/INPUT   # Pcrystal reads its input from a file named INPUT
cd $scratch/$JOB
mpirun -n $SLURM_NTASKS Pcrystal 2>&1 >& $DIR/${JOB}.out
cp fort.9 ${DIR}/${JOB}.f9   # copy the wavefunction back; it is needed later for properties (d3) runs
To submit a job you have to be on a development node (e.g. dev-intel14-k20, dev-intel16)
>> ssh dev-intel16
Once you are on a development node, this command submits the .slurm file you just created
>> sbatch submit.slurm
These commands show job information and resource usage.
>> scontrol show job $SLURM_JOB_ID
>> js -j $SLURM_JOB_ID
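While the job runs (or after it finishes), these standard commands are also handy; the output file name below is illustrative and matches the JOB name used in the script above:
>> squeue -u $USER
>> tail -f mof_5_mpirun.out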
Besides the memory and the wall time request, you can also change the number of tasks and number of nodes in the first and second #SBATCH lines, respectively. If you want to run on the mendoza partition, change general to mendoza_q. Make sure to change the job name according to your input file, i.e. replace both instances of "CRYSTAL_parallel" with the name of the .d12 file you want to run.
E.g. if you use the example below for graphene, you would substitute "CRYSTAL_parallel" with "graphene."
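One quick way to do the renaming in a single step, assuming the job name in your script is "CRYSTAL_parallel", your input is graphene.d12, and the submission script is the submit.slurm used above:
>> sed -i 's/CRYSTAL_parallel/graphene/g' submit.slurm
>> sbatch submit.slurm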
Submission script for d3s (properties calculations: BANDS, DOS, etc.)
#!/bin/bash
#SBATCH -J mof_5_mpirun_DOSS
#SBATCH -o mof_5_mpirun_DOSS-%J.o
#SBATCH --ntasks=2
#SBATCH -N 1
#SBATCH -t 0-01:00:00
#SBATCH --mem-per-cpu=3G
module purge
module load CRYSTAL/23-intel-2023a
export JOB=mof_5_mpirun_DOSS
export DIR=$SLURM_SUBMIT_DIR
export scratch=/mnt/gs21/scratch/$USER/crys23
rm -fr $scratch/$JOB #NOTE: this line erases any old scratch directories with the same name
mkdir $scratch/$JOB/
cp $DIR/$JOB.d3 $scratch/$JOB/INPUT
cp $DIR/$JOB.f9 $scratch/$JOB/fort.9   # wavefunction from a previous Pcrystal run, read by Pproperties as fort.9
cd $scratch/$JOB
mpirun -n $SLURM_NTASKS Pproperties 2>&1 >& $DIR/${JOB}.out
cp BAND.DAT ${DIR}/${JOB}.BAND.dat
cp DOSS.DAT ${DIR}/${JOB}.DOSS.dat
cp POTC.DAT ${DIR}/${JOB}.POTC.dat
cp fort.25 ${DIR}/${JOB}.f25
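Note that this script expects <d3 job name>.f9 next to it, while the d12 script above saves the wavefunction as <d12 job name>.f9, so with the example names used here you would copy it over before submitting (the submission script name below is illustrative):
cp mof_5_mpirun.f9 mof_5_mpirun_DOSS.f9   # reuse the wavefunction from the d12 run
sbatch submit_doss.slurm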
Note: The top-of-the-hill structure from NEB calculations in VASP can be exported to CRYSTAL for TS refinement.
The documentation for NEB in VASP (VTST) and for the TS search in CRYSTAL is referenced below.
An example of this NEB-in-VASP vs. TS-in-CRYSTAL approach can be found in the SI (page S24) of our Nature Materials paper:
Reaction-driven restructuring of defective PtSe2 into ultrastable catalyst for the oxygen reduction reaction (2024-10-07)
https://doi.org/10.1038/s41563-024-02020-w
Roughly speaking:
1. https://theory.cm.utexas.edu/vtsttools/scripts.html
2. https://theory.cm.utexas.edu/vtsttools/neb.html
3. I used the nebmake.pl script to generate the POSCAR files, including the TS, with the command:
./nebmake.pl (POSCAR1) (POSCAR2) (number of images, NI)
It generates the folders below, containing the initial guess geometries along the path (including the TS guess):
output: directories [00,NI+1] containing the initial NEB POSCAR files
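For instance, a sketch of step 3 with hypothetical endpoint file names and 4 intermediate images:
./nebmake.pl POSCAR_initial POSCAR_final 4
# creates directories 00 ... 05: 00 and 05 hold the two endpoints,
# 01-04 hold the interpolated images (the TS guess is among them)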
After getting the guess TS POSCAR file, I optimized it in both VASP and CRYSTAL17 and then obtained the reaction barrier.
This is how to run it on our cluster:
The installation location is at
/mnt/research/mendozacortes_group/VASP-6.2.1-intel-2020b-VTST
You should be able to access the VASP executables at
/mnt/research/mendozacortes_group/VASP-6.2.1-intel-2020b-VTST/bin
The VTST scripts are stored at
/mnt/research/mendozacortes_group/VASP-6.2.1-intel-2020b-VTST/vtstscripts-1033
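One way to make the VASP executables and the VTST scripts visible in your shell is to prepend those directories to your PATH (a sketch; adapt it to your own environment or job script):
export PATH=/mnt/research/mendozacortes_group/VASP-6.2.1-intel-2020b-VTST/bin:$PATH
export PATH=/mnt/research/mendozacortes_group/VASP-6.2.1-intel-2020b-VTST/vtstscripts-1033:$PATH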
Usage: Your input file should be called <filename>.d12 and be in the same directory as the submission script. The script is submitted by running the command sbatch <submission_script_name>
Tips:
The main parameters you will be changing are the job name (note that it is specified twice in the script), run time (currently set to 30 minutes), nodes/cores (currently 2 nodes, 8 cores per node), memory per cpu, and account (general or mendoza_q). These will depend on the type of job you are running and the amount of resources you would like to allocate.
There is one #SBATCH line commented out; uncomment it to submit to the scavenger queue instead of general or mendoza_q.
There are two cp lines commented out; these might come in handy to restart calculations.
#!/bin/bash --login
#SBATCH --job-name=test
#SBATCH --account=general
#SBATCH --time=0-00:30:00
##SBATCH --qos=scavenger
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --mem-per-cpu=3G
#SBATCH -C [intel16|intel18]
module purge
module load CRYSTAL/23
export UCX_TLS=ud,sm,self   # restrict UCX transports (also passed to mpiexec via -genv below)
export JOB=test
export DIR=$SLURM_SUBMIT_DIR
export scratch=/mnt/gs21/scratch/$USER/crys23
rm -r $scratch/$JOB
mkdir $scratch/$JOB/
#cp $DIR/fort.9 $scratch/$JOB/fort.20
#cp $DIR/OPTINFO.DAT $scratch/$JOB/OPTINFO.DAT
cp $DIR/$JOB.d12 $scratch/$JOB/INPUT
cd $scratch/$JOB
unset FORT_BUFFERED
export I_MPI_ADJUST_BCAST=3
export I_MPI_DEBUG=5
ulimit -s unlimited
export OMP_NUM_THREADS=1
export LD_LIBRARY_PATH=/opt/software/UCX/1.12.1-GCCcore-11.3.0/lib64:$LD_LIBRARY_PATH
mpiexec -n $SLURM_NTASKS -genv UCX_TLS ud,sm,self /opt/software/CRYSTAL/23/bin/Pcrystal > $DIR/${JOB}.out 2>&1
cp fort.9 ${DIR}/${JOB}.f9
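If you need to restart a calculation with this script (the two commented cp lines mentioned in the tips), uncomment them so the previous wavefunction and optimizer state are staged into scratch. A sketch of the intent, assuming the .d12 uses GUESSP and/or RESTART in OPTGEOM; note that the script saves the wavefunction back as ${JOB}.f9, so you may need to copy or rename it to fort.9 in the submit directory first:
cp $DIR/fort.9 $scratch/$JOB/fort.20              # previous wavefunction, read as fort.20 by GUESSP
cp $DIR/OPTINFO.DAT $scratch/$JOB/OPTINFO.DAT     # previous optimization state, read by RESTART in OPTGEOM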
If you want to submit to general but not incur CPU hours (which is what happens when jobs do not go through the buy-in nodes), you can try submitting with the following options in your submit script:
#SBATCH -A mendoza_q
#SBATCH --exclude=amr-163,amr-178,amr-179
This should submit to your queue, exclude your buy-in nodes, and flip the jobs to run in the general partition, avoiding incurring CPU hours.
Usage: Your input files should be called <filename>.d3 and <filename>.f9 and be in the same directory as the submission script. The script is submitted by running the command sbatch <submission_script_name>
#!/bin/bash
#SBATCH -J <filename>
#SBATCH -o <filename>-%J.o
#SBATCH --ntasks=12
#SBATCH -A mendoza_q
#SBATCH -N 1
#SBATCH -t 4-00:00:00
#SBATCH --mem-per-cpu=3G
module purge
module load CRYSTAL/23-intel-2023a
export UCX_TLS=ud,sm,self   # restrict UCX transports (also passed to mpiexec via -genv below)
export JOB=<filename>
export DIR=$SLURM_SUBMIT_DIR
export scratch=/mnt/gs21/scratch/$USER/crys23
rm -fr $scratch/$JOB
mkdir $scratch/$JOB/
cp $DIR/$JOB.d3 $scratch/$JOB/INPUT
cp $DIR/$JOB.f9 $scratch/$JOB/fort.9
cd $scratch/$JOB
unset FORT_BUFFERED
export I_MPI_ADJUST_BCAST=3
#export I_MPI_DEBUG=5
ulimit -s unlimited
export OMP_NUM_THREADS=1
#mpirun -n $SLURM_NTASKS Pproperties 2>&1 >& $DIR/${JOB}.out
export LD_LIBRARY_PATH=/opt/software-current/arm64/UCX/1.12.1-GCCcore-12.2.0/lib64:$LD_LIBRARY_PATH
mpiexec -n $SLURM_NTASKS -genv UCX_TLS ud,sm,self /opt/software-current/2023.06/x86_64/intel/skylake_avx512/software/CRYSTAL/23-intel-2023a/bin/Pproperties > $DIR/${JOB}.out 2>&1
cp POT_CUBE.DAT ${DIR}/${JOB}_POT_CUBE.cube
cp DENS_CUBE.DAT ${DIR}/${JOB}_DENS_CUBE.cube
cp SPIN_CUBE.DAT ${DIR}/${JOB}_SPIN_CUBE.cube
#cp BAND.DAT ${DIR}/${JOB}.BAND.dat
#cp DOSS.DAT ${DIR}/${JOB}.DOSS.dat
#cp POTC.DAT ${DIR}/${JOB}.POTC.dat
#cp fort.25 ${DIR}/${JOB}.f25
The corresponding <filename>.d3 input for this script requests the charge and spin densities (ECH3) and the electrostatic potential (POT3) on 3D grids:
ECH3
100
SCALE
3
POT3
100
5
SCALE
3
END
After a successful calculation you obtain three main outputs, <filename>_POT_CUBE.cube, <filename>_DENS_CUBE.cube, and <filename>_SPIN_CUBE.cube, which contain the electrostatic potential, charge density, and spin density, respectively. You can load these files directly into VESTA, which lets you adjust the visualization, such as the isosurface levels.
Basis sets for CRYSTAL can be found on the CRYSTAL website:
https://www.crystal.unito.it/basis_sets.html
Additional basis sets for use in spin-orbit coupling calculations can be found for the COLUSC (Columbus Small Core) keyword; these currently must be adapted by hand (as of 5/25/23) but will be available on the CRYSTAL website in the future.
https://lin-web.clarkson.edu/~pchristi/reps.html
The basis sets for the STUTSC (Stuttgart Small Core), STUTLC (Stuttgart Large Core), and STUTSH (Stuttgart Super Heavy Element) keywords can be found here:
https://www.tc.uni-koeln.de/PP/clickpse.en.html