There might be issues on some of the development nodes, where the Pcrystal executable is not found. It is possible to fix this by hard-coding the executable path in your submission script.
This issue will result in error messages that look like this:
HYD_spawn (../../../../../src/pm/i_hydra/libhydra/spawn/intel/hydra_spawn.c:151): execvp error on file Pcrystal (No such file or directory)
It can be fixed by replacing the line srun Pcrystal 2>&1 >& $DIR/${JOB}.out in your submission script with the following:
srun /opt/software-current/2023.06/x86_64/intel/skylake_avx512/software/CRYSTAL/17-intel-2023a-1.0.2/bin/Pcrystal 2>&1 >& $DIR/${JOB}.out
For a single-point or optimization job script example running CRYSTAL17 in parallel, you can use the following script:
#!/bin/bash
#SBATCH -J MSUCOF-4-CoCp_BULK_OPTGEOM_DZ
#SBATCH -o MSUCOF-4-CoCp_BULK_OPTGEOM_DZ-%J.o
#SBATCH --cpus-per-task=1
#SBATCH --ntasks=32
#SBATCH -A general # or mendoza_q
#SBATCH -N 1
#SBATCH -t 7-00:00:00
#SBATCH --mem-per-cpu=5G
export JOB=MSUCOF-4-CoCp_BULK_OPTGEOM_DZ
export DIR=$SLURM_SUBMIT_DIR
export scratch=$SCRATCH/crys17
echo "submit directory: "
echo $SLURM_SUBMIT_DIR
module purge
module load CRYSTAL/17-intel-2023a
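# create a per-job scratch directory and copy the input there as INPUT (the file name Pcrystal reads)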
mkdir -p $scratch/$JOB
cp $DIR/$JOB.d12 $scratch/$JOB/INPUT
cd $scratch/$JOB
mpirun -n $SLURM_NTASKS Pcrystal 2>&1 >& $DIR/${JOB}.out
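# copy the converged wavefunction (fort.9) back to the submit directory as ${JOB}.f9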
cp fort.9 ${DIR}/${JOB}.f9
To submit a job you have to be on a development node (e.g. dev-intel14-k20, dev-intel16):
>> ssh dev-intel16
Once you are on a development node, this command submits the .slurm file you just created:
>> sbatch submit.slurm
These command lines are for showing job information and resource usage.
>> scontrol show job $SLURM_JOB_ID
>> js -j $SLURM_JOB_ID
Besides the memory and the walltime request, you can also change the number of tasks (--ntasks) and the number of nodes (-N). If you want to run on the mendoza partition, change general to mendoza_q. Make sure to change the job name according to your input file, i.e. replace every instance of the example job name (MSUCOF-4-CoCp_BULK_OPTGEOM_DZ in the script above) with the name of the .d12 file you want to run.
E.g. if you use the graphene example below, you would use "graphene" as the job name.
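A quick way to rename every instance at once is with sed; this is a minimal sketch, assuming your submit script is named submit.slurm (as in the sbatch example above) and your input is graphene.d12:
# replace every occurrence of the example job name with your own, editing the file in place
sed -i 's/MSUCOF-4-CoCp_BULK_OPTGEOM_DZ/graphene/g' submit.slurm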
If you want to submit to the general partition but not incur CPU hours (as happens when jobs don't go through the buy-in), you can try submitting with the following options in your submit script:
#SBATCH -A mendoza_q
#SBATCH --exclude=amr-163,amr-178,amr-179
This should submit to the buy-in queue, exclude your buy-in nodes, and flip the jobs to run in the general partition, avoiding incurring CPU hours.
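For example, a sketch of how these two directives fit into the header of the parallel script above (the node names are just the examples from this page; adjust them to your own buy-in nodes):
#!/bin/bash
#SBATCH -J MSUCOF-4-CoCp_BULK_OPTGEOM_DZ
#SBATCH -o MSUCOF-4-CoCp_BULK_OPTGEOM_DZ-%J.o
#SBATCH --cpus-per-task=1
#SBATCH --ntasks=32
#SBATCH -A mendoza_q                       # buy-in account
#SBATCH --exclude=amr-163,amr-178,amr-179  # exclude the buy-in nodes so the job runs in general
#SBATCH -N 1
#SBATCH -t 7-00:00:00
#SBATCH --mem-per-cpu=5G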
Geometry Optimization
Running a CRYSTAL17 input file for structure optimization requires entering the appropriate keywords after the atomic coordinates are all listed. To begin optimization, the keyword OPTGEOM must be used, and the block ends with the keyword ENDOPT (as in the example below).
Geometry Optimization Example: Graphene
The following is an example of a CRYSTAL input file for a geometry optimization calculation of a sheet of graphene. The keywords used in each section are commented for a quick view of their meaning:
Graphene #Title
SLAB #Structure (MOLECULE: 0D, POLYMER:1D, SLAB: 2D, CRYSTAL: 3D)
77 #Symmetry group (See appendix A in CRYSTAL manual)
2.47 #Lattice parameters (in simplest form)
1 #Number of irreducible atoms to define in structure
6 0.33333 0.66667 0.00000 #Atomic number, positions
OPTGEOM #Optimize geometry
FULLOPTG #Full optimization: atom coordinates and cell parameters
MAXCYCLE #Maximum number of optimization steps
800 #Number of steps
ENDOPT #End geometry optimization section
END #End geometry section
6 8 #Basis set: C_pob_TZVP_rev2
0 0 6 2.0 1.0
13575.349682 0.00022245814352
2035.2333680 0.00172327382520
463.22562359 0.00892557153140
131.20019598 0.03572798450200
42.853015891 0.11076259931000
15.584185766 0.24295627626000
0 0 2 2.0 1.0
6.2067138508 0.41440263448000
2.5764896527 0.23744968655000
0 0 1 0.0 1.0
0.4941102000 1.00000000000000
0 0 1 0.0 1.0
0.1644071000 1.00000000000000
0 2 4 2.0 1.0
34.697232244 0.00533336578050
7.9582622826 0.03586410909200
2.3780826883 0.14215873329000
0.8143320818 0.34270471845000
0 2 1 0.0 1.0
0.5662417100 1.00000000000000
0 2 1 0.0 1.0
0.1973545000 1.00000000000000
0 3 1 0.0 1.0
0.5791584200 1.00000000000000
99 0
END #End Basis set section
DFT #DFT input section
SPIN #Spin polarized solutions
PBE-D3 #Functional to be used
XLGRID #DFT integration grid (XL as a minimum for calculation of electronic properties)
END #End DFT section
TOLINTEG #Controls the accuracy of the bielectronic Coulomb and exchange series.
7 7 7 7 14 #See Crystal17 manual, page 114 and Ch. 17
TOLDEE #SCF convergence threshold
7 #variable: ITOL, meaning of value: 10^-ITOL (7 is good for optimization purposes)
SHRINK #Pack-Monkhorst/Gilat shrinking factors
0 8 #IS: shrinking factor in recip. space // ISP: shrinking factor in denser k-point net (for evaluation of fermi energy and density matrix)
8 8 1 #If IS=0, set IS1, IS2, IS3 (shrinking factors along B1,B2,B3) to be used when the unit cell is highly anisotropic (slabs or heterostructures, for example).
SCFDIR #Monoelectronic and bielectronic integrals are evaluated at each cycle
BIPOSIZE #size of the buffer for bipolar expansion of Coulomb integrals (unit: words)
110000000 #880 mb
EXCHSIZE #size of buffer for bipolar expansion of exchange integrals (unit: words)
110000000 #880 mb
MAXCYCLE #Maximum number of SCF cycles per ionic step
900 #Number of cycles
FMIXING #Fock/KS matrix mixing parameter
70 #70% Fock/KS matrix mixing (Default is 30%)
ANDERSON #Anderson's method to accelerate SCF convergence
PPAN #Mulliken Population Analysis
END #End of SCF section
To visualize and modify your structures you can use VESTA: https://jp-minerals.org/vesta/en/download.html
For more instructions about VESTA you can use: https://www.youtube.com/channel/UCmOHJtv6B2IFqzGpJakANeg/videos
Here are the manuals and examples of input/output/submission for CRYSTAL17.
On RCC, the usage of CRYSTAL17 is essentially the same as CRYSTAL14; you only need to set the environment as follows:
export PATH=/gpfs/research/mendozagroup/crystal/2017/v1_0_1_amended/ifort14/bin/Linux-ifort14_XE_emt64/v1.0.1:$PATH
module purge
module load intel/15
module load intel-openmpi
From page 182 of the CRYSTAL17 manual:
"The conditions adopted by default in geometry optimization before frequency calculation are different than those considered for normal optimizations in order to obtain much more accurate minima numerical second derivatives. This ensures a good accuracy in the computation of the frequencies and modes. The defaults are:"
TOLDEG 0.00003
TOLDEX 0.00012
FINALRUN 4
MAXTRADIUS 0.25
TRUSTRADIUS .TRUE.
SCF convergence: Given the numerical differentiation, in order to keep the step as small as possible, the accuracy of the SCF convergence (controlled by the keyword TOLDEE) must be very high. TOLDEE must be set in the interval between 9 and 11 for standard frequency calculations. For phonon dispersion calculations it is better to set TOLDEE between 10 and 12.
Electronic integrals: Tighter tolerances are required for the integral selection. TOLINTEG must be set to at least 7 7 7 7 14, or even 8 8 8 8 16. Higher values are usually not necessary for well-behaved systems (insulating and with no convergence issues), but may be necessary for highly accurate functionals such as B3LYP. For example, few-layer graphene will not converge with the default TOLINTEG values, but will with higher ones.
DFT grid: DFT integration grids should be set to XLGRID or even XXLGRID.
Geometry optimization: A crucial point is that, in order to obtain reliable vibrational frequencies, it is necessary to properly optimize the geometry. Optimization should be performed on the fractional coordinates as well as the lattice parameters. The use of FINALRUN 4 is to be preferred: in this way the code restarts the optimization until the integral selection is not affected by the last geometry update (recall that the optimizer works under FIXINDEX conditions). Some systems, such as molecular crystals, are very sensitive to this aspect, due to the very flat potential energy surface, which makes the location of the minimum more difficult, and to the presence of low-frequency librational modes, which require high accuracy. In such cases it is better to tighten the tolerances of the optimizer (TOLDEG and TOLDEX keywords in the geometry optimization block).
However, if you prefer to use the standard/default settings, you can use:
Example input MIL53_Cr with NO2
CRYSTAL
0 0 0
26
17.8802 11.4337 6.9031 90 90 90
24
13 0.00000 1.00000 1.16408
13 0.50000 0.50000 0.66408
8 0.57997 0.60872 1.25083
6 0.71576 0.72242 1.23941
1 0.68793 0.69960 1.10562
8 0.07997 1.10872 0.75083
6 0.21576 1.22242 0.73941
1 0.18793 1.19960 0.60562
8 0.42003 0.60872 0.57733
6 0.28424 0.72242 0.58875
1 0.31207 0.69960 0.72254
8 0.92003 1.10872 1.07733
6 0.78424 1.22242 1.08875
7 0.81207 1.19960 1.22254
6 0.68130 0.69488 0.41408
6 0.60835 0.63299 0.41408
6 0.18130 1.19488 0.91408
6 0.10835 1.13299 0.91408
8 -0.87258 1.78175 -0.25907
8 -0.78014 1.84248 -0.15848
8 0.00000 1.07983 0.41408
1 0.00000 1.16381 0.41408
8 0.50000 0.57983 0.91408
1 0.50000 0.66381 0.91408
FREQCALC
PREOPTGEOM
FULLOPTG
MAXCYCLE
500
ENDOPT
INTENS
INTRAMAN
INTCPHF
ENDCPHF
RAMEXP
295 532 #K nm_laser
ENDFREQ
ENDG
#BASISSETSECTION
#99 0
#END
DFT
SPIN
B3LYP-D3
XLGRID
END
TOLINTEG
7 7 7 7 14
TOLDEE
7
SHRINK
0 8
8 8 8
SCFDIR
BIPOSIZE
110000000
EXCHSIZE
110000000
SPINLOCK
0 5 #alpha-beta ncycles
MAXCYCLE
800
FMIXING
30
ANDERSON
PPAN
END
DFT
SPIN #Unrestricted calculations
B3LYP
XLGRID
END
SPINLOCK #This assigns the total number of unpaired spins
15 4 #Total unpaired spins = 15 for the first 4 SCF cycles
ATOMSPIN #This is to assign the 'alignment of spins' as up or down
4 #4 atoms have unpaired spins
1 1 5 1 11 1 15 1 #Spin on atom 1 is up, atom 5 up, atom 11 up, atom 15 up (-1 would be spin down)
For a.u. units, remember that this could mean atomic units of energy, force or length:
https://en.wikipedia.org/wiki/Hartree_atomic_units#Units
E.g.
Atomic unit of | Name | Other equivalents
energy | hartree | 27.211386245988(53) eV
force | hartree/bohr | 51.422 eV·Å−1
length | bohr | 0.529177210903(80) Å
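For example, the force entry is just the energy and length units combined; a quick command-line check (a sketch):
# 1 hartree = 27.211386245988 eV and 1 bohr = 0.529177210903 Å,
# so 1 hartree/bohr = 27.211386245988 / 0.529177210903 ≈ 51.422 eV/Å
awk 'BEGIN { printf "%.3f eV/A\n", 27.211386245988 / 0.529177210903 }'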
The latest version of the scripts can be found in our Github:
https://github.com/mendozacortesgroup/helpscripts/tree/main/code
Earlier versions:
Scripts: Band Input Pathway and Input Files
Plotting Bands and DOSs
Note: The top-of-the-hill structure from the NEB calculations in VASP can be exported to CRYSTAL for TS refinement.
The full manual for NEB in VASP and TS search in Crystal is here:
An example of this approach of NEBinVASP vs TS in CRYSTAL can be found in the SI (page S24) of our Nature Materials paper:
Reaction-driven restructuring of defective PtSe2 into ultrastable catalyst for the oxygen reduction reaction (2024-10-07)
https://doi.org/10.1038/s41563-024-02020-w
To restart a calculation (a consolidated command sketch follows these steps):
Make a new directory with name filename_restart
Copy the d12 from the original calculation to the restart folder and name it filename_restart.d12
Copy the OPTINFO.DAT and fort.9 files from the scratch directory to the restart directory. For example:
cd filename_restart (changes directory to the filename_restart directory)
cp $SCRATCH/crys17/filename/OPTINFO.DAT ./ (copies the OPTINFO.DAT file from the "filename" scratch directory to the current directory)
cp $SCRATCH/crys17/filename/fort.9 ./ (copies the fort.9 file from the scratch directory to the current directory)
In the restart d12 file, add the RESTART keyword right after OPTGEOM, also add the keyword GUESSP in the SCF block (for example right before the DIIS keyword).
Copy the submission file (filename.submit) to the restart directory, change all instances of the file name to filename_restart and add the lines: cp $DIR/fort.9 $scratch/$JOB/fort.20 and cp $DIR/OPTINFO.DAT $scratch/$JOB/OPTINFO.DAT (they go right after mkdir -p $scratch/$JOB)
Submit your file with sbatch filename_restart.submit
When attempting to do this, remember to change all the instances of filename mentioned above to your specific file name. For example, if your input file is KCrO2.d12, your filename is just KCrO2.
Note that there are two keywords that I add: RESTART, which reads the OPTINFO.DAT file from the original calculation, and GUESSP, which reads the fort.9 (f9) file from the original calculation (you have to copy the f9 as fort.20 into the restart scratch directory).
Also, you shouldn't change anything in the restart d12 except for the addition of those 2 keywords. Geometry and all other parameters should stay the same.
P.S. Sometimes, I've gotten issues when using GUESSP on my heavy calculations. Depending on your case, you might want to remove it.
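Putting the steps above together, a minimal command sketch (it assumes the original .d12 and submit file live in a directory named filename next to the restart directory, and the $SCRATCH/crys17 layout from the parallel example; the .d12 and submit-script edits are still done by hand):
#!/bin/bash
# Sketch of the restart steps; replace "filename" with your job name (e.g. KCrO2).
OLD=filename
NEW=${OLD}_restart
mkdir -p $NEW
cp $OLD/${OLD}.d12 $NEW/${NEW}.d12        # assumes the original .d12 sits in ./$OLD
cp $SCRATCH/crys17/$OLD/OPTINFO.DAT $NEW/ # optimizer restart information (read by RESTART)
cp $SCRATCH/crys17/$OLD/fort.9 $NEW/      # converged wavefunction (read via GUESSP as fort.20)
cp $OLD/${OLD}.submit $NEW/${NEW}.submit  # assumes the original submit file sits in ./$OLD
# By hand: add RESTART after OPTGEOM and GUESSP in the SCF block of ${NEW}.d12, and add,
# right after "mkdir -p $scratch/$JOB" in ${NEW}.submit:
#   cp $DIR/fort.9 $scratch/$JOB/fort.20
#   cp $DIR/OPTINFO.DAT $scratch/$JOB/OPTINFO.DAT
cd $NEW && sbatch ${NEW}.submit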
For a job script example running CRYSTAL17 in serial, you can use the following script:
#!/bin/bash --login
#SBATCH --mem-per-cpu=5G
#SBATCH --time=04:00:00
#SBATCH --job-name CRYSTAL-Serial
# cp /mnt/research/mendozacortes_group/CRYSTAL17/test_cases/inputs/test10.* ./
ml -* CRYSTAL/17-intel-2023a
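# runcry17 takes the input basename; the input file is assumed to be CRYSTAL-Serial.d12 or CRYSTAL-Serial.d3 (see below)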
runcry17 CRYSTAL-Serial
scontrol show job $SLURM_JOB_ID
js -j $SLURM_JOB_ID
The last 2 command lines are for showing job information and resource usage.
You can adjust the memory and the walltime request in the first and second SBATCH lines respectively.
The CRYSTAL17 input file is assumed to be CRYSTAL-Serial.d12 or CRYSTAL-Serial.d3.
Make sure to change the job name according to your input file, i.e. replace both instances of "CRYSTAL-Serial" with the name of the .d12 file that you want to run.
#!/bin/bash
#SBATCH -J graph_Ac
#SBATCH -o graph_Ac-%J.o
#SBATCH -N 1
#SBATCH -n 28
##SBATCH -p general
#SBATCH -t 48:00:00
cat $SLURM_JOB_NODELIST
export JOB=graph_TEST
export DIR=$SLURM_SUBMIT_DIR
cp $DIR/${JOB}.d12 INPUT
source /mnt/research/mendozacortes_group/CRYSTAL17/cry17.bashrc
ml -* iccifort/2018.1.163-GCC-6.4.0-2.28 OpenMPI/2.1.0
MPINodes > MachineFile
mpirun -machinefile ./MachineFile $CRY17_EXEDIR/v1.0.2/Pcrystal
#cp fort.9 ${DIR}/${JOB}.f9
#cp fort.25 ${DIR}/${JOB}.f25
#cp BAND.DAT ${DIR}/${JOB}.BAND
#cp DOSS.DAT ${DIR}/${JOB}.DOSS
#rm fort* #to remove all the files generated
#!/bin/bash
#SBATCH -J graph_Ac
#SBATCH -o graph_Ac-%J.o
#SBATCH -N 1
#SBATCH -n 28
#SBATCH -p engineering_long
#SBATCH -t 48:00:00
cat $SLURM_JOB_NODELIST
export JOB=graph_Ac
export DIR=$SLURM_SUBMIT_DIR
export PATH=/gpfs/research/mendozagroup/crystal/2017/v1_0_1b/ifort14/bin/Linux-ifort14_XE_emt64/v1.0.1:$PATH
# export PATH=/gpfs/research/mendozagroup/crystal/2017/v1_0_2/ifort14/bin/Linux-ifort14_XE_emt64/v1.0.2:$PATH
export scratch=/gpfs/research/mendozagroup/scratch/crystal/${USER}/crys17
echo "submit directory: "
echo $SLURM_SUBMIT_DIR
module purge
module load intel/16
module load intel-openmpi
rm -fr $scratch/$JOB
mkdir -p $scratch/$JOB
# the following line needs to be modified according to where your input is located
cp $DIR/${JOB}.d12 $scratch/$JOB/INPUT
#cp $DIR/${JOB}.f9 $scratch/$JOB/fort.9
cd $scratch/$JOB
touch hostfile
rm hostfile
for i in `scontrol show hostnames $SLURM_JOB_NODELIST`
do
echo "$i slots=28" >> hostfile
done
# in the following, -np parameters should be equal to those specified above.
#mpirun -np 8 -machinefile hostfile Pcrystal >& $DIR/${JOB}.out
#mpirun -np 4 -machinefile hostfile Pproperties >& $DIR/${JOB}.DOSS.out
#srun -n 8 Pcrystal >& $DIR/${JOB}.out
mpirun -np 28 -machinefile hostfile Pcrystal >& $DIR/${JOB}.out
#mpirun -np 16 -machinefile hostfile MPPcrystal >& $DIR/${JOB}.out
#cp fort.9 ${DIR}/${JOB}.f9
#cp fort.25 ${DIR}/${JOB}.f25
#cp BAND.DAT ${DIR}/${JOB}.BAND
#cp DOSS.DAT ${DIR}/${JOB}.DOSS
# uncomment the next 5 lines if you want to remove the scratch directory
#if [ $? -eq 0 ]
#then
#cd ${DIR}
#rm -rf $scratch/${JOB}
#fi
To submit jobs in a given working directory, here is an outline for a script to submit all jobs to the cluster:
Working Directory Mass Submission
#!/usr/bin/env python3
import os
"""Run in a given working directory."""
data_folder = os.getcwd()
data_files = os.listdir(data_folder)
# Loop over every .d12 file in the working directory and hand its base name to the auto submit script
for file_name in data_files:
    if ".d12" in file_name:
        submit_name = file_name.split(".d12")[0]
        os.system("<auto submit script>" + " " + submit_name + " 100")
The auto submit script can be stored in a fixed directory and called from there; it automatically generates and submits the submission script described above. Here is a possible example:
Auto-generate submission script
echo '#!/bin/bash' > $1.sh
echo '#SBATCH -J '$1 >> $1.sh
out=$1
file=-%J.o
outfile=$out$file
echo '#SBATCH -o '$outfile >> $1.sh
echo '#SBATCH -N 1' >> $1.sh
echo '#SBATCH --ntasks-per-node=16' >> $1.sh
echo '#SBATCH -p mendoza_q' >> $1.sh
time=$2
wall=:00:00
timewall=$time$wall
echo '#SBATCH -t '$timewall >> $1.sh
#echo '#SBATCH --mail-type=ALL' >> $1.sh
echo '# remove the following line if already in your .bashrc file
export PATH=/gpfs/research/mendozagroup/crystal/2017/v1_0_1b/ifort14/bin/Linux-ifort14_XE_emt64/v1.0.1:$PATH' >> $1.sh
echo 'export JOB='$1 >> $1.sh
echo 'export DIR=$SLURM_SUBMIT_DIR
export scratch=/gpfs/research/mendozagroup/scratch/crystal/${USER}/crys17
echo "submit directory: "
echo $SLURM_SUBMIT_DIR
module purge
module load intel/15
module load intel-openmpi
mkdir -p $scratch/$JOB
# the following line needs to be modified according to where your input is located
cp $DIR/$JOB.d12 $scratch/$JOB/INPUT
cd $scratch/$JOB
touch hostfile
rm hostfile
for i in `scontrol show hostnames $SLURM_JOB_NODELIST`
do
echo "$i slots=16" >> hostfile
done
# in the following, -np parameters should be equal to those specified above.
mpirun -np 16 -machinefile hostfile Pcrystal 2>&1 >& $DIR/${JOB}.out
cp fort.9 ${DIR}/${JOB}.f9
# uncomment the next 5 lines if you want to remove the scratch directory
#if [ $? -eq 0 ]
#then
# cd ${DIR}
# rm -rf $scratch/${JOB}
#fi' >> $1.sh
sbatch $1.sh
The mass submission can also be used for any other CRYSTAL input by changing the input extension in the script (e.g. .d3, _BAND.d3, _DOSS.d3) to match the appropriate input file.
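For instance, a minimal bash sketch of the same loop for _BAND.d3 inputs (autosubmit.sh is a hypothetical name for the auto submit script above, which would also need to copy the .d3 instead of the .d12 to INPUT):
#!/bin/bash
AUTOSUBMIT=./autosubmit.sh      # hypothetical path to the auto submit script shown above
for f in *_BAND.d3; do
    base=${f%.d3}               # strip the .d3 extension to recover the job name
    "$AUTOSUBMIT" "$base" 100   # 100-hour wall time, as in the Python outline
done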
Usage of Crystal at atom/ion & Fermion
Submission Script, Parallel version