No long introduction, just scroll down and follow the steps!
This installation workflow was tested on RHEL 7 on the TAIWANIA cluster (Intel Xeon Gold 6148 2.40 GHz, 64-bit) equipped with Intel Omni-Path high-speed interconnect technology. LAMMPS is built with the Intel compiler, Intel MPI, and MKL from Intel Parallel Studio XE 2018 update 1, together with the GCC 6.3.0 compiler.
This part installs LAMMPS with the USER-INTEL package. If you want LAMMPS to include more packages, please skip this one and use the full version below!
1. Download the program source code from the LAMMPS website or clone the LAMMPS repository using git clone.
2. Uncompress the tarball and navigate to the LAMMPS top directory.
tar -xzvf lammps.tar.gz

3. Create a bash script, for example install_lmp_intel.sh, like this:
#!/bin/bash
# Install LAMMPS with USER-INTEL package
module purge
module load intel/2018_u1
module load gcc/6.3.0
export LMP_ROOT=$HOME/lammps-stable_16Mar2018_RK
export LMP_LIB=$LMP_ROOT/lib
export INTEL_TOP=/pkg/intel/2018_u1
source $INTEL_TOP/parallel_studio_xe_2018/psxevars.sh
source $INTEL_TOP/compilers_and_libraries/linux/mpi/intel64/bin/mpivars.sh
cd $LMP_ROOT/src/
make yes-asphere yes-class2 yes-kspace yes-manybody yes-misc yes-molecule
make yes-mpiio yes-opt yes-replica yes-rigid
make yes-user-omp yes-user-intel
make intel_cpu_intelmpi -j 16 > $LMP_ROOT/make_lmp_cpu_intel64_std.log
#make knl -j 16

4. Change the file permission and execute the script:
chmod +x install_lmp_intel.sh
./install_lmp_intel.sh

5. When the installation is done, the LAMMPS executable, lmp_intel_cpu_intelmpi, will be created in the src directory.
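Before running production jobs, it is worth confirming that the build actually produced a working binary. A minimal sanity check, assuming the $LMP_ROOT path used in the script above (adjust to your own):

```shell
# Sanity check: the executable should exist and be runnable.
# LMP_BIN path is an assumption based on the install script above.
LMP_BIN=$HOME/lammps-stable_16Mar2018_RK/src/lmp_intel_cpu_intelmpi
if [ -x "$LMP_BIN" ]; then
    "$LMP_BIN" -help | head -n 3   # prints the LAMMPS version banner
    STATUS="ok"
else
    echo "Build problem: $LMP_BIN not found; check make_lmp_cpu_intel64_std.log"
    STATUS="missing"
fi
```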
The following installation workflow was designed by me mainly for the TAIWANIA cluster. Please use it carefully!
1. Suppose that a LAMMPS directory is located at /home/rangsiman/lammps-stable_16Mar2018
2. Create a bash script, for example, install_lmp_intel_mpi.sh, like this:
#!/bin/bash
# This LAMMPS installation workflow was tested on an Intel Xeon Gold 6148 cluster.
# Check the package table on this website for library dependencies:
# https://lammps.sandia.gov/doc/Section_packages.html
module purge
module load intel/2018_u1 gcc/6.3.0
export LMP_ROOT=$HOME/lammps-stable_16Mar2018_FULL2
export LMP_LIB=$LMP_ROOT/lib
export LMP_LOG=$LMP_ROOT/make_lmp_intel64_cpu.log
export INTEL_TOP=/pkg/intel/2018_u1
source $INTEL_TOP/parallel_studio_xe_2018/psxevars.sh
source $INTEL_TOP/compilers_and_libraries/linux/mpi/intel64/bin/mpivars.sh
echo "Start on `date`" > $LMP_LOG
echo "Build LAMMPS 'lmp_intel_cpu_intelmpi'" >> $LMP_LOG
echo "######################################################" >> $LMP_LOG
echo "1. Build library for some packages" >> $LMP_LOG
#make clean-all
# make lib-package args="-m mpi" means "cd $LMP_LIB/package && make -f Makefile.mpi"
cd $LMP_LIB/reax
make -f Makefile.gfortran
echo `ls *.a` >> $LMP_LOG
cd $LMP_LIB/meam
make -f Makefile.ifort
echo `ls *.a` >> $LMP_LOG
cd $LMP_LIB/poems
make -f Makefile.mpi
echo `ls *.a` >> $LMP_LOG
cd $LMP_LIB/colvars
make -f Makefile.g++
echo `ls *.a` >> $LMP_LOG
cd $LMP_LIB/awpmd
make -f Makefile.mpi
echo `ls *.a` >> $LMP_LOG
cd $LMP_LIB/linalg
make -f Makefile.gfortran
echo `ls *.a` >> $LMP_LOG
cd $LMP_LIB/qmmm
make -f Makefile.mpi
echo `ls *.a` >> $LMP_LOG
#cd $LMP_LIB/atc
#make -f Makefile.mpi | tee -ai $LMP_LOG
#echo `ls *.a` >> $LMP_LOG
cd $LMP_ROOT/src/
make lib-smd args="-b"
echo "Done on `date`" >> $LMP_LOG
echo "######################################################" >> $LMP_LOG
echo "2. Include all packages." >> $LMP_LOG
cd $LMP_ROOT/src/
make yes-all
echo "Done on `date`" >> $LMP_LOG
echo "######################################################" >> $LMP_LOG
echo "3. Exclude some packages that are not required." >> $LMP_LOG
make no-VORONOI
make no-KIM
make no-GPU
make no-LATTE
make no-MSCG
#make no-KOKKOS
#make no-USER-ATC
make no-USER-QUIP
make no-USER-VTK
make no-USER-H5MD
#make no-USER-QMMM
#make no-USER-NETCDF
#make no-USER-MOLFILE
# If an error occurs while compiling the USER-INTEL package, uninstall and install the package again.
#make no-USER-INTEL
#make yes-USER-INTEL
echo "Done on `date`" >> $LMP_LOG
echo "######################################################" >> $LMP_LOG
echo "4. Install LAMMPS with Intel makefile." >> $LMP_LOG
make intel_cpu_intelmpi -j 16 >> $LMP_LOG
echo "Done on `date`" >> $LMP_LOG
echo "######################################################" >> $LMP_LOG

3. Change the file permission and execute the script:
chmod +x install_lmp_intel_mpi.sh
./install_lmp_intel_mpi.sh

4. When the installation is done, a LAMMPS executable, lmp_intel_cpu_intelmpi, will be created in the src directory.
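To double-check which packages actually ended up installed, the LAMMPS src makefile provides a package-status target (short alias: make ps). The src path below is an assumption based on the $LMP_ROOT used in the script above:

```shell
# List the installation status of every LAMMPS package.
LMP_SRC=$HOME/lammps-stable_16Mar2018_FULL2/src
if [ -d "$LMP_SRC" ]; then
    ( cd "$LMP_SRC" && make package-status )
else
    echo "src directory not found: $LMP_SRC"
fi
```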
Errors that I found while compiling LAMMPS using make, and their solutions.
1. If the Makefile.lammps of a package is not found in its library directory, you should build the static library of that package again.
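If several packages are affected, a quick scan of the whole lib directory shows which ones still lack a Makefile.lammps. This is a hypothetical helper, not part of LAMMPS; the lib path passed at the end is assumed from the scripts above:

```shell
# find_missing_makefiles: print each package lib directory without a Makefile.lammps
find_missing_makefiles() {
    [ -d "$1" ] || { echo "no such directory: $1" ; return ; }
    for d in "$1"/*/ ; do
        [ -f "${d}Makefile.lammps" ] || echo "missing: ${d}"
    done
}
find_missing_makefiles "$HOME/lammps-stable_16Mar2018_FULL2/lib"
```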
1.1 For example, if Makefile.lammps is not found in ../../poems , build the static library of the POEMS package as follows.
cd $LMP_LIB/poems
make -f Makefile.mpi

1.2 Then check if Makefile.lammps and libpoems.a are created.
ls Makefile.lammps libpoems.a

1.3 If these files exist, try making LAMMPS again.
cd $LMP_ROOT/src
make intel_cpu_intelmpi -j 16 | tee -ai $LMP_ROOT/make_lmp_cpu_intel64_log

2. If the compilation runs into the error "ld: unable to locate -lompstub", you can make a symbolic link of libiompstubs5.so to libompstub.so as a workaround.
cd $INTEL_TOP/lib/intel64
ln -s libiompstubs5.so libompstub.so

Then add the FULL PATH of the directory where libompstub.so is located to your PATH and LD_LIBRARY_PATH environment variables in your bash resource configuration file, and also add it to the Makefile.intel_cpu_intelmpi file. For example,
2.1 Add to PATH and LD_LIBRARY_PATH
export PATH=$INTEL_TOP/lib/intel64:$PATH
export LD_LIBRARY_PATH=$INTEL_TOP/lib/intel64:$LD_LIBRARY_PATH

2.2 Edit the Makefile.intel_cpu_intelmpi file in the MAKE directory
vi $LMP_ROOT/src/MAKE/OPTIONS/Makefile.intel_cpu_intelmpi

and add the library in the format -L<full_path_directory_of_dynamic_library> -l<name_of_library> to LIB
LIB = -ltbbmalloc -L/home/rangsiman/intel_parallel_studio_xe_2018_update1_cluster_edition/lib/intel64/ -lompstub

2.3 Finally, try making the LAMMPS executable again.
cd $LMP_ROOT/src
make intel_cpu_intelmpi -j 16 | tee -ai $LMP_ROOT/make_lmp_cpu_intel64_log

3. The error "more undefined references to `for_dealloc_allocatable' follow" occurs when compilation is almost finished.
3.1 Edit the Makefile.intel_cpu_intelmpi file in the MAKE directory
vi $LMP_ROOT/src/MAKE/OPTIONS/Makefile.intel_cpu_intelmpi

and add the following library to LIB
LIB = -ltbbmalloc -lifcore

3.2 Then try making again
cd $LMP_ROOT/src
make intel_cpu_intelmpi -j 16 | tee -ai $LMP_ROOT/make_lmp_cpu_intel64_log

4. The error "mpi.h not found" appears while compiling.
Make sure that you have loaded or added the Intel MPI library and the GCC compiler to your current PATH. You can also use the following command to check which package provides this file.
locate mpi.h

This should help your compilation to finish smoothly.
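A complementary check is to ask the Intel MPI compiler wrapper where its headers live: mpiicc -show prints the underlying icc command line, including the -I path that contains mpi.h. This sketch assumes the Intel module is loaded so the wrapper is on your PATH:

```shell
# Verify an MPI compiler wrapper is visible and inspect its include/library paths.
WRAPPER=$(command -v mpiicc || echo none)
if [ "$WRAPPER" != "none" ]; then
    "$WRAPPER" -show   # underlying compile line with -I/-L flags
else
    echo "mpiicc not on PATH; run 'module load intel/2018_u1' first"
fi
```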
1. Create a bash shell script called run_lmp_intel.sh
#!/bin/bash
# Normal usage: ./run_lmp_intel.sh input.in N
# N is the number of MPI ranks
if [ ! -e "$1" ]; then echo "Error: $1 not found!" ; exit 1 ; fi
if [ -z "$2" ]; then export NPAL="1"; else export NPAL="$2" ; fi
export LMP_INP="$1"
export INP_NAME="`basename $LMP_INP .in`"
export LMP_OUT="$INP_NAME".log
export MPI="mpiexec"                              #MPI run environment
export LMP_ROOT="$HOME/lammps-stable_16Mar2018"   #LAMMPS top directory
export LMP_EXE="lmp_intel_cpu_intelmpi"           #name of executable
export LMP_BIN="$LMP_ROOT/src/$LMP_EXE"           #binary file
export LMP_CORES="$NPAL"                          #number of cores (MPI ranks)
export LMP_THREAD_LIST="1"                        #OMP threads per process
export KMP_BLOCKTIME="0"
export I_MPI_PIN_DOMAIN=core                      #pin each MPI process to a core
export I_MPI_FABRICS=shm                          #communication fabric = shared mem
#The following line adds "-sf intel" for the USER-INTEL package.
#export LMP_ARGS="-screen none -pk intel 0 -sf intel"   #arg list
export LMP_ARGS="-screen none -pk intel 0"        #arg list
export OMP_NUM_THREADS=$LMP_THREAD_LIST
echo "Running $LMP_OUT"
$MPI -np $LMP_CORES $LMP_BIN -in $LMP_INP -log $LMP_OUT $LMP_ARGS

2. Execute the script with a LAMMPS input as the 1st argument and the number of MPI processes as the 2nd argument. For example,
./run_lmp_intel.sh atoms.in 48

This command runs LAMMPS with 48 MPI processes and spawns 1 OMP thread on each process.
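If you want to try a hybrid MPI+OpenMP run instead, keep ranks × threads equal to the physical core count. A sketch of the arithmetic, using the same 48-core example as above (the commented run line assumes you set LMP_THREAD_LIST accordingly in the script):

```shell
# Split a node's physical cores between MPI ranks and OpenMP threads.
NCORES=48                       # total cores, matching the pure-MPI example above
NTHREADS=2                      # value to put in LMP_THREAD_LIST
NRANKS=$((NCORES / NTHREADS))   # MPI ranks passed as the 2nd script argument
echo "ranks=$NRANKS threads=$NTHREADS"
#./run_lmp_intel.sh atoms.in $NRANKS
```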
You can check the package table on the following website for library dependencies: https://lammps.sandia.gov/doc/Section_packages.html
Rangsiman Ketkaew