The objective of this post is to build the High Performance LINPACK (HPL) benchmark with different configurations and different basic linear algebra subprograms (BLAS) and message passing interface (MPI) libraries:
You must install a BLAS library (any implementation) on your Linux platform before compiling HPL. You should also know where BLAS is installed.
1. Go to the OpenBLAS official website and download the source code.
2. Use the following command to download the tarball to your Linux machine (the download link has to end in .tar.gz):
wget http://github.com/xianyi/OpenBLAS/archive/v0.2.20.tar.gz
3. Untar the file and save the OpenBLAS directory in a proper place, probably your $HOME.
tar -xzvf v0.2.20.tar.gz
cd OpenBLAS-0.2.20
4. Compile the source code using the following command. (OpenBLAS does not support f77; GCC or Intel compilers are supported.)
make FC=gfortran
If you use an Intel Xeon Haswell (AVX2) platform, the build might fail with errors; in that case, add TARGET=SANDYBRIDGE to build for Sandy Bridge instead.
make FC=gfortran TARGET=SANDYBRIDGE
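If you are unsure whether this workaround applies to your machine, checking the CPU flags is a quick test; this is a minimal sketch using standard Linux tools (nothing here is specific to OpenBLAS):
grep -m1 "model name" /proc/cpuinfo    # show the CPU model
grep -q avx2 /proc/cpuinfo && echo "AVX2 present (Haswell or newer)" || echo "no AVX2"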
5. When the process completes without error, a message like the following should appear:
OpenBLAS build complete. (BLAS CBLAS LAPACK LAPACKE)
  OS               ... Linux
  Architecture     ... x86_64
  BINARY           ... 64bit
  C compiler       ... GCC  (command line : gcc)
  Fortran compiler ... GFORTRAN  (command line : gfortran)
  Library Name     ... libopenblas_sandybridgep-r0.2.20.a (Multi threaded; Max num-threads is 24)
To install the library, you can run "make PREFIX=/path/to/your/installation install".
6. Install OpenBLAS either in the OpenBLAS-0.2.20 directory or in a new directory. I created a new directory and installed the library there:
mkdir /home/rangsiman/OpenBLAS
make PREFIX=/home/rangsiman/OpenBLAS install
Installation finishes very quickly.
7. Check that the BLAS libraries are present in the directory where you installed OpenBLAS. For example,
ls /home/rangsiman/OpenBLAS/lib
8. Done
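Optionally, you can confirm that the installed archive actually contains the BLAS kernels HPL will call; this is a quick sanity check with the standard nm tool, assuming the installation path used above:
nm -g /home/rangsiman/OpenBLAS/lib/libopenblas.a | grep -i dgemm | head -5    # any output means the double-precision GEMM kernel is present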
1. Go to the GotoBLAS2 website and browse the source code.
2. Download the source code (*.tar.gz) to Linux:
wget https://www.tacc.utexas.edu/documents/1084364/1087496/GotoBLAS2-1.13.tar.gz
3. Uncompress the tarball and 'cd' to the GotoBLAS2 directory.
tar -xzvf GotoBLAS2-1.13.tar.gz
cd GotoBLAS2
4. Type 'make' to build.
make
If an error occurs (GotoBLAS2's CPU auto-detection predates recent processors), try forcing an older target:
make TARGET=NEHALEM
5. libgoto2.a should be created and automatically symlinked in the top directory.
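As with OpenBLAS, a quick check confirms that the library was built; this sketch assumes you are still in the GotoBLAS2 top directory:
ls -l libgoto2*                    # libgoto2.a is normally a symlink to the versioned archive
nm -g libgoto2.a | grep -ci gemm   # a non-zero count means the GEMM kernels are present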
Read this post for installation of OpenMPI: https://sites.google.com/site/rangsiman1993/linux/install-openmpi
For this HPL installation, I used OpenMPI 2.0.2.
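Before building HPL, it is worth verifying that the MPI compiler wrapper and launcher resolve correctly; these are standard OpenMPI commands:
which mpicc mpirun    # both should point into your OpenMPI installation
mpirun --version      # should report Open MPI 2.0.2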
HPL links against the BLAS and MPI libraries installed above. Download and install the latest version of HPL; I am using HPL version 2.2 for this installation.
1. Go to the Netlib repository of HPL.
2. Browse hpl-VERSION.tar.gz and use the following command to download the tarball to Linux:
wget http://www.netlib.org/benchmark/hpl/hpl-2.2.tar.gz
3. Uncompress the gzip file using the following command:
tar -xzvf hpl-2.2.tar.gz
4. Copy a Make.<foo> file from the setup directory to the top directory of HPL.
cp setup/Make.Linux_PII_CBLAS $HOME/hpl-2.2
5. Rename Make.Linux_PII_CBLAS to Make.<arch>, where <arch> is your Linux platform architecture (for example linux64).
mv Make.Linux_PII_CBLAS Make.linux64
6. Edit the Make.linux64 file.
7. Point TOPdir to the top directory of HPL. For example,
TOPdir = $(HOME)/hpl-2.2
8. Point MPdir to the MPI top directory and MPlib to the MPI library. For example,
MPdir = /usr/local/mpi/openmpi-2.0.2
MPinc = -I$(MPdir)/include
MPlib = $(MPdir)/lib/libmpi.a
For OpenMPI, use "libmpi.a" instead of "libmpich.a". Make sure that this *.a library file is in the $OPENMPI/lib directory.
If you have no static library (*.a), a shared-object library (*.so) can be used instead, but you have to add the directory containing that library to LD_LIBRARY_PATH before compiling:
MPlib = $(MPdir)/lib/libmpi.so
export LD_LIBRARY_PATH="/path/of/directory/containing/libmpi.so:$LD_LIBRARY_PATH"
9. Edit the linear algebra library configuration as follows, for example for OpenBLAS:
LAdir = $(HOME)/OpenBLAS
LAinc =
LAlib = $(LAdir)/lib/libopenblas.a
10. Edit the C compiler and linker:
CC = mpicc
LINKER = mpicc
11. Build the HPL executable using the following command:
make arch=linux64
This compilation will take several minutes. Note that the arch value must match the suffix of your Make.<arch> file.
12. If the build finishes successfully, there will be an HPL executable called xhpl in the HPL_TOP/bin/<arch> directory, for example,
ls hpl-2.2/bin/linux64/xhpl
An HPL input file called HPL.dat will also be created there.
Navigate to the compiled executable and use the following command to measure the performance of HPL with MPI:
mpirun -np N xhpl > xhpl_results_linux64
where N is the number of MPI processes.
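For example, to run on 4 processes while keeping the output visible on screen (tee is optional; the process-grid product P x Q in HPL.dat must equal the value passed to -np):
mpirun -np 4 ./xhpl | tee xhpl_results_linux64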
The performance parameters given in the HPL.dat input file can be (and should be) adjusted for evaluation or production tests; the key lines are sketched below.
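For orientation, here is an excerpt of the HPL.dat lines that matter most for performance; the values are illustrative assumptions, not tuned settings:
1        # of problems sizes (N)
20000    Ns
1        # of NBs
192      NBs
1        # of process grids (P x Q)
2        Ps
2        Qs
As rough guidance, N is often chosen so the matrix fills about 80% of total memory (N ≈ sqrt(0.8 × memory_in_bytes / 8)), NB is typically in the 128-256 range, and P × Q must equal the number of MPI processes.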
You can compare your benchmark result Rmax with the estimated results based on your computer's spec: http://hpl-calculator.sourceforge.net/
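You can also estimate the theoretical peak Rpeak by hand. For an assumed example machine (not the one used in this post), a 16-core 2.5 GHz CPU with AVX2 and FMA performs 16 double-precision FLOPs per cycle per core, so Rpeak = 16 cores × 2.5 GHz × 16 FLOPs/cycle = 640 GFLOPS; a well-tuned HPL run typically reaches 70-90% of Rpeak.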
1. Install Intel Parallel Studio XE for Linux (free of charge for students, educators, lecturers, and even open-source developers).
2. Like the BLAS and MPI libraries used to build the HPL executable above, the Intel Parallel Studio XE suite also provides the packages necessary for compiling LINPACK, such as an optimized MPI and a linear algebra implementation, the latter provided by the MKL suite.
Suppose that the top directory of Intel Parallel Studio XE is /home/rangsiman/intel and is set as the $INTEL_TOP environment variable. MKL can then be found at either of the following locations:
$INTEL_TOP/mkl
$INTEL_TOP/compilers_and_libraries/linux/mkl
3. Create a new Make.<arch> file for the intel64 platform, for example by copying and renaming to Make.intel64:
cp Make.Linux_PII_CBLAS Make.intel64
4. Edit the Make.intel64 file following steps 6 - 10 of the "Build HPL executable using BLAS" section above, and do not forget to change the compiler from mpicc to mpiicc.
You must specify the paths and libraries of MKL and MPI in the Make.intel64 file carefully and correctly; a sketch of typical MKL settings follows.
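For reference, the linear algebra section of Make.intel64 often ends up looking like the sketch below. The library names are the standard MKL LP64 ones, but the exact paths and the threading choice are assumptions based on a typical Parallel Studio layout, so verify them against your installation (Intel's MKL link line advisor can generate the exact line):
LAdir = $(INTEL_TOP)/mkl
LAinc = -I$(LAdir)/include
LAlib = -Wl,--start-group $(LAdir)/lib/intel64/libmkl_intel_lp64.a $(LAdir)/lib/intel64/libmkl_intel_thread.a $(LAdir)/lib/intel64/libmkl_core.a -Wl,--end-group -liomp5 -lpthread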
5. Compile the HPL executable using the following command:
make arch=intel64
It can take several minutes.
To run the HPL benchmark using the xhpl executable built with Intel MKL and MPI, you must make sure that their libraries have been added to the PATH and LD_LIBRARY_PATH environment variables. You should also source compilervars.sh and mpivars.sh (available in the Intel Parallel Studio XE directory) to set up the necessary environment variables before calling MKL and MPI, as sketched below.
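In a typical Parallel Studio XE installation these scripts live under the compilers_and_libraries tree; the exact paths below are assumptions based on the $INTEL_TOP layout above, so adjust them to your version:
source $INTEL_TOP/compilers_and_libraries/linux/bin/compilervars.sh intel64
source $INTEL_TOP/compilers_and_libraries/linux/mpi/intel64/bin/mpivars.sh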
To run the HPL benchmark provided by the Intel MKL benchmark library, browse the benchmark folder inside mkl, as sketched below.
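In recent MKL versions the prebuilt MP LINPACK benchmark normally sits under a benchmarks subdirectory; the path below is an assumption based on a standard MKL layout:
cd $INTEL_TOP/mkl/benchmarks/mp_linpack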
Execute the runme script to run the benchmark:
./runme_intel64_dynamic
Results will be saved to a text file called xhpl_intel64_dynamic_outputs.txt.
Rangsiman Ketkaew