
Parallel Code_Aster 12.4 (=13.0) (English)

* This procedure is based on posts by tamaskovics and Thomas DE SOZA in the Code_Aster forum.


OS : Ubuntu 14.04
Code_Aster : ver.12.4


Download files

Save location for files :  ~/Install_Files
Install location : /opt and /opt/aster

Download the following files and place them in ~/Install_Files.

 site  file name
 Code_Aster  aster-full-src-12.4.0-1.noarch.tar.gz
 OpenBLAS  OpenBLAS-0.2.15.tar.gz
 PETSc  petsc-3.4.5.tar.gz
 ScaLAPACK (Netlib)  scalapack_installer.tgz

Change owner of install location

Change the owner of the install location from root to the login user.
$ sudo chown username /opt/

Install libraries for Code_Aster

Install the packages required by Code_Aster.
$ sudo apt-get install gfortran g++ python-dev python-qt4 python-numpy liblapack-dev libblas-dev tcl tk zlib1g-dev bison flex checkinstall openmpi-bin libopenmpi-dev libx11-dev cmake qt4-dev-tools libmotif-dev

Compile OpenBLAS

Compile OpenBLAS (the math library used by Code_Aster).

$ cd ~/Install_Files/

$ tar xfvz OpenBLAS-0.2.15.tar.gz

$ cd OpenBLAS-0.2.15


$ make

$ make PREFIX=/opt/OpenBLAS install

$ echo /opt/OpenBLAS/lib | sudo tee -a /etc/ld.so.conf

$ sudo ldconfig

Compile Code_Aster (sequential)

Compile Code_Aster (sequential) with OpenBLAS.

$ cd ~/Install_Files

$ tar xfvz aster-full-src-12.4.0-1.noarch.tar.gz

$ cd aster-full-src-12.4.0/

$ sed -i "s:PREFER_COMPILER\ =\ 'GNU':PREFER_COMPILER\ =\ 'GNU_without_MATH'\nMATHLIB\ =\ '/opt/OpenBLAS/lib/libopenblas.a':g" setup.cfg
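If you want to see exactly what this sed command writes before touching the real setup.cfg, you can replay it on a throwaway copy (the file name /tmp/setup.cfg.demo is just for illustration):

```shell
# Sketch: replay the setup.cfg edit on a disposable copy.
printf "PREFER_COMPILER = 'GNU'\n" > /tmp/setup.cfg.demo
# Same substitution as above: switch the compiler profile and append MATHLIB.
sed -i "s:PREFER_COMPILER\ =\ 'GNU':PREFER_COMPILER\ =\ 'GNU_without_MATH'\nMATHLIB\ =\ '/opt/OpenBLAS/lib/libopenblas.a':g" /tmp/setup.cfg.demo
cat /tmp/setup.cfg.demo
```

The copy should now contain the GNU_without_MATH profile line followed by the MATHLIB line pointing at the OpenBLAS static library.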

$ python setup.py install

Create a hostfile for parallel calculation.
$ echo "$HOSTNAME cpu=$(cat /proc/cpuinfo | grep processor | wc -l)" > /opt/aster/etc/codeaster/mpi_hostfile
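To check the line that ends up in the hostfile without writing to /opt, you can preview it on a single-node machine; `nproc` returns the same count as grepping "processor" in /proc/cpuinfo:

```shell
# Preview the hostfile line (demo path only; the real file is
# /opt/aster/etc/codeaster/mpi_hostfile).
echo "$(hostname) cpu=$(nproc)" > /tmp/mpi_hostfile.demo
cat /tmp/mpi_hostfile.demo
```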

Compile ScaLAPACK

$ cd ~/Install_Files

$ tar xfvz scalapack_installer.tgz

$ cd scalapack_installer_1.0.2

$ ./setup.py --lapacklib=/opt/OpenBLAS/lib/libopenblas.a --mpicc=mpicc --mpif90=mpif90 --mpiincdir=/usr/lib/openmpi/include --ldflags_c=-fopenmp --ldflags_fc=-fopenmp --prefix=/opt/scalapack

An error message "BLACS: error running BLACS test routines xCbtest" will show up after compilation.

The build has nevertheless succeeded if the file '/opt/scalapack/lib/libscalapack.a' exists.
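A small helper makes that success check explicit (the function name `check_lib` is just for this sketch):

```shell
# Print OK/missing for a given static library path.
check_lib() {
  if [ -f "$1" ]; then
    echo "OK: $1"
  else
    echo "missing: $1"
  fi
}

# The ScaLAPACK build succeeded if this reports OK.
check_lib /opt/scalapack/lib/libscalapack.a
```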

Compile MUMPS

Copy the MUMPS 4.10.0 source archive shipped with Code_Aster to /opt, and compile it with the MPI compilers.

$ cp ~/Install_Files/aster-full-src-12.4.0/SRC/mumps-4.10.0-aster3.tar.gz /opt/

$ cd /opt

$ tar xfvz mumps-4.10.0-aster3.tar.gz

$ mv mumps-4.10.0 mumps-4.10.0_mpi

$ cd mumps-4.10.0_mpi/

Edit 'Makefile.inc' so that MUMPS 4.10.0 is built with the MPI compilers, ScaLAPACK and OpenBLAS, then compile.
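The exact Makefile.inc contents depend on your system; below is only a minimal sketch of the MPI-related variables, assuming OpenMPI headers in /usr/lib/openmpi/include and the ScaLAPACK/OpenBLAS libraries built above (the variable names follow the standard MUMPS 4.10 Makefile.inc conventions):

```make
# Sketch only: adjust compilers and paths to your system.
CC      = mpicc
FC      = mpif90
FL      = mpif90
SCALAP  = /opt/scalapack/lib/libscalapack.a
INCPAR  = -I/usr/lib/openmpi/include
LIBPAR  = $(SCALAP)
LIBBLAS = /opt/OpenBLAS/lib/libopenblas.a
```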

$ make all

Compile PETSc with HYPRE and ML

$ cp ~/Install_Files/petsc-3.4.5.tar.gz /opt

$ cd /opt

$ tar xfvz petsc-3.4.5.tar.gz

$ cd petsc-3.4.5

$ ./config/configure.py --with-mpi-dir=/usr/lib/openmpi --with-blas-lapack-lib=/opt/OpenBLAS/lib/libopenblas.a  --download-hypre=yes --download-ml=yes --with-debugging=0 COPTFLAGS=-O1 CXXOPTFLAGS=-O1 FOPTFLAGS=-O1 --configModules=PETSc.Configure --optionsModule=PETSc.compilerOptions  --with-x=0 --with-shared-libraries=0

$ make PETSC_DIR=/opt/petsc-3.4.5 PETSC_ARCH=arch-linux2-c-opt all

$ make PETSC_DIR=/opt/petsc-3.4.5 PETSC_ARCH=arch-linux2-c-opt test

Compile Code_Aster (parallel)

Edit '/opt/aster/etc/codeaster/asrun' with a text editor and change the 'mpi_get_procid_cmd' entry as follows.
mpi_get_procid_cmd : echo $OMPI_COMM_WORLD_RANK

If the number of processors exceeds 32, also raise the following limits.
batch_mpi_nbpmax : 32
interactif_mpi_nbpmax : 32
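If you prefer to script these edits, hypothetical sed one-liners are shown below on a throwaway copy (assuming the stock entries look like the ones above; point the path at /opt/aster/etc/codeaster/asrun for real use):

```shell
# Build a disposable mock of the relevant asrun entries.
cat > /tmp/asrun.demo <<'EOF'
mpi_get_procid_cmd : echo $PMI_RANK
batch_mpi_nbpmax : 32
interactif_mpi_nbpmax : 32
EOF

# Point the rank query at OpenMPI's environment variable.
sed -i 's:^mpi_get_procid_cmd.*:mpi_get_procid_cmd \: echo $OMPI_COMM_WORLD_RANK:' /tmp/asrun.demo
# Example: raise both processor caps from 32 to 64.
sed -i 's:nbpmax \: 32:nbpmax \: 64:' /tmp/asrun.demo

cat /tmp/asrun.demo
```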

Decompress the Code_Aster 12.4 archive and move into the directory.

$ cd ~/Install_Files
$ cd aster-full-src-12.4.0/SRC

$ tar xfvz aster-12.4.0.tgz

$ cd aster-12.4.0

Put the configuration files for parallel Code_Aster in the directory passed to '--use-config-dir' below, then configure and compile parallel Code_Aster.
$ export ASTER_ROOT=/opt/aster
$ ./waf configure --use-config-dir=$ASTER_ROOT/12.4/share/aster --use-config=Ubuntu_gnu_mpi --prefix=$ASTER_ROOT/PAR12.4
$ ./waf install -p

With a text editor, add 'vers : PAR12.4:/opt/aster/PAR12.4/share/aster' below 'vers : testing' in '/opt/aster/etc/codeaster/aster'; 'PAR12.4' is then registered in ASTK.

Setting up comm file

Choose 'MUMPS' as the SOLVEUR of MECA_STATIQUE, STAT_NON_LINE, etc.
The options below b_mumps reduce memory consumption, while calculation time is essentially unchanged.

Setting up ASTK

Input the number of processors in 'Options'.
ncpus : number of OpenMP threads (used by OpenBLAS)
mpi_nbcpu : number of MPI processes
mpi_nbnoeud : number of nodes

For example, with ncpus=2 and mpi_nbcpu=2, 2*2=4 processors will run.
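The total core count is simply the product of the two settings, which is easy to check before launching a job:

```shell
# Total cores = OpenMP threads per process * number of MPI processes.
ncpus=2        # OpenMP threads (OpenBLAS)
mpi_nbcpu=2    # MPI processes
echo $(( ncpus * mpi_nbcpu ))  # prints 4
```

Keep this product at or below the number of physical cores on the node.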

Change 'Versions' of ASTK to 'PAR12.4'.

Press the 'Run' button to start the parallel calculation.