
MPI4YOU - Ubuntu 12.04 LTS for parallel calculations, with companions: ltsp, mpich2, Elmer, netgen & OpenFOAM


This page is under construction. It holds the 'recipe' used to create an easily manageable environment for running parallel solvers ( netgen, OpenFOAM, Elmer, ... )
on AMD Bulldozer hardware under Ubuntu 12.04 LTS.

This system is not fully tested; treat this page as a storage of installation notes - it can help you.
I have done lots of experiments, and getting everything to work together is not an easy task: one piece of software works with one version but not with all the others, so compatibility is a real problem.

Based on past experience, Ubuntu may not be the best floor on which to build scientific calculations, but that is what I am doing now.
It may be worth starting from the Scientific Linux distribution instead - I cannot recommend it personally, having no experience with it, but it does look better suited!!

Files and future development of the scripts, makefiles and changes are hosted at . You are welcome!

-one installation, easy and fast to distribute to the cluster/workstations ( that's ltsp )
-parallelism: MPI (mpich2), threads, OpenMP
-multi-purpose use: every node, as well as the server, can work as a workstation or a headless box ( not a practice recommended by Ubuntu )
-energy and resource efficiency, ecological solutions (hardware configuration, sleep states, ...)

-one installation shared via netboot: it works with ltsp, but at the moment ltsp closes out the energy-management options
-separately installed nodes: more management work, but the energy-saving options keep working, and that's money!

General ideas, guidelines

Hardware in use:

old server:
- CPU AMD x 4
- RAM 8 GB
- 0.5 TB RAID 1
- NFS4 file server on RAID 1, syslog-ng, bacula
-bought from

new server:
-CPU AMD FX(tm)-8150 Eight-Core Processor × 8, a Bulldozer CPU
-RAM 16 GB
-main board ASRock 990FX Extreme3
-1 TB RAID 1
-120 GB RAID 0, two Kingston 60 GB SATA3 SSDs, 2x 300-500 MB/s = 600-1000 MB/s
-ltsp-server-standalone, NFS4 file server on RAID 1, SSDs in RAID 0
-bought from

Work Stations:

-CPU AMD FX(tm)-8150 Eight-Core Processor × 8, a Bulldozer CPU
-RAM 16 GB
-main board ASRock 990FX Extreme3
-bought from , yes, the price is OK and they deliver quite promptly, within
     2-4 weeks from order, but if you have problems with the products it takes time - avoid them if you can!

System specifications:

Two identical servers: if one goes out of use, the other takes over responsibility for running the services.
- Ubuntu 12.04 LTS, kernel 3.2.0-36-generic
- DNS, bind9
- DHCP, ISC DHCP, Fail-over primary and secondary servers
- NFS4
- mpich2 for parallel calculations
- ltsp , ltsp-server-standalone
- syslog-ng
- bacula
- openssh
- apt-cacher-ng ( you need to log )
- quota ( to avoid a system crash due to no room left on the hard disk )
- OpenFOAM
- Elmer ( rev. 6033 ) with Pardiso
- Salomé-Méca ( 6.5 )
- cacti - to monitor the hardware; scripts needed for temp/fan/voltage
- netgen ( rev. 623 )

pre-built binaries:

sources & library installation:

ACML (BLAS, LAPACK),
ScaLAPACK ( BLACS )
elmerfem ( rev 5912 )
netgen ( rev 596 )


Ubuntu DHCP, DNS, bind9

RSH-redone / SSH


parallel solvers


core operating system boot
    ltsp integration

SNMP to see what happens in the hardware

Ltsp is a set of scripts and packages to manage and create
configuration files and to distribute an operating system
over the network (netboot). Originally it was used to supply software
to terminals (thin clients), but why not use it to create a cluster!!!
My idea is to make one installation on the server and share it
to the workstations and nodes, instead of installing every computer
separately. This cuts down the installation and management duties
remarkably. In comparison, MAAS, EC2 & JUJU could cause unwanted overhead for parallel
computing on a local network, as well as more management tasks.

Unsolved problems with the installation ( tested kernel 3.2.0-35-generic ):

1.a  blcr supports kernels older than 2.6.38. NO SUPPORT FOR THE KERNEL - NO CHECKPOINTS. A solution is under work; currently it does not work!!!
1.b DMTCP does not work either; it causes a memory violation.
Change: blcr 0.5.0 compiles well with 3.2.0-49-generic but is not fully tested yet!

2. wakeonlan does not work with LTSP.

Fixes coming soon for Precise: a new blcr version, ACPI corrections to the kernel.


The servers have their own IPs, and the
ltsp clients are using IPs from a range starting

Basic Server Installation tricks:

Compatible Usage Rights

At the moment the best solution has been:
NFS drives:  group = nogroup
the user must belong to nogroup
/etc/passwd must be the same on the ltsp image and on Ubuntu !!!
Both the server and the ltsp nodes have the same /home/user/.ssh ssh keys
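The /etc/passwd requirement above can be checked mechanically. A minimal sketch of my own (demonstrated on throwaway files; on a real server, point the two variables at /etc/passwd and /opt/ltsp/amd64/etc/passwd) diffs only the regular-user entries, UID >= 1000, which are the ones NFS and ssh care about:

```shell
# Compare regular-user (UID >= 1000) entries of two passwd files.
# Demo uses throwaway copies; substitute the real server and chroot paths.
server_passwd=$(mktemp); chroot_passwd=$(mktemp)
printf 'root:x:0:0:root:/root:/bin/bash\nmpi:x:1001:1001::/home/mpi:/bin/bash\n' > "$server_passwd"
printf 'root:x:0:0:root:/root:/bin/bash\nmpi:x:1001:1001::/home/mpi:/bin/bash\n' > "$chroot_passwd"
a=$(mktemp); b=$(mktemp)
awk -F: '$3 >= 1000' "$server_passwd" | sort > "$a"   # keep only regular users
awk -F: '$3 >= 1000' "$chroot_passwd" | sort > "$b"
if diff "$a" "$b" >/dev/null; then
    echo "user entries match"
else
    echo "user entries DIFFER - fix before sharing /home over NFS"
fi
rm -f "$server_passwd" "$chroot_passwd" "$a" "$b"
```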

To install:

-install the basic configuration from CD-ROM only. This way you avoid re-installation due to broken packages, which can be tricky to find
-install the server version so you can configure the RAID disks, ext4, ...; then

apt-get install ubuntu-desktop

-the last step is the graphics card drivers. If you have an Nvidia-based graphics card, first:

apt-get install linux-headers-generic

and after that

apt-get install nvidia-current nvidia-common

Install system related applications:

apt-get install apt-cacher-ng

apt-get install nfs-common nfs-kernel-server

apt-get install quota

apt-get install ltsp-server-standalone

apt-get install syslog-ng bacula ntp

apt-get install ubuntu-desktop

apt-get install linux-headers-generic

apt-get install nvidia-current nvidia-common


apt-get remove ufw
apt-get install firestarter

# for fun
apt-get install samba
apt-get install wine jstest-gtk playonlinux ubuntu-restricted-extras mythplugins
apt-get install openjdk-7-jre icedtea-7-plugin


# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static

Ubuntu IPV6 - Ubuntu Wiki


Ssh OpenMPI

apt-get install openssh-client openssh-server

gedit /etc/ssh/sshd_config
# Package generated configuration file
# See the sshd_config(5) manpage for details

# What ports, IPs and protocols we listen for
Port 22
# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
Protocol 2
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes

# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 768

# Logging
SyslogFacility AUTH
LogLevel INFO

# Authentication:
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes

RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile    %h/.ssh/authorized_keys

# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes

# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no

# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no

# Change to no to disable tunnelled clear text passwords
#PasswordAuthentication yes

# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes

# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes

X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no

#MaxStartups 10:30:60
#Banner /etc/

# Allow client to pass locale environment variables
AcceptEnv LANG LC_*

Subsystem sftp /usr/lib/openssh/sftp-server

# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication.  Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes

gedit /etc/ssh/ssh_config
# This is the ssh client system-wide configuration file.  See
# ssh_config(5) for more information.  This file provides defaults for
# users, and the values can be changed in per-user configuration files
# or on the command line.

# Configuration data is parsed as follows:
#  1. command line options
#  2. user-specific file
#  3. system-wide file
# Any configuration value is only changed the first time it is set.
# Thus, host-specific definitions should be at the beginning of the
# configuration file, and defaults at the end.

# Site-wide defaults for some commonly used options.  For a comprehensive
# list of available options, their meanings and defaults, please see the
# ssh_config(5) man page.

Host *
#   ForwardAgent no
#   ForwardX11 no
#   ForwardX11Trusted yes
#   RhostsRSAAuthentication no
#   RSAAuthentication yes
#   PasswordAuthentication yes
#   HostbasedAuthentication no
#   GSSAPIAuthentication no
#   GSSAPIDelegateCredentials no
#   GSSAPIKeyExchange no
#   GSSAPITrustDNS no
#   BatchMode no
#   CheckHostIP yes
#   AddressFamily any
#   ConnectTimeout 0
#   StrictHostKeyChecking ask
#   IdentityFile ~/.ssh/identity
#   IdentityFile ~/.ssh/id_rsa
#   IdentityFile ~/.ssh/id_dsa
#   Port 22
#   Protocol 2,1
#   Cipher 3des
#   Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
#   MACs hmac-md5,hmac-sha1,,hmac-ripemd160
#   EscapeChar ~
#   Tunnel no
#   TunnelDevice any:any
#   PermitLocalCommand no
#   VisualHostKey no
#   ProxyCommand ssh -q -W %h:%p
    SendEnv LANG LC_*
    HashKnownHosts yes
    GSSAPIAuthentication yes
    GSSAPIDelegateCredentials no


Then generate a public/private RSA key pair for the user mpi and copy the public key into authorized_keys.

sudo adduser mpi
sudo login mpi
cd .ssh
ssh-keygen -t rsa
cat >> authorized_keys

Later, in the LTSP install, the mpi user's ~/.ssh directory will be copied to the ltsp-client chroot /home/mpi/.ssh.
That is because when a client has finished its network boot it has an image-based chroot, and only after a user
logs in on the client will ltsp-client link in the user's home directory. While using mpi we are connecting
remotely through ssh, and thus the ssh server on the client side needs the ssh keys, and those have to be
in the ltsp chroot image!!!

To get rid of the key passphrase prompt, enter

ssh-add $HOME/.ssh/id_rsa and give the passphrase once
before launching applications.
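The whole key dance can be rehearsed safely first. A self-contained sketch of my own (it assumes only that openssh-client is installed and never touches your real ~/.ssh):

```shell
# Rehearse the key setup above in a scratch directory.
command -v ssh-keygen >/dev/null || { echo "openssh-client not installed"; exit 0; }
d=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$d/id_rsa"       # empty passphrase for the demo only
cat "$d/" >> "$d/authorized_keys"     # the 'cat >> authorized_keys' step
chmod 700 "$d" && chmod 600 "$d/id_rsa" "$d/authorized_keys"
grep -c "ssh-rsa" "$d/authorized_keys"           # 1 => public key is in place
rm -rf "$d"
```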


Ubuntu's default rsh is a fake rsh, just a link to the ssh package.
sudo apt-get remove rsh
sudo apt-get install rsh-redone-server rsh-redone-client


sudo apt-get install apt-cacher-ng

- fill the cache from CD-ROM
- after the install you will find instructions at http://localhost:3142
- create and add the following file to the server's /etc/apt/apt.conf.d/ as well as to the ltsp image root /opt/ltsp/amd64/etc/apt

sudo gedit /etc/apt/apt.conf.d/02proxy
Acquire::http { Proxy ""; };
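The proxy URL lost its host on this page; for reference, the line in /etc/apt/apt.conf.d/02proxy has this shape, where the hostname is a placeholder and 3142 is apt-cacher-ng's default port:

```
Acquire::http { Proxy "http://your-ltsp-server:3142"; };
```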


Bind9 installation
IPv6 SixXS reverse domain

The source installation section is under construction; there you will find:

-build scripts, dir /Bx

-general flags, dir /Bx/flag

-download-code script, dir /Bx/download

-changed source files, dir /Sx

Basic "compile from sources" compatibility solutions

To get the sources to work with each other we must standardize the communication between the libraries, and I have done it like this:
Memory model LP64: int 32-bit, real 64-bit, pointers/float 64-bit. To get this we use -m64 with Fortran and -DIDXSIZE32 with metis; for Fortran, -fdefault-real-8 -fdefault-double-8 does the work.

Calling convention: the actual name of a function at the linker level should be the same everywhere; the flag -DAdd_ adds one underscore to the Fortran names and does the job.

Unfortunately the way gcc treats linking has changed: to make it keep all the libraries until the very end of linking you sometimes need to add -Wl,--no-as-needed,
and -fno-align-commons also gives some help.

Ubuntu 12.04 comes with GCC 4.6.3, and GCC 4.6 does not support Bulldozer architecture optimisation; it becomes possible with GCC 4.7.
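The LP64 model above is easy to verify on the build machine. A throwaway check of my own (not part of any of the packages); on an x86-64 build you should see int=4 with long, pointer and double all at 8 bytes:

```shell
# Print the type widths that all the libraries must agree on (LP64 model).
cat > /tmp/lp64_check.c <<'EOF'
#include <stdio.h>
int main(void)
{
    printf("int=%zu long=%zu pointer=%zu double=%zu\n",
           sizeof(int), sizeof(long), sizeof(void *), sizeof(double));
    return 0;
}
EOF
if command -v gcc >/dev/null; then
    gcc -m64 -o /tmp/lp64_check /tmp/lp64_check.c && /tmp/lp64_check
else
    getconf LONG_BIT    # 64 here also indicates an LP64 userland
fi
```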


I got Elmer working the first time, but only with mpich2
compiled from sources and ALL MPI-related distribution files removed!

apt-get install mpich2 mpich2-doc libmpich2-dev libmpich2-3
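Before rebuilding anything on top of MPI it is worth a smoke test that the MPI stack itself runs. A sketch of my own (the path /tmp/mpi_hello.c is arbitrary; the block skips the run when mpich2 is not installed yet):

```shell
# Write a minimal MPI hello world and run it with 4 ranks.
cat > /tmp/mpi_hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
if command -v mpicc >/dev/null && command -v mpiexec >/dev/null; then
    mpicc -o /tmp/mpi_hello /tmp/mpi_hello.c
    mpiexec -n 4 /tmp/mpi_hello
else
    echo "mpich2 not installed yet - source written to /tmp/mpi_hello.c"
fi
```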


To find bottlenecks you need some statistics from the system. Cacti can collect the needed information.

sudo apt-get install snmpd

sudo apt-get install snmp-mibs-downloader

sudo apt-get install cacti

LTSP install Links:

ltsp fat client

sudo gedit /etc/ltsp/ltsp-build-client.conf

# The chroot architecture.



# ubuntu-desktop and edubuntu-desktop are tested.

# If you test with [k|x]ubuntu-desktop, edit this page and mention if it worked OK.

# kubuntu lucid (10.10) working okay.


# Space separated list of programs to install.

# The java plugin installation contained in ubuntu-restricted-extras

# needs some special care, so let's use it as an example.


# This is needed to answer "yes" to the Java EULA.

# We'll create that file in the next step.


# This uses the server apt cache to speed up downloading.

# This locks the servers dpkg, so you can't use apt on

# the server while building the chroot.



# created under /opt/ltsp

sudo ltsp-build-client

sudo ltsp-chroot -c -p


dpkg-reconfigure tzdata

#local languages

dpkg-reconfigure locales

# system software

#apt-get install openmpi-common openmpi-bin

#apt-get install libscotch-dev libopenmpi-dev

apt-get install nfs-client nfs-common samba cifs-utils

apt-get install playonlinux

#powerwake powernap

apt-get install mythtv-frontend

apt-get install gimp darktable

apt-get install mytharchive mythgallery mythgame mythmusic mythnews mythweather

sudo ltsp-chroot -c -p
adduser root
adduser mpi

passwd root

passwd mpi

apt-get update

sudo mkdir /opt/ltsp/amd64/mpi

sudo mkdir /opt/ltsp/amd64/home/mpi

sudo cp -R /home/mpi/.ssh /opt/ltsp/amd64/home/mpi # environment

sudo cp /home/mpi/.bashrc /opt/ltsp/amd64/home/mpi
sudo cp /home/mpi/.bash_login /opt/ltsp/amd64/home/mpi

#sudo cp -R /root/.ssh /opt/ltsp/amd64/root

sudo ltsp-chroot -c -p

cd /home

chown -R mpi:mpi mpi

cd /home/mpi

chmod 700 .ssh

chmod 600 .ssh/id*

chmod 644 .ssh/known_hosts

chmod 644 .ssh/authorized_keys

cd /root

chmod 700 .ssh

chmod 600 .ssh/id*

chmod 644 .ssh/known_hosts

chmod 644 .ssh/authorized_keys

mkdir /mpi

mkdir /mpi2

mkdir /mpi3

apt-get install quota

apt-get install wine jstest-gtk wine playonlinux ubuntu-restricted-extras mythplugins

apt-get install ltsp-server-standalone

apt-get install syslog-ng bacula-fd ntp


sudo ltsp-update-sshkeys

sudo ltsp-update-image -n

sudo ltsp-update-kernels


-diagnostics: apt-get install wireshark

# NFS filesystem


/etc/fstab :

/mpi2    /export/mpi2   none    bind  0  0
/mpi3    /export/mpi3   none    bind  0  0

/etc/exports :


service idmapd restart 
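The /etc/exports content is site-specific and omitted above. Purely as an illustration (the subnet, the fsid layout and the options here are my assumptions, not taken from this installation), an NFSv4 export of the bind mounts could look like:

```
# /etc/exports - hypothetical example; adjust the subnet to your LAN
/export,fsid=0,no_subtree_check,sync)
/export/mpi2,no_subtree_check,sync)
/export/mpi3,no_subtree_check,sync)
```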

#CLIENT info at server

/var/lib/tftpboot/ltsp/amd64/lts.conf :

    FSTAB_1=" /mpi3 nfs4 _netdev,auto 0 0"
    FSTAB_2=" /mpi nfs4 _netdev,auto 0 0"
    FSTAB_3="// /media/ST1 cifs guest,rw,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0"
    FSTAB_4=" /mpi2 nfs4 _netdev,auto 0 0"

# /etc/fstab: static file system information.
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0
# / was on /dev/md0 during installation
UUID=24d52ebb-9a49-4949-831a-0702ca032406 /               ext4    errors=remount-ro 0       1
# /boot/efi was on /dev/sda1 during installation
UUID=9AD9-4C15  /boot/efi       vfat    defaults        0       1
# /home was on /dev/md1 during installation
UUID=a9091c8f-31e2-43af-8167-88d8191a4f2c /home           ext4    usrquota,grpquota 0       2
# /var was on /dev/md2 during installation
UUID=3dae13d4-6310-417a-bc67-e987638dbe54 /var            ext4    defaults        0       2
# swap was on /dev/sda5 during installation
UUID=c5de965b-1adc-40e4-ab8b-cddb8bd153c6 none            swap    sw              0       0
# swap was on /dev/sdb5 during installation
UUID=bc109a85-d598-41e4-8805-a8bce1c68aa2 none            swap    sw              0       0
/dev/md127 /mpi3 ext4 rw,nosuid,nodev,uhelper=udisks 0 0
 /mpi nfs4 _netdev,auto 0 0
// /media/ST1 cifs guest,rw,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0
#/mpi2    /export/mpi2   none    bind  0  0
/dev/sr0 /media/cdrom iso9660,udf user,noauto,exec,utf8,uid=1000,mode=0777 0 0
#//path to your windows share/    /your_mount_path cifs username=your_username,password=your_password,workgroup=your_workgroup,users,auto,user_xattr 0 0 




Links: Failover primary and secondary servers

FOR IPv6 you need to run a second ISC DHCP daemon with its own configuration for IPv6.

Another solution is the radvd daemon, but I did not get it to work with ISC DHCP serving the IPv4 clients.

# check that isc-dhcp is the only dhcp server installed on this machine

gedit /etc/ltsp/dhcpd.conf


# Default LTSP dhcpd.conf config file.


failover peer "dhcp-failover" {
  primary; # declare this to be the primary server
  port 647;
  peer address;
  peer port 647;
  max-response-delay 30;
  max-unacked-updates 10;
  split 128;
  mclt 3600;
  load balance max seconds 3;
}

subnet netmask {

pool {
    failover peer "dhcp-failover";
    deny dynamic bootp clients;
    max-lease-time 1800; # 30 minutes
}

pool {
      max-lease-time 1800; # 30 minutes
}

    option domain-name "";
#    option domain-name-servers,;
    option domain-name-servers,;
    option broadcast-address;
    option routers;

#    get-lease-hostnames true;
    option subnet-mask;
    option root-path "/opt/ltsp/amd64";

    if substring( option vendor-class-identifier, 0, 9 ) = "PXEClient" {
        filename "/ltsp/amd64/pxelinux.0";
    } else {
        filename "/ltsp/amd64/nbi.img";
    }

   allow booting;
   allow bootp;
}
#subnet ends

#static lease

  host ugh {
                hardware ethernet  xxxxxxxxxxxxxxxx;
  }
  host kaak {
                hardware ethernet  xxxxxxxxxxxxxxxx;
  }
  host {
                hardware ethernet xxxxxxxxxxxxxxxx;
  }
  host JuuliaHP {
                hardware ethernet  xxxxxxxxxxxxxxxx;
  }
  host JoniXperiaActiv {
                hardware ethernet  xxxxxxxxxxxxxxxx;
  }
  host ipadHanna {
                hardware ethernet xxxxxxxxxxxxxxxx;
  }
  host KINDLEJoni {
                hardware ethernet xxxxxxxxxxxxxxxx;
  }
  host KurrolaStorage1 {
                hardware ethernet  xxxxxxxxxxxxxxxx;
  }
  host paula {
                hardware ethernet  xxxxxxxxxxxxxxxx;
  }
  host EPSONSX420W {
                hardware ethernet  xxxxxxxxxxxxxxxx;
  }
  host mpi1 {
                hardware ethernet  xxxxxxxxxxxxxxxx;
  }



More about HPC. Most of the work is done by the compile/make script distribution at .

Needed calculation libraries, shortlist:





Before production use, one way to check the basic installation is to take part in the HPCC speed challenge

and benchmark your system against the others; here is my very first trial at - mpi4you!

That way you recognize the bottlenecks in your system and get a realistic understanding of what it can do for you.
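A run could look like the sketch below (my assumptions: an hpcc binary built against your MPI, and an hpccinf.txt tuned for 8 ranks, e.g. P x Q = 2 x 4, in the current directory):

```shell
# Run the HPCC benchmark on all 8 cores of one FX-8150 node.
if command -v hpcc >/dev/null; then
    mpiexec -n 8 hpcc                   # reads ./hpccinf.txt
    grep -m1 HPL_Tflops hpccoutf.txt    # headline HPL result from the summary
else
    echo "hpcc not built yet"
fi
```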

Compile methods:

a) cmake is used with VTK, HDF5, ParaView, parmetis, metis

b) make is used with ARPACK, MUMPS 4.1, ScaLAPACK ( scalapack_installer ), scotch_esmumps

c) ./configure is used with hypre, mpich, netgen, elmerfem


Links: download and compile instructions, VTK wiki

To get the space ball working, install vrpn and configure it for the spaceball (uncomment the vrpn_3DConnexion_Navigator device0 line and comment out the one that was activated).

sudo apt-get install libudev-dev libusb-dev libfox-dev
sudo apt-get install autotools-dev

git clone git://

cd /mpi3/S2

./configure --prefix=/mpi3/C2/hidapi
sudo make install

cd ..
# download vrpn
# mkdir vrpn-BUILD
# cmake -i ../vrpn
cd vrpn-BUILD
sudo make install

Download Paraview

mkdir /mpi3/S2/ParaView6
cd /mpi3/S2/ParaView6
git clone git:// /mpi3/S2/ParaView6
git checkout master
git pull origin master
git submodule update --init

Run the SpaceBall patch ( thanks to Cory for this patch ) before compilation of ParaView

git fetch refs/changes/40/10740/3 && git checkout FETCH_HEAD -b add_space_navigator_grab_world_style

cd /mpi3/S2
mkdir ParaView6-BUILD
cd ParaView6-BUILD
cmake -i ../ParaView6

#gedit cmake

make install


Links: Salomé-Méca: Code_Aster works but crashes from time to time, and
the GUI does not work correctly.

Salomé source: Debian 6.0 Squeeze 64-bit can be installed as a
binary. It cannot be built from the SOURCES at the moment.
Just download the InstallWizard, extract it and run the install.

Read: VTK & SpaceBall

download Salomé Source: Debian 6.0 Squeeze 64bit
cd /mpi3/S2/InstallWizard_6.6.0_Debian_6.0_64bit


Prerequisites for binary build:
sudo apt-get install freeglut3
sudo ln -s /usr/lib/ /usr/lib/
sudo ln -s /usr/lib/ /usr/lib/
sudo ln -s /usr/lib/ /usr/lib/

Elmer parallel for Ubuntu 12.04

Compiling Elmer parallel
Updated Elmer binary for Ubuntu 12.04: PPA by Tormod Volden

On this system Elmer is installed on the RAID0 SSD disk;
elmer will live under the /mpi3 directory.

You should install Elmer as the user mpi to get it fully
working on the cluster.

After installation the Elmer binaries will be at /mpi3/elmer/bin and the sources at /mpi3/elmerfem.

The trunk sources will be used in the directory /mpi3/elmerfem/trunk

To build the binaries:

a) first download all the code and libraries: run the commented-out #sudo #svn
lines in the first section of  once.

b) make the changes to ElmerGUI.pri

c) after that run ./  I use
as the storage of the information needed to download all the libraries, ...

To start the GUI use

If you need to compile again, run ./ first.

#Load Elmer sources:
mkdir /mpi3
cd /mpi3
mkdir elmer
svn co elmerfem

#the pardiso library and licence file go into the directory elmer/lib
#pardiso is available under a separate licence from

#Do changes /mpi3/elmerfem/trunk/ElmerGUI/ElmerGUI.pri
#cd /usr/include
#sudo ln -s vtk-5.8/* ./*

gedit /mpi3/elmerfem/trunk/ElmerGUI/ElmerGUI.pri
#                       ElmerGUI: configuration file

# Optional components (undefine or comment out to exclude from compilation):
DEFINES += EG_QWT      # Use QWT for convergence monitor?
DEFINES += EG_VTK      # Use VTK for postprocessing?
DEFINES += EG_MATC     # Use MATC for internal operations in postprocessing?
DEFINES += EG_OCC      # Use OpenCASCADE 6.3 for importing CAD files? Needs VTK.
DEFINES -= EG_PYTHONQT # Use PythonQt for scripting in post processor?

# 64 bit system?
BITS = 64
# Python library:
unix {
   PY_INCLUDEPATH = /usr/include/python3.2
   PY_LIBPATH = /usr/lib
   PY_LIBS = -lpython3.2
}

unix {
   VTK_INCLUDEPATH = /usr/include/vtk-5.8
   VTK_LIBPATH = /usr/lib
   VTK_LIBS = -lQVTK \
              -lvtkCommon \
              -lvtkDICOMParser \
              -lvtkFiltering \
              -lvtkGenericFiltering \
              -lvtkGraphics \
              -lvtkHybrid \
              -lvtkIO \
              -lvtkImaging \
              -lvtkInfovis \
              -lnetcdf \
              -lvtkRendering \
              -lvtkViews \
              -lvtkVolumeRendering
}

run in the install directory

cd /mpi3


build script - the first time, run the commented-out
parts by copy-pasting them into a terminal
#!/bin/sh -f
export ElmerMPI_DIR="$( cd "$( dirname "$0" )" && pwd )"

#svn co elmerfem
#sudo apt-get install libvtk5-qt4-dev
cd elmerfem/trunk

# Choose the installation directory and set up the environment:
export ELMER_HOME=$ElmerMPI_DIR/elmer
export ELMER_POST_HOME=$ELMER_HOME/share/elmerpost
export ELMER_FRONT_HOME=$ELMER_HOME/share/elmerfront

#the compilers
export CC=mpicc
export CXX=mpic++
export FC=mpif90
export F77=mpif90

#the compiler flags
export CFLAGS="-I/usr/include/tcl8.5"
export CXXFLAGS="-I/usr/include/tcl8.5"
export FCFLAGS=""
export F77FLAGS=""
export FFLAGS=""
#export LDFLAGS=""

#if PARDISO available
#export LDFLAGS="-L/mpi4/elmer/lib/ -lpardiso412-GNU443-MPI-X86-64 -lgfortran -fopenmp -lpthread -lm"

#-Ofast -ipa -lacml -lfortran -lffio -lacml mv -lmv


# Modules definition and configure options setting:
modules="matc umfpack mathlibs elmergrid meshgen2d eio hutiter fem"
for m in $modules; do
  cd $m
  ./configure --prefix=$ELMER_HOME --with-64bits=yes --with-mpi=yes
  make install
  cd ..
done

cd post
export CXXFLAGS="-I/usr/include/FTGL -I/usr/include/tcl8.5"
./configure --prefix=$ELMER_HOME
make install
cd ..

cd front
./configure --prefix=$ELMER_HOME
make install
cd ..

cd ElmerGUI
echo "now in ElmerGUI"
cp ./Application/ElmerGUI $ELMERGUI_HOME/ElmerGUI
cp -r ./Application/edf $ELMERGUI_HOME

cd ..

======== START UP ElmerGUI  ==========
#!/bin/sh -f
export ElmerMPI_DIR="$( cd "$( dirname "$0" )" && pwd )"

export ELMER_HOME=$ElmerMPI_DIR/elmer



#!/bin/sh -f

cd elmerfem/trunk

# Modules definition and configure options setting:
modules="matc umfpack mathlibs elmergrid meshgen2d eio hutiter fem"
for m in $modules; do
  cd $m
  make clean
  cd ..
done

cd front
make clean
cd ..

cd post
make clean
cd ..

cd ElmerGUI
make clean

cd Application
make clean
cd ..

#All ok?

cd elmer/bin

MAIN: =============================================================

MAIN: ElmerSolver finite element software, Welcome!

MAIN: This program is free software licensed under (L)GPL

MAIN: Copyright 1st April 1995 - , CSC - IT Center for Science Ltd.

MAIN: Webpage, Email

MAIN: Library version: 7.0 (Rev: 5825)

MAIN: Running in parallel using 18 tasks.

MAIN: HYPRE library linked in.

MAIN: MUMPS library linked in.

MAIN: PARDISO library linked in.

MAIN: =============================================================

cd /mpi3/elmerfem/trunk/fem/tests

joni@mpi1:/mpi3/elmerfem/trunk/fem/tests$ ./runtests
$ELMER_HOME undefined, setting it to ../src
test 1 :                   1dtests         [PASSED], CPU time=0.06
test 2 :                   1sttime         [PASSED], CPU time=0.43
test 3 :                   2ndtime         [PASSED], CPU time=0.74
test 4 :               adaptivity1         [PASSED], CPU time=0.98
test 5 :               adaptivity2         [PASSED], CPU time=1.19
test 6 :               adaptivity3         [PASSED], CPU time=1.74
test 7 :               adaptivity4         [PASSED], CPU time=2
test 8 :               adaptivity5         [PASSED], CPU time=2.22
test 9 :                 adv_diff1         [PASSED], CPU time=2.82
test 10 :                 adv_diff2         [PASSED], CPU time=3.46
test 11 :                 adv_diff3         [PASSED], CPU time=5.68
test 12 :                 adv_diff4         [PASSED], CPU time=9.11
test 13 :            AdvDiffHandles         [PASSED], CPU time=9.33
test 14 :           AdvDiffHandles2         [PASSED], CPU time=10.25
test 15 :           AdvDiffHandles3         [PASSED], CPU time=10.46
test 16 :                AdvReactDG         [PASSED], CPU time=10.81
test 17 :              AdvReactDG_P         [PASSED], CPU time=11.24
test 18 :                    amultg         [PASSED], CPU time=12.36
test 19 :                   amultg2         [PASSED], CPU time=12.79
test 20 :              beam-springs         [PASSED], CPU time=13.64
test 21 :                 bentonite         [PASSED], CPU time=13.72
test 22 :            BlockLinElast1         [PASSED], CPU time=14.04
test 23 :            BlockLinElast2         [PASSED], CPU time=14.18
test 24 :           BlockLinElast2b         [PASSED], CPU time=14.31
test 25 :            BlockLinElast3         [PASSED], CPU time=15.79
test 26 :             BlockPoisson1         [PASSED], CPU time=15.85
test 27 :             BlockPoisson2         [PASSED], CPU time=15.99
test 28 :             BlockPoisson3         [PASSED], CPU time=16.28
test 29 :                   bodydir         [PASSED], CPU time=16.37
test 30 :                  bodydir2         [PASSED], CPU time=20.36
test 31 :                  bodyload         [PASSED], CPU time=20.44
test 32 :                  buckling         [PASSED], CPU time=24.36
test 33 :         CapacitanceMatrix         [PASSED], CPU time=24.54
test 34 :                 CavityLid         [PASSED], CPU time=25.04
test 35 :                CavityLid2         [PASSED], CPU time=28.68
test 36 :               channel_v2f         [PASSED], CPU time=34.9
test 37 :                    cmultg         [PASSED], CPU time=38.89
test 38 :                   coating         [PASSED], CPU time=42.06
test 39 :         CoordinateScaling         [PASSED], CPU time=42.14
test 40 :           CoupledPoisson1         [PASSED], CPU time=42.21
test 41 :          CoupledPoisson1b         [PASSED], CPU time=42.29
test 42 :           CoupledPoisson2         [PASSED], CPU time=43.58
test 43 :          CoupledPoisson2b         [PASSED], CPU time=45
test 44 :           CoupledPoisson3         [PASSED], CPU time=45.08
test 45 :           CoupledPoisson4         [PASSED], CPU time=45.15
test 46 :           CoupledPoisson5         [PASSED], CPU time=45.25
test 47 :           CoupledPoisson6         [PASSED], CPU time=45.32
test 48 :           CoupledPoisson7         [PASSED], CPU time=45.4
test 49 :           CoupledPoisson8         [PASSED], CPU time=45.48
test 50 :           CoupledPoisson9         [PASSED], CPU time=45.56
test 51 :                   current         [PASSED], CPU time=45.76
test 52 :      current_heat_control         [PASSED], CPU time=46.2
test 53 :           CurvedBndryPFEM         [PASSED], CPU time=46.26
test 54 :                 dft-water         [PASSED], CPU time=46.26
test 55 :               diffuser_sa         [PASSED], CPU time=54.64
test 56 :              diffuser_sst         [PASSED], CPU time=60.6
test 57 :              diffuser_v2f         [PASSED], CPU time=75.99
test 58 :      DivergenceAnalytic2D         [PASSED], CPU time=76.87
test 59 :             el_adaptivity         [PASSED], CPU time=77.01
test 60 :         ElastElstat1DBeam         [PASSED], CPU time=77.16
test 61 :           ElastElstatBeam         [PASSED], CPU time=77.46
test 62 :                elasticity         [PASSED], CPU time=77.99
test 63 :        ElasticLubrication         [PASSED], CPU time=80.55
test 64 :                ElastPelem         [PASSED], CPU time=85.2
test 65 :           Electrokinetics         [PASSED], CPU time=97.87
test 66 :                    elstat         [PASSED], CPU time=102.1
test 67 :              elstat_infty         [PASSED], CPU time=105.89
test 68 :             elstat_source         [PASSED], CPU time=106.03
test 69 :     ExtrusionStructured2D         [PASSED], CPU time=108.68
test 70 :             FlowResNoslip         [PASSED], CPU time=117.48
test 71 :               FlowResSlip         [PASSED], CPU time=126.22
test 72 :                fluxsolver         [PASSED], CPU time=126.31
test 73 :               fluxsolver2         [PASSED], CPU time=126.39
test 74 :               fluxsolver3         [PASSED], CPU time=126.69
test 75 :                  freesurf         [PASSED], CPU time=127.4
test 76 :              freesurf_axi         [PASSED], CPU time=128.71
test 77 :              freesurf_int         [PASSED], CPU time=129.73
test 78 :              freesurf_ltd         [PASSED], CPU time=138.1
test 79 :                  fsi_beam         [PASSED], CPU time=139.49
test 80 :         fsi_beam_optimize         [PASSED], CPU time=146.08
test 81 :                   fsi_box         [PASSED], CPU time=148.21
test 82 :                  fsi_box2         [PASSED], CPU time=149.63
test 83 :                 geomstiff         [PASSED], CPU time=149.87
test 84 :                    gmultg         [PASSED], CPU time=150.16
test 85 :               HeatControl         [PASSED], CPU time=150.23
test 86 :              HeatControl2         [PASSED], CPU time=150.37
test 87 :                    heateq         [PASSED], CPU time=150.74
test 88 :               heateq_bdf2         [PASSED], CPU time=150.97
test 89 :               heateq_bdf3         [PASSED], CPU time=151.19
test 90 :            heateq_newmark         [PASSED], CPU time=151.35
test 91 :     heateq_newmark_global         [PASSED], CPU time=151.5
test 92 :                heateq-par         [PASSED], CPU time=151.59
test 93 :                   HeatGap         [PASSED], CPU time=151.69
test 94 :              HelmholtzBEM         [PASSED], CPU time=152.35
test 95 :             HelmholtzEdge         [PASSED], CPU time=152.64
test 96 :             HelmholtzFace         [PASSED], CPU time=152.94
test 97 :              HelmholtzFEM         [PASSED], CPU time=153.15
test 98 :       HelmholtzPlaneWaves         [PASSED], CPU time=153.66
test 99 :   HelmholtzPlaneWavesAxis         [PASSED], CPU time=154.2
test 100 :        HelmholtzStructure         [PASSED], CPU time=154.85
test 101 :       HelmholtzStructure2         [PASSED], CPU time=155.33
test 102 :       HelmholtzStructure3         [PASSED], CPU time=156.88
test 103 :          InductionHeating         [PASSED], CPU time=157.36
test 104 :         InductionHeating2         [PASSED], CPU time=158.15
test 105 :                    L2norm         [PASSED], CPU time=160.29
test 106 :                 levelset1         [PASSED], CPU time=165.11
test 107 :                 levelset2         [PASSED], CPU time=167.31
test 108 :                 levelset3         [PASSED], CPU time=174.16
test 109 :                levelset3b         [PASSED], CPU time=180.73
test 110 :         LimitDisplacement         [PASSED], CPU time=184.43
test 111 :          LimitTemperature         [PASSED], CPU time=189.43
test 112 :         LimitTemperature2         [PASSED], CPU time=192.22
test 113 :             linearsolvers         [PASSED], CPU time=192.45
test 114 :       linearsolvers_cmplx         [PASSED], CPU time=193.49
test 115 :     LubricationTunedForce         [PASSED], CPU time=194.21
test 116 :                 marangoni         [PASSED], CPU time=194.34
test 117 :         MeshRefineGrading         [PASSED], CPU time=197.73
test 118 :              mgdyn_3phase         [PASSED], CPU time=201.81
test 119 :                  mgdyn_bh         [PASSED], CPU time=232.21
test 120 :            mgdyn_harmonic         [PASSED], CPU time=246.3
test 121 :              mgdyn_steady         [PASSED], CPU time=256.66
test 122 :               mgdyn_torus         [PASSED], CPU time=262.11
test 123 :      mgdyn_torus_harmonic         [PASSED], CPU time=269.4
test 124 :           mgdyn_transient         [PASSED], CPU time=291.99
test 125 :                       mhd         [PASSED], CPU time=294.85
test 126 :                      mhd2         [PASSED], CPU time=298.2
test 127 :                 multimesh         [PASSED], CPU time=298.52
test 128 :         NaturalConvection         [PASSED], CPU time=304.29
test 129 :   NonnewtonianChannelFlow         [PASSED], CPU time=305
test 130 :                   normals         [PASSED], CPU time=305.07
test 131 :        NormalTangentialBC         [PASSED], CPU time=307.2
test 132 : OptimizeSimplexFourHeaters         [PASSED], CPU time=311.06
test 133 :          ParticleAdvector         [PASSED], CPU time=312.96
test 134 :         ParticleAdvector2         [PASSED], CPU time=316.34
test 135 :         ParticleAdvector3         [PASSED], CPU time=323.75
test 136 :   ParticleAdvectorZalesak         [PASSED], CPU time=328.92
test 137 :           ParticleFalling         [PASSED], CPU time=329.07
test 138 :           ParticleHeating         [PASSED], CPU time=339.2
test 139 :                   passive         [PASSED], CPU time=339.92
test 140 :                 periodic1         [PASSED], CPU time=340.04
test 141 :                 periodic2         [PASSED], CPU time=340.17
test 142 :                 periodic3         [PASSED], CPU time=340.25
test 143 :         periodic_explicit         [PASSED], CPU time=341.49
test 144 :    periodic_nonconforming         [PASSED], CPU time=341.59
test 145 :           periodic_offset         [PASSED], CPU time=341.7
test 146 :          periodic_offset2         [PASSED], CPU time=342.42
test 147 :              periodic_rot         [PASSED], CPU time=342.53
test 148 :               PhaseChange         [PASSED], CPU time=346.09
test 149 :              PhaseChange2         [PASSED], CPU time=346.94
test 150 :              PhaseChange3         [PASSED], CPU time=347.83
test 151 :                     piezo         [PASSED], CPU time=347.97
test 152 :                    plates         [PASSED], CPU time=348.07
test 153 :                    pmultg         [PASSED], CPU time=349.36
test 154 :                  pointdir         [PASSED], CPU time=349.43
test 155 :                 pointload         [PASSED], CPU time=349.51
test 156 :                pointload2         [PASSED], CPU time=351.69
test 157 :                PoissonBEM         [PASSED], CPU time=351.79
test 158 :          PoissonBoltzmann         [PASSED], CPU time=356.47
test 159 :                 PoissonDG         [PASSED], CPU time=356.66
test 160 :               PoissonPFEM         [PASSED], CPU time=356.74
test 161 :                PorousPipe         [PASSED], CPU time=357.54
test 162 :                      Q1Q0         [PASSED], CPU time=357.7
test 163 :                 radiation         [PASSED], CPU time=357.9
test 164 :                radiation2         [PASSED], CPU time=358.09
test 165 :               radiation2d         [PASSED], CPU time=360.01
test 166 :               radiation3d         [PASSED], CPU time=364.17
test 167 :                    reload         [PASSED], CPU time=364.97
test 168 :                 reynolds1         [PASSED], CPU time=367.86
test 169 :                 reynolds2         [PASSED], CPU time=368.55
test 170 :                 reynolds3         [PASSED], CPU time=373.73
test 171 :                reynolds3b         [PASSED], CPU time=378.24
test 172 :                  rgdblock         [PASSED], CPU time=380.04
test 173 :              RichardsDyke         [PASSED], CPU time=384.67
test 174 :             RichardsDyke2         [PASSED], CPU time=394.69
test 175 :          RigidMeshMapper1         [PASSED], CPU time=395.36
test 176 :          RigidMeshMapper2         [PASSED], CPU time=395.73
test 177 :                 rot_aniso         [PASSED], CPU time=399.02
test 178 :              RotatingFlow         [PASSED], CPU time=399.8
test 179 :                   rotflow         [PASSED], CPU time=400.55
test 180 :               savescalars         [PASSED], CPU time=400.84
test 181 :      savescalars_boundary         [PASSED], CPU time=400.96
test 182 :          savescalars_flux         [PASSED], CPU time=401.05
test 183 :            ShallowWaterNS         [PASSED], CPU time=403.49
test 184 :                     shell         [PASSED], CPU time=403.71
test 185 :                    shell2         [PASSED], CPU time=404.31
test 186 :                staged_sim         [PASSED], CPU time=405.64
test 187 :                   Step_ke         [PASSED], CPU time=407.57
test 188 :                   Step_ns         [PASSED], CPU time=408
test 189 :                   Step_sa         [PASSED], CPU time=416.5
test 190 :            Step_sst-kw-wf         [PASSED], CPU time=434.12
test 191 :               Step_stokes         [PASSED], CPU time=434.24
test 192 :         Step_stokes_block         [PASSED], CPU time=434.45
test 193 :                  Step_v2f         [PASSED], CPU time=458.62
test 194 :                StokesPFEM         [PASSED], CPU time=458.75
test 195 :                StokesProj         [PASSED], CPU time=459.18
test 196 :       StrainCalculation01         [PASSED], CPU time=459.46
test 197 :       StrainCalculation02         [PASSED], CPU time=465.41
test 198 :       StrainCalculation03         [PASSED], CPU time=472.1
test 199 :               streamlines         [PASSED], CPU time=472.44
test 200 :                    stress         [PASSED], CPU time=472.74
test 201 :                 structmap         [PASSED], CPU time=472.85
test 202 :                structmap2         [PASSED], CPU time=474.1
test 203 :                structmap3         [PASSED], CPU time=486.62
test 204 :           ThermalActuator         [PASSED], CPU time=551.68
test 205 :            ThermalBiMetal         [PASSED], CPU time=551.84
test 206 :           ThermalBiMetal2         [PASSED], CPU time=552.04
test 207 :           ThermalCompress         [PASSED], CPU time=552.89
test 208 :            ThermoElectric         [PASSED], CPU time=553.02
test 209 :                 TimeAdapt         [PASSED], CPU time=555.13
test 210 :                  TimeFunc         [PASSED], CPU time=557.59
test 211 :                    tresca         [PASSED], CPU time=557.76
test 212 :                  vortex2d         [PASSED], CPU time=558.9
test 213 :                  vortex3d         [PASSED], CPU time=560.19
test 214 :                   WaveEqu         [PASSED], CPU time=560.33
test 215 :         WeightComputation         [PASSED], CPU time=560.41
Tests completed, passed: 215 out of total 215 tests
Cumulative CPU time used in test: 560.41 s
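The log above is the output of Elmer's bundled regression test suite. A minimal sketch of how such a run is produced (the source-tree path and the `make check` target are assumptions based on the Elmer source layout of that era; adjust to your checkout):

```shell
# Run the Elmer regression tests from the fem subdirectory of the
# Elmer source tree. Each test prints a "test N : name [PASSED]" line
# like the log above; the path below is an assumption.
cd ~/elmer/fem
make check
```

On a pass, the suite ends with a summary line counting passed tests, as in the log above.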

Netgen parallel

Source: netgen-parallel

apt-get install libtogl1 libtogl-dev tcl8.5-dev tk8.5-dev tix-dev libopencascade-dev libjpeg-dev libavcodec-dev
(the exact names of the OpenCASCADE, JPEG and FFmpeg development packages may differ on your release; the original note listed them loosely as "occ", "jpeglib" and "ffmpeglib")
./configure --enable-parallel  --with-tcl=/usr/lib/tcl8.5 --with-tk=/usr/lib/tk8.5 --with-occ=/usr/lib/ --with-togl=/usr/lib --enable-jpeglib --enable-ffmpeg
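The configure line above is one step of a standard autotools build. A hedged sketch of the full sequence (the install prefix and job count are assumptions; the configure flags are those given above):

```shell
# Build MPI-parallel Netgen from an unpacked source directory.
# --prefix and -j8 are assumptions (8 cores on the FX-8150 machines);
# the remaining flags come from the note above.
cd netgen
./configure --enable-parallel \
    --with-tcl=/usr/lib/tcl8.5 --with-tk=/usr/lib/tk8.5 \
    --with-occ=/usr/lib/ --with-togl=/usr/lib \
    --enable-jpeglib --enable-ffmpeg \
    --prefix=/usr/local
make -j8
sudo make install
```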

Build instructions:

Google Group: Ubuntu Parallel

Joni-Pekka Kurronen,
Nov 14, 2012, 3:37 AM