SELF-Fluids source code is available on GitHub from Fluid Numerics.
$ git clone https://github.com/FluidNumerics/SELF -b stale/self-fluids
If you do not have access to the SELF-Fluids Singularity container, no worries! You can still build SELF-Fluids using the autoconf build system!
SELF-Fluids depends on:
A Fortran 2008 compiler (currently tested with the GNU and PGI compilers)
HDF5, built with Fortran support
METIS
For information on building HDF5 with the PGI compilers and Fortran support, see the Fluid Numerics HDF5 build documentation.
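For a quick start on a Debian-based system, the CPU-only dependencies can often be installed from the package manager; the package names below are an assumption and may differ on your distribution, and the PGI/CUDA builds require a separate compiler installation.
$ sudo apt-get install gfortran libhdf5-dev libmetis-dev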
To build SELF-Fluids to run as a serial process on a CPU, simply execute
$ ./configure \
--with-metis=/path/to/libmetis.a \
--with-hdf5=/path/to/h5cc \
--prefix=/path/to/install
$ make
$ make install
making the appropriate substitutions for the paths to libmetis.a, h5cc, and the directory where you want SELF-Fluids installed.
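For instance, if METIS is installed under /opt/metis, HDF5 under /opt/hdf5, and you want SELF-Fluids placed in $HOME/self-fluids (all of these paths are hypothetical), the configure step would look like
$ ./configure \
--with-metis=/opt/metis/lib/libmetis.a \
--with-hdf5=/opt/hdf5/bin/h5cc \
--prefix=$HOME/self-fluids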
Running make install places a serial CPU build of SELF-Fluids under the chosen prefix with the following install tree:
bin/sfluid
lib/libself.a
include/*.mod
The sfluid binary can be used to run the structured mesh generator, the initial condition generator, and the DGSEM compressible Navier-Stokes solver.
libself.a and the include directory can be used to build your own spectral element applications using the Spectral Element Libraries in Fortran.
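As a rough sketch of linking your own application against the installed library with a GNU build, assuming a hypothetical source file myapp.f90, a compile line could look like the following; depending on which features your application uses, you may also need to link HDF5 and METIS.
$ gfortran -I/path/to/install/include myapp.f90 \
/path/to/install/lib/libself.a -o myapp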
To build SELF-Fluids with GPU acceleration (single GPU), you must use the PGI compilers. Currently, the PGI Fortran compiler is the only Fortran compiler with CUDA-Fortran support. To enable CUDA support, simply execute
$ ./configure \
--enable-cuda \
--with-metis=/path/to/libmetis.a \
--with-hdf5=/path/to/h5cc \
--prefix=/path/to/install
$ make
$ make install
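If configure does not pick up the PGI compiler by default, you can typically point it at pgfortran through the standard FC environment variable (assuming pgfortran is on your PATH), e.g.
$ FC=pgfortran ./configure \
--enable-cuda \
--with-metis=/path/to/libmetis.a \
--with-hdf5=/path/to/h5cc \
--prefix=/path/to/install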
To build SELF-Fluids with MPI support for data parallelism, you must have a working install of OpenMPI or MVAPICH and a parallel HDF5 build to support parallel I/O.
$ ./configure \
--enable-mpi \
--with-metis=/path/to/libmetis.a \
--with-hdf5=/path/to/h5pcc \
--prefix=/path/to/install
$ make
$ make install
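Once installed, the MPI-enabled sfluid is launched through your MPI launcher. For example, to run on 4 ranks (the exact launcher name and flags depend on your MPI installation):
$ mpirun -np 4 sfluid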
To build SELF-Fluids with multi-GPU support for data parallelism and GPU acceleration, you must have a working install of OpenMPI or MVAPICH and a parallel HDF5 build to support parallel I/O. Additionally, you must use the PGI compilers, since the PGI Fortran compiler is currently the only Fortran compiler with CUDA-Fortran support.
$ ./configure \
--enable-mpi \
--enable-cuda \
--with-metis=/path/to/libmetis.a \
--with-hdf5=/path/to/h5pcc \
--prefix=/path/to/install
$ make
$ make install
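For multi-GPU runs, a common pattern (assumed here, not prescribed by SELF-Fluids) is one MPI rank per GPU; for example, on a node with two GPUs:
$ mpirun -np 2 sfluid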
The following options are available for the various builds of SELF-Fluids:
--enable-tecplot This option turns on ASCII Tecplot output. The Tecplot output contains the mesh and model state variables at the visualization points. Prior to writing the Tecplot files, an interpolation routine is called to interpolate from the Legendre-Gauss quadrature points within each element to a set of points that are uniform in computational space.
--enable-diagnostics This option turns on globally integrated diagnostics, including Total Potential Energy, Total Kinetic Energy, Total Heat Content, Total Mass, and Total Volume. The diagnostics are written to the screen whenever file I/O is performed during execution and are also written to the file diagnostics.curve.
--enable-double This option turns on double precision floating point arithmetic. By default, SELF-Fluids is built with single precision floating point arithmetic.
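These options can be combined with any of the build configurations above. For example, a double-precision serial CPU build with Tecplot output and diagnostics enabled:
$ ./configure \
--enable-double \
--enable-tecplot \
--enable-diagnostics \
--with-metis=/path/to/libmetis.a \
--with-hdf5=/path/to/h5cc \
--prefix=/path/to/install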
The examples/ subdirectory contains a number of examples that SELF-Fluids regularly tests against. Each directory contains a runtime.params namelist file and a self.equations equations file. With sfluid in your PATH,
$ cd examples/thermalbubble
$ sfluid
If this is your first time running the sfluid program, it will create a structured mesh according to the size specified in runtime.params, set the initial conditions according to the specified fluid state in self.equations, and integrate the model to the end time specified in runtime.params.
Congratulations, you just ran your first fluids code!