
Software Guide

This Software Guide helps you locate the software you are looking for and get a feel for it by running sample jobs on the HPC system.
  • To get started, click the link that applies to you under the sections "Compilers and Libraries" and "Applications". 
  • Job scripts with PBS directives are used to submit jobs requesting HPC resources. 
  • The resources available to you are governed by the HPC access policies. 
  • If you need a specific software package or a more recent version, you can install it in your home directory; see the Software Installation Guide. If you are still having trouble installing it after following the instructions in the Software Installation Guide, please contact us at 
  • If the software available on the HPC is licensed (see the Software Guide), you need to explicitly request access to it via email. 
  • For us to maintain licensed software for you, you will need to contact the vendor and find out about the software and its feasibility on the HPC (license model and license control). We also ask that your PI obtain HPC membership (see the HPC access policy).
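The PBS job scripts mentioned above follow a common pattern. Below is a minimal sketch; the job name, queue name, resource counts, and module name are all placeholder assumptions, not values prescribed by this guide — adjust them to match your access policy and the software you need:

```shell
#!/bin/bash
#PBS -N sample_job          # job name (placeholder)
#PBS -l nodes=1:ppn=4       # 1 node, 4 processors per node (placeholder values)
#PBS -l walltime=01:00:00   # 1 hour wall-clock limit (placeholder)
#PBS -q batch               # queue name (placeholder; depends on your access policy)

cd "$PBS_O_WORKDIR"         # run from the directory the job was submitted from
module load gromacs         # load the software module (example name)
mpirun -np 4 grompp_mpi     # launch the application (illustrative command)
```

You would typically submit such a script with `qsub job.pbs` and check its status with `qstat -u $USER`. This is only a sketch of the general PBS pattern; it assumes a scheduler environment and module names specific to your cluster.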

Important Notes

  • (Important) Some software may only be available on the SLURM cluster (e.g. hpctest). For the SLURM cluster (hpctest), we currently do not have SLURM scripts for individual software packages; please visit the HPC Transitioning to SLURM guide. This is a work in progress.
  • To list the versions of the software installed on the HPC, use the command "module avail". To check dependency modules such as cmake, boost, etc., load the depends module first: "module load depends".
  • The paths of the executables and libraries of <software> can be obtained with the command "module display <software>".
  • The default binaries/executables (compilers or software) are available in /usr/bin and /usr/local/bin.
  • The default libraries are available in /usr/lib, /usr/lib64, /usr/local/lib, /usr/local/lib64.
  • Note that the name of an executable may have changed in the latest version of the software. You can check the path /usr/local/<software>/bin. For example, for GROMACS:

    ls /usr/local/gromacs/bin/
    grompp_mpi pdb2gmx_mpi ....
  • If your jobs use large data files, please avoid copying them to the compute nodes for each job you run, as this increases network traffic in the cluster. To learn more, please contact the admin about how to run jobs with large data. Also, if your job uses a large amount of resources (processors and/or memory), kindly refer to the HPC FAQ and contact us at 
  • A license is needed to run applications marked with an * (asterisk). Please verify by clicking the link for each specific application.
  • Applications displayed in italics are 'not fully supported software': the software has been installed on the cluster and appears to work in the small sample tests we conducted, but has not been confirmed to work across a wide variety of tests on the cluster.
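The module commands from the notes above can be combined into a quick inspection session. This is only a sketch — it assumes the cluster's module environment, and gromacs is used purely as an example software name:

```shell
module avail                 # list all installed software and versions
module load depends          # make dependency modules (cmake, boost, ...) visible
module avail cmake           # dependency modules now show up in the listing
module display gromacs       # show the executable and library paths for the module
module load gromacs          # add the software to your environment
ls /usr/local/gromacs/bin/   # check executable names, which may change between versions
```

These commands only work on the cluster itself, where the Environment Modules system is installed.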

Compilers and Libraries

Compiler Suites

GNU, Intel (includes MKL on SLURM cluster)*, PGI*, Java, CodeSynthesis XSD


Libraries

Intel MKL (only on hpclogin)*, GNU GSL, fftw, hdf5, LAPACK, SCALAPACK, BLAS, BOOST, CULA, CLHEP, PBS-DRMAA, MAGMA, Armadillo, FFmpeg, OpenCV, VORO++, APR/APR-util, ncurses, MPFR, GStreamer, libxc, libxml2 & libxslt

MPI Versions


Distributed Memory Frameworks

Global Arrays (work in progress...)



MATLAB*, Mathematica*, R, RStudio, Julia, NumPy/SciPy, Torch

Fluid Dynamics


Molecular Modeling

NAMD, GROMACS, LAMMPS, CHARMM/Q-CHEM*, Schrodinger*, Gaussian*, VASP*, ABINIT, Amber*, BigDFT, atompaw, Siesta, VASPTools

Visualization Software

Gnuplot, Paraview, VMD, VisIt, VTK, UCSF Chimera, ImageMagick, Circos, GraphicsMagick, Relion, EMAN2