

Fermi is an x86_64 GNU/Linux cluster running RHEL 6. It contains 32 nodes, each providing 16 AMD Opteron(TM) 6212 cores and 64 GB of RAM.


uname -a
Linux fermi-login1 2.6.32-279.2.1.el6.x86_64 #1 SMP Thu Jul 5 21:08:58 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

Excerpt from /proc/cpuinfo (the last of the 16 logical cores):

processor       : 15
vendor_id       : AuthenticAMD
cpu family      : 21
model           : 1
model name      : AMD Opteron(TM) Processor 6212                
stepping        : 2
cpu MHz         : 2600.147
cache size      : 2048 KB
physical id     : 1
siblings        : 8
core id         : 3
cpu cores       : 4
apicid          : 71
initial apicid  : 39
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nonstop_tsc extd_apicid amd_dcm aperfmperf pni pclmulqdq monitor ssse3 cx16 sse4_1 sse4_2 popcnt aes xsave avx lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 nodeid_msr topoext perfctr_core cpb npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
bogomips        : 5199.26
TLB size        : 1536 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 48 bits physical, 48 bits virtual
power management: ts ttp tm 100mhzsteps hwpstate cpb

cat /proc/meminfo | head -n1
MemTotal:       66087276 kB

Batch queue policy

Currently there is a single 'batch' queue, scheduled FIFO with backfill. To submit a job, simply issue:
qsub my_batch_script.pbs
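A typical submit-and-monitor session with the standard Torque/PBS tools might look like the following (the job ID shown is illustrative):

```shell
# Submit the job; qsub prints the job ID on success
qsub my_batch_script.pbs
# e.g. 12345.fermi-login1

# List the state of your own jobs in the queue
qstat -u $USER

# Show detailed information for one job
qstat -f 12345.fermi-login1

# Remove a job from the queue if needed
qdel 12345.fermi-login1
```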

Software available for CAMM users on Fermi

What follows is the list of software compiled and installed under /sw:


EPW is the short name for "Electron-phonon Wannier". EPW is an open-source F90/MPI code which calculates properties related to the electron-phonon interaction using Density-Functional Perturbation Theory and Maximally Localized Wannier Functions.
  • module load epw
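A minimal PBS script for an EPW run, following the same pattern as the NAMD and LAMMPS examples below. The input/output file names and the exact epw.x command line are illustrative assumptions — consult the EPW documentation for the correct invocation for your calculation:

```shell
#PBS -N epw_test
#PBS -j oe
#PBS -l walltime=1:00:00
#PBS -l nodes=2:ppn=16

# Load module for epw
source $MODULESHOME/init/bash
module load epw

# Run from the directory the job was submitted from
cd $PBS_O_WORKDIR

# epw.x reads a namelist input; "epw.in" is a placeholder name
mpirun -np 32 epw.x -in epw.in > epw.out
```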


NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.
#PBS -N apoa1
#PBS -j oe
#PBS -l walltime=1:00:00
#PBS -l nodes=4:ppn=16

# We have requested 4 nodes and 16 cores per node (64 cores in total)

# Load module for namd
source $MODULESHOME/init/bash
module load namd

cd /sw/fermi/namd/NAMD_2.9_Source/Linux-x86_64-g++/test/apoa1
mpirun -np 64 namd2 apoa1.namd
  • Benchmark of NAMD 2.9 on Fermi
System: cubic box containing 92224 atoms, pre-equilibrated
Simulation: 2 ps run in the NVT ensemble (T = 300 K)
- 1 node only, up to 16 cores
  num.cores   time(s)
      1       3157
      2       1715
      4        953
      8        550
     16        425

- 1 core per node only, up to 16 nodes
  num.nodes   time(s)
     1        3157
     2        1711
     4         950
     8         548
    16         401
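From the single-node table above, parallel speedup and efficiency can be computed as T(1)/T(n) and T(1)/(n·T(n)). A quick sketch with awk, using the measured times:

```shell
# Single-node NAMD benchmark times (cores, seconds) from the table above
printf '1 3157\n2 1715\n4 953\n8 550\n16 425\n' |
awk '{
    if (NR == 1) t1 = $2            # reference time on 1 core
    # speedup = T(1)/T(n); efficiency = speedup / n
    printf "%2d cores: speedup %5.2f, efficiency %3.0f%%\n", $1, t1 / $2, 100 * t1 / ($1 * $2)
}'
```

On the 16-core row this gives a speedup of about 7.4, i.e. roughly 46% parallel efficiency.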


LAMMPS is a classical molecular dynamics simulation code designed to run efficiently on parallel computers. It was developed at Sandia National Laboratories, a US Department of Energy facility, with funding from the DOE. It is an open-source code, distributed freely under the terms of the GNU General Public License (GPL).
  • module load lammps # loads the environment for the latest LAMMPS installation
  • module avail lammps # lists available versions

Example PBS script:

#PBS -N jobname
#PBS -j oe
#PBS -l walltime=1:00:00
#PBS -l nodes=4:ppn=16

# We have requested 4 nodes and 16 cores per node (64 cores in total)

# Load module for lammps
source $MODULESHOME/init/bash
module load lammps

mpirun -np 64 lmp_fermi -in in.myinputfile  # Do not use "<" in place of "-in"
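For quick tests before submitting a full batch job, an interactive session can be requested with standard Torque/PBS syntax (the resource values below are illustrative):

```shell
# Request one node interactively for 30 minutes
qsub -I -l walltime=0:30:00 -l nodes=1:ppn=16

# Once the shell opens on a compute node:
source $MODULESHOME/init/bash
module load lammps
mpirun -np 16 lmp_fermi -in in.myinputfile
```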