
HPC Cluster Hardware

The ITS high performance computing cluster currently consists of 187 compute nodes with Intel Xeon EM64T processors. There are several different node types:

7 Dell PowerEdge 1950 nodes with two 2.3 GHz quad-core "Clovertown" CPUs, 12 Gbytes of main memory, and a 146 Gbyte SAS hard drive

64 Dell PowerEdge 1950 nodes with two 3.0 GHz quad-core "Harpertown" CPUs, 16 Gbytes of main memory, and a 146 Gbyte SAS hard drive

42 Dell PowerEdge 1950 nodes with two 3.0 GHz dual-core "Woodcrest" CPUs, 8 Gbytes of main memory, and a 146 Gbyte SAS hard drive

1 Dell PowerEdge 6850 node with four 3.0 GHz dual-core "Paxville" CPUs, 32 Gbytes of main memory, and mirrored 146 Gbyte SCSI hard drives

1 Dell PowerEdge R910 node with four 2.0 GHz octa-core "Nehalem" CPUs, 256 Gbytes of main memory, and an array of four 300 Gbyte SCSI hard drives

72 Dell PowerEdge R410 nodes with two 2.66 GHz six-core "Westmere" CPUs, 24 Gbytes of main memory, and a 300 Gbyte SAS hard drive


In the past, the cluster has contained some older compute nodes:

    18 Dell PowerEdge 1850 nodes with two 3.2 GHz "Nocona" CPUs, 4 Gbytes of main memory, and a 73 Gbyte SCSI hard drive (removed July 2010)

    24 Dell PowerEdge 1850 nodes with two 3.8 GHz "Irwindale" CPUs, 4 Gbytes of main memory, and a 73 Gbyte SCSI hard drive (removed July 2010)

All nodes are interconnected with Gigabit Ethernet for MPI message passing. A login node (hpcc-login.case.edu) handles user logins, compiling, testing, and batch job submission. A management node serves cluster management purposes and holds documentation, and a master node likewise serves cluster management purposes. Three storage nodes provide centralized disk storage for home directories and other directories shared among all nodes in the cluster. All nodes are also connected to a separate Ethernet network for out-of-band cluster management.
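
Since the Gigabit Ethernet interconnect carries MPI traffic and the login node is used for compiling and submitting jobs, a typical workflow is to build an MPI program on hpcc-login.case.edu and run it across the compute nodes as a batch job. The following is a minimal sketch of such a program; the mpicc compiler wrapper is an assumption, since the page does not name the installed MPI implementation.

    /* hello_mpi.c - minimal MPI sketch: each rank reports which node it runs on.
     * Assumed build command on the login node: mpicc hello_mpi.c -o hello_mpi
     * (the exact wrapper name depends on the installed MPI stack). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, name_len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                  /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes  */
        MPI_Get_processor_name(name, &name_len); /* hostname of this node      */

        printf("Hello from rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();                          /* shut down the MPI runtime  */
        return 0;
    }

A batch job script would typically launch the resulting executable with mpirun or mpiexec across the allocated nodes; the exact script format depends on the cluster's batch scheduler, which this page does not specify.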

During the summer of 2010, additional changes are being made to the hardware configuration. In particular, 64 additional six-core nodes are being added, the PowerEdge 1850 nodes are being retired, approximately 30 Terabytes of global scratch storage are being added, and the interconnect is being upgraded to 10-Gigabit Ethernet.
