Distributed and Parallel Computing

BlueRidge (Sandy Bridge)

Virginia Tech ARC launches a new large-scale machine, BlueRidge. BlueRidge comprises 318 Intel Sandy Bridge nodes. With a total of 5,088 cores and 20 TB of memory, BlueRidge is ARC's largest research computing system to date. (See the ARC website for a comparison of current ARC resources.)

To promote responsible use of the system, core-hours on BlueRidge will be granted through an allocation system (similar to the one used on System X). The allocation system has been tested on Linux, Mac OS X, and Windows. Firefox is the preferred browser; Safari, Opera, and Chrome have also been tested. Once you have submitted an allocation request, it will be reviewed by a committee. Be sure to upload pertinent grants and publications, particularly if you are requesting a larger allocation. Allocations are intended to be renewed annually.

HokieOne (SGI UV)

HokieOne is a shared-memory SGI UV system that became available to Virginia Tech researchers in April 2012, replacing the Inferno/Cauldron SGI Altix systems. It has 492 2.66 GHz Intel Xeon cores (on 82 sockets across 41 blades) with 2.62 TB of memory (5.3 GB/core). Because it is a shared-memory machine, it acts from the user's perspective as if it were a single large node (hence the name "HokieOne"). This means that users running pure shared-memory applications (e.g. OpenMP rather than MPI or hybrid code) are not restricted to a single node (i.e. a small portion of the total machine). It also allows users to request whatever amount of memory they need when running a job, making HokieOne ideally suited for applications that require large amounts of shared memory.
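As a minimal sketch (illustrative code, not an ARC example; the array size and build command are assumptions), the following pure shared-memory OpenMP program shows the kind of application HokieOne favors: all threads share one address space, so on this machine they can span far more cores and memory than a single node of a conventional cluster would allow.

    /* Hypothetical pure shared-memory (OpenMP) computation.
     * On HokieOne the threads can span the whole machine, since it
     * presents itself as one large node.
     * Example build: gcc -fopenmp -O2 mean.c -o mean */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const size_t n = 1000000000;      /* large shared array (~8 GB) */
        double *x = malloc(n * sizeof *x);
        if (!x) { perror("malloc"); return 1; }

        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (size_t i = 0; i < n; i++) {
            x[i] = (double)i;             /* stand-in for real data */
            sum += x[i];
        }

        printf("mean = %f (max threads: %d)\n", sum / n, omp_get_max_threads());
        free(x);
        return 0;
    }

On a distributed-memory cluster this program would be confined to one node's cores and RAM; on HokieOne the same unchanged code can use any share of the 492 cores and 2.62 TB, with the thread count set via OMP_NUM_THREADS.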

Athena (CPU-GPU)

Athena is a cluster system with GPUs and a large per-node memory footprint, running CentOS Linux 5. There are 42 quad-socket nodes with eight-core 2.3 GHz AMD Magny-Cours processors (1,344 cores in total) and 64 GB of RAM each (12.4 TFLOPS peak). Sixteen of the nodes also have access to 8 NVIDIA S2050 Fermi units (four GPUs each) with 6 GB of memory. These GPUs support OpenGL as well as single- and double-precision operations (96 TFLOPS single-precision peak), and can be programmed with PGI C++ and Fortran compilers. The nodes are connected via quad-data-rate (QDR) InfiniBand (40 Gb/sec), and 40 TB of file storage is attached to the system. James River Technical/Appro supplied the system. This machine is intended for computation and visualization of large data sets. It is unique in its large memory-per-node footprint, which is crucial for managing large time-series data and global/serial statistical operations. This system directly enables our computational scientists and engineers to tackle bigger problems.
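As a hedged sketch (the data layout and values are placeholders, not ARC code), a distributed-memory job on a cluster like Athena typically partitions data across nodes and combines partial results over the interconnect with MPI; the global statistical operations mentioned above reduce to collective calls such as MPI_Allreduce:

    /* Hypothetical global mean over time-series slices distributed
     * across cluster nodes; MPI_Allreduce combines the partial sums
     * over the (QDR InfiniBand) fabric.
     * Example build: mpicc -O2 global_mean.c -o global_mean */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        enum { N_LOCAL = 1000 };           /* local slice size (placeholder) */
        double local_sum = 0.0;
        for (int i = 0; i < N_LOCAL; i++)
            local_sum += rank + i * 0.001; /* stand-in for real samples */

        double global_sum = 0.0;
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global mean = %f\n",
                   global_sum / ((double)N_LOCAL * nprocs));

        MPI_Finalize();
        return 0;
    }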

The GPUs in this cluster are primarily used to accelerate rendering tasks (drawing high-resolution plots, animations, 3D graphics, and video). For more information, see the Visualization section.

Ithaca (IBM iDataPlex)

Ithaca is an IBM iDataPlex system running SUSE Linux 11 that provides Virginia Tech researchers with access to high performance computing in a standard x86 Linux environment. This system supports commercial packages such as MATLAB and Fluent, as well as other third-party software. The system is partitioned to provide a Parallel MATLAB compute cluster, a general research compute cluster, and resources for other institutional projects.

Ithaca provides 672 cores with a total peak performance of 6 TeraFLOPS. The system has 84 nodes, 66 of which are available for general use. Each node has two quad-core 2.26 GHz Intel Nehalem processors. Ten nodes have 48 GB of memory and the remainder have 24 GB. Nodes are connected via quad-data-rate (QDR) InfiniBand (40 Gb/sec).
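As a final hedged sketch (the rank/thread counts and the FUNNELED threading level are assumptions, not ARC guidance), a hybrid MPI+OpenMP decomposition maps naturally onto Ithaca's node layout: for example, one MPI rank per socket with four OpenMP threads per rank, configured through the scheduler and OMP_NUM_THREADS=4.

    /* Hypothetical hybrid MPI+OpenMP layout for dual-socket,
     * quad-core nodes: e.g. 2 ranks per node, 4 threads per rank.
     * Example build: mpicc -fopenmp -O2 hybrid.c -o hybrid */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* FUNNELED: only the main thread of each rank calls MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            #pragma omp critical
            printf("rank %d, thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }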