Linux HPC Systems

Linux High Performance Computing Clusters

A High Performance Computing (HPC) cluster is a collection of computers networked together so that they can be used as a single parallel computing system. HPC is intended for programs that require parallel processing, typically using the Message Passing Interface (MPI). HPC clusters can provide a massive performance increase at a much lower hardware cost than a traditional symmetric multiprocessing (SMP) system.
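
As a concrete illustration of MPI-style parallelism, the following is a minimal sketch in C (not specific to any PSU system) in which each process reports its rank. On a cluster it would typically be compiled with an MPI compiler wrapper such as mpicc and launched across nodes with mpirun or the scheduler's job launcher.

/* Minimal MPI example: every process prints its rank and the total
 * number of processes in the job. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's ID within the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down the MPI runtime */
    return 0;
}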

Parallel computing is heavily used in research domains such as weather forecasting, phylogenetic tree building, computational fluid dynamics modelling, and molecular interaction simulation.

An HPC cluster can also be used for traditional, non-parallelized workloads, including single-threaded and multi-threaded applications such as MATLAB, R, and other standard scientific software. Clusters can run a single isolated process on a compute node, or many copies of the same process (for instance, for statistical validation), as sketched below.
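
To make the "many copies" pattern concrete, here is a hypothetical sketch in C of an independent simulation task: each copy seeds its random number generator from a scheduler-provided task ID (SLURM exports SLURM_ARRAY_TASK_ID for array jobs) and produces one estimate that can be combined with the others afterwards. The Monte Carlo calculation itself is only illustrative.

/* Hypothetical sketch of an embarrassingly parallel task: many copies of
 * this program run independently, each with a different seed, and their
 * results are aggregated later for statistical validation. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* SLURM sets SLURM_ARRAY_TASK_ID for array jobs; fall back to 0
     * when running outside the scheduler. */
    const char *task = getenv("SLURM_ARRAY_TASK_ID");
    unsigned seed = task ? (unsigned)atoi(task) : 0u;
    srand(seed);

    /* Simple Monte Carlo estimate of pi: each copy yields one
     * independent estimate. */
    const long trials = 1000000;
    long hits = 0;
    for (long i = 0; i < trials; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0)
            hits++;
    }

    printf("task %u: pi estimate %f\n", seed, 4.0 * hits / trials);
    return 0;
}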

OIT Research Computing is working with PSU researchers to increase the number and variety of nodes available for tackling data- and computationally intensive problems. These include nodes with Intel Xeon and AMD x86_64 processors and Nvidia GPUs.

There is currently one HPC cluster available to PSU faculty, staff, students, and their collaborators who are interested in running existing applications or developing new parallel code. 

Coeus HPC Cluster

Coeus /ˈsiːəs/ (Ancient Greek: Κοῖος, Koios, "query, questioning") was the ancient Greek Titan of intellect, representing the inquisitive mind and personifying our research and educational goals. Coeus is a general-purpose High Performance Computing (HPC) cluster with significant resources to address a broad range of computational tasks. It is the cornerstone of the computational infrastructure supporting PSU computational science efforts and the work of the Portland Institute for Computational Science (PICS).

The Coeus cluster was made possible through support from the National Science Foundation (NSF), the Army Research Office (ARO), and Portland State University. The cluster was provided and built by Advanced Cluster Technologies.

It has an estimated peak performance of over 110 TFLOPS, Intel Omni-Path high-performance networking, and approximately 1.7 PiB of computational scratch storage.

Using the Coeus HPC Cluster

More information about the Coeus HPC cluster

Coeus Specifications:

Two login nodes and two management nodes (head nodes) for hosting cluster management software and the system scheduler (SLURM)

Data Transfer Node to support high-bandwidth data transfers 

Panasas storage appliance hosting scratch storage, mounted on each node over 100 Gb Ethernet

Intel Omni-Path high-performance (up to 100 Gbps) network fabric 

128 compute nodes each with 20 cores and 128 GB RAM 

12 Intel Xeon Phi processor nodes each with 64 cores and 96 GB RAM

2 large-memory compute nodes each with 24 cores, 756 GB RAM, and 1 Nvidia V100 GPU

10 GPU nodes each with 32 cores, 128 GB RAM (minimum), and 4 Nvidia GPUs