Linux HPC Systems
Linux High Performance Computing Clusters
A High Performance Computing (HPC) cluster is a collection of networked computers that can be used as a single parallel computing system. HPC is intended for programs that require parallel processing, typically with the Message Passing Interface (MPI). HPC clusters can provide a massive performance increase at a much lower hardware cost than a traditional symmetric multiprocessing system.
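As a rough illustration of the MPI model, each process in an MPI job works on part of a problem and exchanges messages with the others over the cluster network. The minimal C sketch below (an illustrative example, assuming an MPI library and its mpicc compiler wrapper are available through the cluster's module system) simply reports each process's rank:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID within the job */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes in the job */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down the MPI runtime */
        return 0;
    }

On a SLURM-managed cluster, a program like this is typically compiled with mpicc and launched across nodes with srun or mpirun from a batch job.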
Parallel computing is heavily used in research domains such as weather forecasting, phylogenetic tree building, computational fluid dynamics modelling, and molecular interaction simulation.
The HPC cluster can also be used for traditional, non-MPI applications that run as single-threaded or multi-threaded processes, such as MATLAB, R, and other standard scientific applications. A cluster can run a single isolated process on a compute node or many copies of the same process (for instance, for statistical validation).
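For example, a non-MPI, multi-threaded program can use several cores of a single compute node with OpenMP. The following C sketch (an illustrative example, not a PSU-provided code) prints one line per thread:

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        /* Each OpenMP thread runs on one core of a single compute node. */
        #pragma omp parallel
        {
            printf("Thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }

The thread count is usually set with the OMP_NUM_THREADS environment variable to match the number of cores requested from the scheduler.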
OIT RC is working with PSU researchers to increase the number and variety of nodes available for tackling data- and computationally-intensive problems. The available hardware includes Intel Xeon and AMD x86_64 processors and Nvidia GPUs.
There is currently one HPC cluster available to PSU faculty, staff, students, and their collaborators who are interested in running existing applications or developing new parallel code.
Coeus HPC Cluster
Coeus /ˈsiːəs/ (Ancient Greek: Κοῖος, Koios, "query, questioning") was the ancient Greek god of intellect and a Titan, representing the inquisitive mind and personifying our research and educational goals. Coeus is a general purpose High Performance Computing (HPC) cluster with significant resources to address a broad range of computational tasks. It is the cornerstone of the PSU computational infrastructure, supporting PSU computational science efforts and the work of the Portland Institute for Computational Science (PICS).
The Coeus cluster was made possible through support from the National Science Foundation (NSF), the Army Research Office (ARO), and Portland State University. The cluster was supplied and built by Advanced Cluster Technologies.
It has an estimated peak performance of over 110 TFLOPS, Intel Omni-Path high-performance networking, and approximately 1.7 PiB of computational scratch storage.
Using Coeus HPC Cluster
Visit the Coeus Getting Started page.
Login Nodes:
login1.coeus.rc.pdx.edu
login2.coeus.rc.pdx.edu
Understanding the Operating Environment
More information about the Coeus HPC
To view the current load, go to http://coeus.rc.pdx.edu
Coeus Specifications:
Two login nodes and two management nodes (head nodes) for hosting cluster management software and the system scheduler (SLURM)
Dual Intel Xeon E5-2630 v4, 10 cores each @ 2.2 GHz
64 GB 2133 MHz RAM
Data Transfer Node to support high-bandwidth data transfers
Dual Intel Xeon E5-2650 v4, 12 cores each @ 2.2 GHz
256 GB 2133 MHz RAM
30 TB local disk storage in a RAID 6 array
Panasas storage appliance hosting scratch storage, mounted on each node over 100 Gb Ethernet
Intel Omni-Path high-performance (up to 100 Gbps) network fabric
128 compute nodes each with 20 cores and 128 GB RAM
Dual Intel Xeon E5-2630 v4, 10 cores each @ 2.2 GHz
Intel Omni-Path
12 Intel Xeon Phi nodes, each with 64 cores and 96 GB RAM
Intel Xeon Phi 7210, 64 cores @ 1.3 GHz
Intel Omni-Path
2 large-memory compute nodes each with 24 cores, 756 GB RAM, and 1 Nvidia V100 GPU
Dual Intel Xeon E5-2650 v4, 12 cores each @ 2.2 GHz
1 x Nvidia V100 GPU
Intel Omni-Path
6 TB local storage
10 GPU nodes each with 32 cores, 128 GB RAM (minimum), and 4 Nvidia GPUs
AMD EPYC 7502P, 32 cores @ 2.5 GHz
4 x Nvidia A40 or RTX A5000 GPUs
100 Gb Ethernet
2 TB NVMe local storage