Storage
Storage is a critical component of the High Performance Computing Cluster at Case Western Reserve University: without it, data cannot be stored, processed, or distributed. The storage must also support thousands of jobs running on the cluster concurrently.
There are two main storage systems directly mounted on the HPC:
HPC Storage
High-performance storage mounted on the HPC. The current capacity is 750TB, with the general 7-day snapshot policy. HPC Storage contains several important filesystems that form the backbone of the cluster:
/home: location of the default user workspace.
/home has limited capacity based on the group quota, which depends on the group's member/guest status.
Member groups have a 700GB soft / 920GB hard quota, while guest groups have a 150GB soft / 260GB hard quota.
A group that has exceeded the soft quota can still write to the workspace, but will receive a daily warning about the excess.
A group that has hit the hard quota will no longer be able to write to the workspace until its usage drops below 95% of the hard quota (874GB for members).
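As a quick sanity check on the threshold above, 95% of the 920GB member hard quota can be computed directly in the shell:

```shell
# 95% of the 920GB member hard quota: the usage level below which a
# group can write again after hitting the hard limit.
echo $(( 920 * 95 / 100 ))    # prints 874 (GB)
```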
The best command to check the group quota (and the usage totals per user) is "panfs_quota -G".
/usr/local: location of all installed software/applications.
/scratch/pbsjobs: location of the job workspace.
A typical job has a defined $PFSDIR environment variable, which resides at /scratch/pbsjobs/job.<JOBID>.hpc
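As an illustration, a job script typically stages its input into $PFSDIR, works there, and copies results back before the job ends. This is only a sketch: the filenames and the "analysis" step are stand-ins, and the mktemp fallback exists solely so the sketch runs outside a scheduled job (inside a job, the scheduler sets $PFSDIR for you).

```shell
#!/bin/sh
# Stage work into the fast per-job scratch space.
# $PFSDIR is set by the scheduler inside a job; fall back to a temp
# directory here only so this sketch can run outside the cluster.
workdir="${PFSDIR:-$(mktemp -d)}"

echo "sample input" > input.dat          # stand-in for real input data
cp input.dat "$workdir/"
cd "$workdir"

# Stand-in for the real analysis step:
tr '[:lower:]' '[:upper:]' < input.dat > results.out

cp results.out "$OLDPWD/"                # copy results back out of scratch
echo "results copied from $workdir"
```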
All groups have a starting 1TB group quota limit on this workspace.
Members who need more than 1TB group quota may request additional quota by contacting us.
/scratch/users: location of the project workspace.
Member groups have a 1TB group quota limit, and may request additional quota by contacting us.
Guest groups have a nominal 10GB group quota limit, and cannot request any additional allocation.
/mnt/pan: location of the leased project workspace.
Any research group may pay for a leased storage volume on the HPC Storage at the prevailing rate (currently $200/TB/year).
This volume benefits research groups that need a large processing/analysis space for their workflows.
Research Storage
General-purpose storage that is mounted on the HPC and is also accessible from other campus locations. Research Storage capacity is currently 1.1PB; it is replicated to a second site and has the general 7-day snapshot policy.
Additional Storage Options
Research Dedicated Storage
For research groups that require more than 100TB of storage, an inexpensive option is to acquire two storage servers that replicate each other. Such a pair costs around $45k and provides ~400TB of storage.
The RDS servers are mounted on the login and data transfer servers, but not on the compute nodes. Copying data for use in workflows is described at the following link: https://sites.google.com/a/case.edu/hpcc/data-transfer#h.nltp3jo2szbu
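Because RDS volumes are not visible on the compute nodes, data must first be staged through a login or data transfer server into a workspace the nodes can see. The sketch below is illustrative only: the hostname and paths are placeholders, not actual CWRU endpoints, and the runnable portion demonstrates the same recursive, attribute-preserving copy on local directories.

```shell
# Staging data from an RDS volume into cluster storage visible to the
# compute nodes. Hostname and paths below are placeholders:
#
#   rsync -a dtn.example.edu:/mnt/rds/mygroup/dataset/ /scratch/users/me/dataset/
#
# Local demonstration of the same kind of recursive copy:
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/dataset"
echo "measurement" > "$src/dataset/run1.txt"
cp -a "$src/dataset" "$dst/"     # recursive, attribute-preserving copy
ls "$dst/dataset"                # prints run1.txt
```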
Research Archival Storage
Data that is no longer actively analyzed can be placed in cold/archival storage for infrequent access. For such cases, the Research Computing group provides the Research Archival Storage service.
We use the Ohio Supercomputing Project Storage/Tape system for the archival service, and the Globus data transfer tool to move data into the archive. The archive service enforces encrypted transfers, making them more secure.
Data from both HPC Storage and Research Storage can be archived using Globus, allowing users to manage most of the archival process themselves.
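For illustration, an archival transfer with the Globus CLI typically looks like the following. The endpoint UUIDs and paths are placeholders, not the actual CWRU or OSC endpoints, and the transfer commands are commented out because they require a one-time interactive login:

```shell
# Archiving a project directory via the Globus CLI (pip install globus-cli).
# Endpoint UUIDs are placeholders -- look up the real ones in the Globus
# web app or the HPC Guide to Archival Storage.
HPC_EP="11111111-1111-1111-1111-111111111111"      # HPC/Research Storage endpoint
ARCHIVE_EP="22222222-2222-2222-2222-222222222222"  # OSC archive endpoint
SRC_PATH="/scratch/users/someuser/project"         # hypothetical source path
DST_PATH="/archive/somegroup/project"              # hypothetical destination

# globus login                       # one-time, browser-based authentication
# globus transfer --recursive --label "project archive" \
#     "$HPC_EP:$SRC_PATH" "$ARCHIVE_EP:$DST_PATH"
echo "transfer: $HPC_EP:$SRC_PATH -> $ARCHIVE_EP:$DST_PATH"
```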
Visit the HPC Guide to Archival Storage for detailed information.