Storing and using files on CWRU HPC and gallina

Red Hens who are affiliates of CWRU HPC are provided a CWRU ID of the form abc123. These affiliate accounts are automatically deactivated by CWRU IT administration after one year. Most affiliates continue to work with Red Hen beyond a year, so please alert Mark Turner to keep your account active or to reactivate it as needed.

These Red Hen affiliates are also usually provided with a user account on CWRU HPC. The username is identical to the CWRU ID, e.g. abc123, and the user home for abc123 is /home/abc123.

==============================================

Accessing files on gallina from the compute nodes:

https://sites.google.com/a/case.edu/hpcc/data-transfer/data-transfer-nodes

Also study the email below from CWRU HPC support, but note that dtn1 has since been retired.
On Dec 14, 2021, at 2:59 PM, <hpc-support@case.edu> wrote:


Hi,


We will no longer mount Research Dedicated Storage (RDS) on the compute nodes to alleviate the issues maintaining their availability as well as their slower performance. The RDS will still be mounted on the login nodes and dtn nodes. Active jobs can access the RDS content by copying the pertinent folder to the /scratch/users or /scratch/pbsjobs space, and then copying the result back after the jobs are complete.


Suggested login/dtn nodes to use when copying from/to RDS: dtn[1-3], hpctransfer


Example in the job script:

# Create temporary scratch space

mkdir -p /scratch/users/<CaseID>

# Copy data from RDS to /scratch

ssh dtn2 "cp -r /mnt/rds/<rds name>/<folder1>  /scratch/users/<CaseID>"

# Copy data from /scratch back to RDS

ssh dtn2 "cp -r /scratch/users/<CaseID>/<folder1> /mnt/rds/<rds name>/."
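Putting the steps above together, here is a minimal sketch of a complete job script; the job name, time limit, folder names, and the gallina-home RDS path are illustrative assumptions, not Red Hen specifics:

```shell
#!/bin/bash
#SBATCH --job-name=rds-staging-example   # hypothetical job name
#SBATCH --time=01:00:00                  # hypothetical time limit

# All paths below are placeholders; substitute your own CaseID and folders.
CASEID="abc123"                                   # your CWRU ID
SCRATCH_DIR="/scratch/users/${CASEID}"
RDS_DIR="/mnt/rds/redhen/gallina/home/${CASEID}"  # assumed gallina home

# Create temporary scratch space (-p: no error if it already exists)
mkdir -p "${SCRATCH_DIR}"

# Stage input data from RDS to scratch via a dtn node
ssh dtn2 "cp -r ${RDS_DIR}/input ${SCRATCH_DIR}/"

# ... run your computation against ${SCRATCH_DIR}/input here ...

# Copy results from scratch back to RDS
ssh dtn2 "cp -r ${SCRATCH_DIR}/output ${RDS_DIR}/"
```

The point of the ssh wrapper is that the compute node itself cannot see /mnt/rds, so the copy commands must run on a node (dtn2 here) where RDS is still mounted.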

==========================================================

Quotas on storage in user homes on CWRU HPC are very low. However, Red Hen has its own storage server within CWRU HPC, named "gallina." Please store your working files in your "gallina home," a directory under /mnt/rds/redhen/gallina/home whose name is identical to your CWRU ID, e.g. /mnt/rds/redhen/gallina/home/abc123.

Gallina is under RAID 2 redundancy, but this is light protection, so data loss is possible. Please keep your own copies of any important files on your own devices, and store your code on your GitHub, which will usually be forked to Red Hen Lab's GitHub.

Inside your gallina home, place a subdirectory named

safe

and place in it any important files for which you need an extra layer of backup or which should be kept by Red Hen, such as final versions of Singularity containers you have built for Red Hen operations, hand-curated data files, highly configured software, or anything else that helps others benefit from or build upon your work.

Your safe subdirectory is scheduled to be automatically backed up to UCLA nightly.

Place a README file in your gallina home explaining the contents, by filename or, if suitable, by subdirectory name.  Also place such a README file in your safe subdirectory. In each README, please describe any files that Red Hen should keep for its archive or operations.  
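The layout described above can be created with a few shell commands; the path follows the pattern given earlier, with abc123 standing in for your own CWRU ID, and the README contents here are only placeholder text:

```shell
# Create the safe subdirectory and README files in your gallina home.
# Replace abc123 with your own CWRU ID.
GALLINA_HOME="/mnt/rds/redhen/gallina/home/abc123"

mkdir -p "${GALLINA_HOME}/safe"

# Top-level README: describe contents by filename or subdirectory name
echo "README: describes the contents of this gallina home" > "${GALLINA_HOME}/README"

# README inside safe: note which files Red Hen should keep
echo "README: files needing an extra layer of backup" > "${GALLINA_HOME}/safe/README"
```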

When an affiliate account is deactivated, Red Hen may clear outdated files from your gallina home.

Never store in CWRU HPC or gallina anything unrelated to your Red Hen work.
