To gain access to the HSPH cluster, you first need to register for a FASRC account.
Complete the sign-up form at https://portal.rc.fas.harvard.edu/request/account/new. Your PI must already have an account in order to appear as a sponsor; if they do not appear, ask them to sign up for an account first. Once you have signed up, your PI sponsor will receive an e-mail containing a link to approve your account. After the PI approves the request, your account will be created and you will receive notification emails with information on getting started. Please note that if you signed up as an external user, your request must be vetted manually and so may take slightly longer.
Your cluster usage will be tied to your lab’s fairshare and will be billed to your school on a quarterly basis.
Once your FASRC account is sponsored by an HSPH PI, you should have access to the HSPH cluster.
You can verify your access by running the groups command in a terminal:
[jharvard@boslogin01 ~]$ groups
starfish_users cluster_users hsph_users
If you see hsph_users in the output, you have access to the HSPH partitions.
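If you want to perform this check in a script rather than by eye, a small helper can scan the groups output. A minimal sketch (the function name check_hsph_access is hypothetical, not an FASRC tool):

```shell
# Hypothetical helper: report whether a space-separated groups listing
# (as printed by the `groups` command) includes hsph_users.
check_hsph_access() {
  if echo "$1" | tr ' ' '\n' | grep -qx "hsph_users"; then
    echo "access OK"
  else
    echo "no hsph access"
  fi
}

# On a login node you would call it as:
#   check_hsph_access "$(groups)"
```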
You can read the FASRC website for an introduction to the HSPH cluster.
There are two partitions in the HSPH cluster:
hsph: This partition contains 5,824 Intel Sapphire Rapids cores. Each node is water-cooled and has 112 cores and 1 TB of RAM. The nodes are interconnected with NDR InfiniBand. This partition has a 3-day time limit.
hsph_gpu: This GPU partition contains 192 AMD Genoa cores and 8 NVIDIA H100 GPUs. Each node is water-cooled and has 96 cores, 1.5 TB of RAM, and 4 GPUs. The nodes are interconnected with NDR InfiniBand. This partition has a 3-day time limit.
Please read the general FASRC tutorial on running jobs. Here is a sample script for submitting a job to the hsph partition through SLURM:
#!/bin/bash
#SBATCH --job-name=hsph_job # Job name
#SBATCH --output=%j.out # Output file name (%j expands to jobID)
#SBATCH --error=%j.err # Error file name (%j expands to jobID)
#SBATCH --partition=hsph # Partition name
#SBATCH --time=1:00:00 # Maximum runtime (HH:MM:SS)
#SBATCH --nodes=1 # Number of nodes
#SBATCH --ntasks-per-node=1 # Number of tasks per node
#SBATCH --cpus-per-task=1 # Number of CPU cores per task
#SBATCH --mem=4G # Memory required per node
# Load any required modules here
# module load ...
# Your commands go here
# For example:
# python my_script.py
If you are using hsph_gpu, change the partition to --partition=hsph_gpu and request the GPUs your job needs (typically with a directive such as #SBATCH --gres=gpu:1).
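If a workflow targets both partitions, the partition flag can be computed instead of hard-coded in the script. A minimal sketch (the helper name partition_flag is an assumption, not part of SLURM or FASRC):

```shell
# Hypothetical helper: choose the sbatch partition flag for a CPU or GPU job,
# based on the hsph / hsph_gpu partition names described above.
partition_flag() {
  case "$1" in
    gpu) echo "--partition=hsph_gpu" ;;  # GPU jobs also need a GPU request, e.g. --gres=gpu:1
    *)   echo "--partition=hsph" ;;
  esac
}

# Usage on a login node (not run here):
#   sbatch $(partition_flag gpu) my_job.sh
```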
You can also use the FASRC Open OnDemand dashboard to get a virtual desktop and run interactive jobs. Please first read the general tutorial for FASRC Open OnDemand. Here is an example of starting an interactive Jupyter notebook:
1. Make sure you are connected to the FASRC VPN.
2. Point your browser to https://rcood.rc.fas.harvard.edu
3. Enter your FASRC credentials into the authentication form.
4. Upon successful authentication, you will land on the main dashboard.
5. Click the Jupyter Notebook app (or whichever app you want to use).
6. In the form, select:
   - Partition: hsph
   - Memory: 4 GB (or as needed)
   - Time: as needed (within the partition limit, max 3 days)
   - Number of cores: as needed
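Since both partitions enforce a 3-day limit, it can help to sanity-check a requested walltime before submitting or filling in the form. A minimal bash sketch (the helper name within_limit and the HH:MM:SS-only input format are assumptions):

```shell
# Hypothetical helper: check whether an HH:MM:SS walltime fits within
# the 3-day (72-hour) partition limit.
within_limit() {
  IFS=: read -r h m s <<< "$1"
  total=$(( 10#$h * 3600 + 10#$m * 60 + 10#$s ))  # 10# avoids octal parsing of "08" etc.
  if [ "$total" -le $(( 72 * 3600 )) ]; then
    echo "ok"
  else
    echo "too long for this partition"
  fi
}

# Example: within_limit 1:00:00   -> ok
# Example: within_limit 96:00:00  -> too long for this partition
```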