
User Support


Looking for a Quick Start Guide? See the HPC Quick Start Guide.

This guide explains how to use the HPC system efficiently and should enable you to run your jobs. If you run into a problem, first try to resolve it yourself using the HPC FAQ. If you still have questions, please email us at: 

Important Notes

  • Please DO NOT use the login node (e.g. hpc1 or hpc2) for running your jobs. Always use the "sbatch" command to submit jobs. For interactive work -- running graphics (e.g. MATLAB), scripts, and other STDIO -- use the command "srun --x11 --pty bash", which assigns you a compute node. Jobs running on the login node will be killed. If you have already started a job there, cancel it with the command "kill <PID>"; you can find the PID by running "top" on the login node. To kill all of your processes at once, use:
    kill -9 `ps -ef | grep <caseID> | grep -v grep | awk '{print $2}'`
  • Each group has storage quota limits. To find more information about storage limits and disk usage please refer to Computational Resources. Also, your access to HPC resources is determined by our Access Policies.
  • For high memory jobs, see the section "High Memory Job" at HPC Interactive and Batch Submission.
  • Please read through the HPC FAQ carefully if you encounter any issues. The FAQs cover important topics such as accessing the cluster, X-forwarding for graphics, submitting and monitoring jobs, usage policies, and technical help.
  • If you are trying to access the cluster from outside the CWRU campus network, you need to connect to the VPN first. Please use CWRU's VPN site https://vpnsetup.case.edu.
  • Use $PFSDIR instead of $TMPDIR as scratch space if your job reads or writes large files or needs substantial temporary storage.
  • If your job creates a large number of output files, or you have a case where the number of files in a directory is huge, please follow the Panasas Storage Guideline for Huge Directory.
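The notes above on using "sbatch" and staging work through $PFSDIR can be combined into a single batch script. The following is a minimal sketch, not a site-mandated template: the #SBATCH resource values, the input file name input.dat, and the output name result.out are placeholder assumptions you should replace with your own. The $PFSDIR fallback to a local temp directory is only there so the script can be dry-run outside the cluster.

```shell
#!/bin/bash
#SBATCH -N 1                      # placeholder: one node
#SBATCH -n 1                      # placeholder: one task
#SBATCH --time=00:10:00           # placeholder: walltime
#SBATCH --job-name=scratch_demo   # placeholder job name

# On the cluster, $PFSDIR points at per-job parallel scratch space.
# Fall back to a local temp directory so this sketch can be dry-run.
SCRATCH="${PFSDIR:-$(mktemp -d)}"
SUBMITDIR="${SLURM_SUBMIT_DIR:-$PWD}"

# Stage input into scratch (input.dat is a hypothetical input file).
cp "$SUBMITDIR"/input.dat "$SCRATCH"/ 2>/dev/null || true
cd "$SCRATCH"

# Placeholder workload -- replace with your real program invocation.
echo "job ran on $(hostname)" > result.out

# Copy results back to the submission directory before the job ends,
# since scratch space may be purged after the job finishes.
cp result.out "$SUBMITDIR"/
```

Submitting this with "sbatch" keeps the heavy I/O on scratch storage and off the login node, which is exactly what the notes above require.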