Looking for a Quick Start Guide? Click here.
This guide explains how to use the HPC system efficiently and should enable you to run your jobs. If you run into a problem, first try to resolve it on your own using the HPC FAQ. If you still have questions, please email us at:
- Please DO NOT use the login nodes (e.g. hpc1 or hpc2) to run your jobs. Always submit jobs with the "sbatch" command. For interactive work such as graphics (e.g. MATLAB), scripts, and other STDIO, use the command "srun --x11 --pty bash", which assigns you a compute node. Jobs running on a login node will be killed. If you have already started a job on a login node, cancel it with the command "kill <PID>"; you can find the PID by running the "top" command on the login node.
- Each group has storage quota limits. For more information on storage limits and disk usage, please refer to Computational Resources. Access to resources is governed by the Access Policies.
- Please go through the HPC FAQ carefully if you encounter any issues. It also covers important topics such as accessing the cluster, X-forwarding for graphics, submitting and monitoring jobs, usage policies, and technical help.
- If you are trying to access the cluster from outside the Case network, you need to connect to the VPN first. Please use CWRU's VPN site https://vpnsetup.case.edu.
- Use $PFSDIR instead of $TMPDIR as scratch space for your job if it reads or writes large files or needs substantial temporary storage.
- If your job creates a large number of output files, or the number of files in a single directory is very large, follow the Panasas Storage Guideline for Huge Directory.
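To illustrate the batch-submission workflow described above, here is a minimal SLURM job script; the job name, resource values, and executable are hypothetical placeholders, so adjust them to your own workload:

```shell
#!/bin/bash
#SBATCH --job-name=myjob        # hypothetical job name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00         # adjust to your job's expected runtime

# Runs on the compute node that SLURM assigns, not on the login node
./my_program input.dat          # placeholder for your actual executable
```

Submit it from a login node with `sbatch myjob.slurm`; for interactive work, request a compute node with `srun --x11 --pty bash` as noted above instead of running the program directly.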
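As a sketch of using $PFSDIR as scratch space inside a job script (the file and program names are placeholders), the pattern is to copy input to scratch, run there, and copy results back before the job ends, since scratch space is not permanent:

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=01:00:00

# Stage input into the parallel scratch area and work there
cp input.dat "$PFSDIR"/
cd "$PFSDIR"
./my_program input.dat > results.out   # placeholder executable

# Copy results back to the directory the job was submitted from
cp results.out "$SLURM_SUBMIT_DIR"/
```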
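One common way to keep individual directories small, in the spirit of the guideline above, is to hash output files into subdirectories. This sketch (file names and counts are purely illustrative) spreads 100 files across 10 subdirectories so no single directory holds them all:

```shell
#!/bin/sh
# Spread many small output files across subdirectories
# (illustrative names and counts)
mkdir -p out
i=1
while [ "$i" -le 100 ]; do
  sub=$(( i % 10 ))            # pick one of 10 subdirectories
  mkdir -p "out/$sub"
  : > "out/$sub/file_$i.dat"   # create an (empty) output file
  i=$(( i + 1 ))
done
```

The same pattern applies inside a job script: any loop that writes thousands of files can compute a subdirectory from a counter or a file-name hash first.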