Welcome to the Social Sciences Research & Development Environment (SSRDE) cluster.
This guide will help you connect, run jobs, and manage data on the system.
⚠️ Important: You have a 100GB quota for your home directory, with a grace allowance of up to 1TB for up to 35 days. If you remain over quota for 35 days, your home directory becomes read-only until you are back under the 100GB quota.
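To check how much space you are using, a generic command such as the one below works on most Linux systems (this is only a sketch; the cluster may also provide its own quota-reporting tool):
du -sh ~    # total size of your home directory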
⚠️ Important: The new cluster has a single default partition, "general". You no longer need to specify a partition name (general_short, general_gpu) in your job submissions; old submission scripts that still specify those now-nonexistent partitions will be rejected.
⚠️ Important: The new cluster has a new way of requesting GPUs. To request a GPU for your job, add the following line to your submission script:
#SBATCH --gres=gpu:1
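For example, a minimal single-GPU batch script might look like the following sketch (the time limit, the nvidia-smi call, and the placeholder program are illustrative assumptions; substitute your actual workload):
#!/bin/bash
#SBATCH --gres=gpu:1          # request one GPU
#SBATCH --time=01:00:00       # one-hour wall-clock limit (adjust as needed)
nvidia-smi                    # print the GPU assigned to this job (assumes NVIDIA drivers are present)
# ./your_gpu_program          # placeholder: replace with your actual GPU workload
exit 0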
Connecting
ssh yourusername@ssrde.ucsd.edu
Use your UCSD AD credentials.
Off-campus access requires VPN.
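If you connect often, a host entry in your local ~/.ssh/config saves typing (the alias "ssrde" below is arbitrary; this is a convenience sketch, not a site requirement):
Host ssrde
    HostName ssrde.ucsd.edu
    User yourusername
After that, ssh ssrde is enough.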
Home Directory
/home/yourusername
42 TB total storage
Backups kept for 30 days
By request from your PI, cube/pentagon/sphere/brick storage can be mounted.
File Transfer
SFTP:
sftp://ssrde.ucsd.edu
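From a terminal, an equivalent SFTP session looks like this (the file names are placeholders):
sftp yourusername@ssrde.ucsd.edu
Then, inside the session, put uploads a file and get downloads one:
put mydata.csv
get results.out
quit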
SMB (requires VPN unless wired in a Social Sciences office):
Windows: \\polygon.ucsd.edu\ssrde-storage_home
Mac: smb://polygon.ucsd.edu/ssrde-storage_home
Do not run compute jobs on the login node. Always use Slurm to submit jobs.
Example: Simple Batch Job
Create a file called testjob:
#!/bin/bash
echo This is my sample script
uname -a # system info
hostname # current hostname
date # current date/time
exit 0
Make it executable:
chmod u+rx ./testjob
Submit with Slurm:
sbatch ./testjob
Output appears in:
slurm-<jobid>.out
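If you prefer a custom output file name, you can add directives like these near the top of the script (the job name "mytest" is arbitrary; %j expands to the job ID):
#SBATCH -J mytest
#SBATCH -o mytest-%j.out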
Using a Partition
Partitions (queues) can carry special hardware or limits. On SSRDE the only partition is "general", so the -p flag is optional. Example:
sbatch -p general ./testjob
Matlab job file (testjob.m)
disp("this is a simple matlab job")
bench
quit
Submission script (runmatlabjob)
#!/bin/bash
echo "Starting matlab using $1 as input"
module load matlab
matlab -nodisplay < "$1"
exit 0
Submit the Matlab job:
sbatch ./runmatlabjob testjob.m
Output is written to:
slurm-<jobid>.out
Check queues
sinfo
Check your jobs
squeue -u yourusername
Cancel a job
scancel <jobid>
Cancel all your jobs
scancel -u $(whoami)
Never run compute jobs on the login node — always use sbatch or srun.
Cluster storage is temporary workspace — archive long-term data elsewhere.
Request the right hardware: GPU jobs should use --gres=gpu:1, and CPU jobs need no partition flag since general is the only partition.
Be explicit with resources (--mem-per-cpu, --time, --ntasks); see the example script after this list.
Set job names and outputs (-J, -o job-%j.out) to track jobs easily.
Use interactive jobs (srun --pty bash) only for testing or debugging.
Check job limits — maximum runtime is 7 days; jobs longer than this are killed.
Monitor resource usage with sacct and adjust future requests for efficiency.
Cancel unwanted jobs promptly to free resources for others.
Use man pages for help (e.g. man sbatch, man sinfo).
Start small and scale up — test your scripts with minimal resources before large jobs.
Be considerate — avoid requesting excessive CPUs/memory unless required.
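Putting several of these tips together, a resource-explicit submission script might look like the following sketch (the job name, resource values, and placeholder program are illustrative assumptions, not site recommendations):
#!/bin/bash
#SBATCH -J mysim                  # job name
#SBATCH -o mysim-%j.out           # output file; %j expands to the job ID
#SBATCH --ntasks=1                # number of tasks
#SBATCH --mem-per-cpu=4G          # memory per CPU
#SBATCH --time=02:00:00           # wall-clock limit (well under the 7-day maximum)
echo "Starting on $(hostname)"
# ./your_program                  # placeholder: replace with your actual workload
exit 0
Submit it with sbatch as usual. Afterwards, sacct -j <jobid> shows what the job actually used, and srun --pty bash remains the way to get a short interactive shell for testing or debugging.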