Sól is a computational cluster hosted at the Science Institute, University of Iceland. It is used by researchers in chemistry, physics, and other fields.
To use the cluster you need an account (username and password). You will then get a user directory where you can keep your files and from which you can submit jobs to the cluster. Submitting a job means handing a script or program to the cluster's queuing system (called PBS).
You connect to Sól using an SSH client (like Bitvise on Windows, or the Terminal app on Mac). The Sól hostname is sol.raunvis.hi.is and the recommended port number is 1047.
An SFTP client (like Bitvise on Windows or Filezilla on Mac/Windows) can also connect to Sól for easy file transfer (but remember that you need to tell the program to connect via SFTP, not regular FTP). See the Setting up programs page for more on this.
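If you prefer the command line for file transfer (on Mac/Linux), the standard OpenSSH tools sftp and scp can do the same job. Note that, unlike ssh, they take the port number with a capital -P, and myfile.inp below is just a placeholder filename:
sftp -P 1047 username@sol.raunvis.hi.is # Starts an interactive file-transfer session
scp -P 1047 myfile.inp username@sol.raunvis.hi.is: # Copies a single file to your home directory on Sól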
On Mac/Linux you simply do: ssh -p 1047 username@sol.raunvis.hi.is and then type in your password. Note that when you type the password, nothing shows up on the screen; this is normal.
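To avoid typing the port number and hostname every time, you can add an entry to the file ~/.ssh/config on your own machine (this is standard OpenSSH configuration, not something specific to Sól; replace username with your actual username):
Host sol
    HostName sol.raunvis.hi.is
    Port 1047
    User username
After that, typing ssh sol is enough to connect.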
Sól is a Linux cluster running CentOS. Almost everything on Sól is performed via the Linux/Unix command line. See The command line page for the important commands to learn.
Create a directory for your exercise on Sól with your inputfile
Here are some basic commands to get started. You will need to learn a few others as well; check the page 2. The command line regularly to learn new commands and refresh your memory.
mkdir exercise1 # This creates a directory called exercise1
cd exercise1 # This enters the exercise1 directory
pwd # This allows you to confirm that you are in the directory called exercise1
nano h2o-exercise1.inp # This launches the program nano and an empty file called h2o-exercise1.inp. This will be our ORCA inputfile.
# You can either write your inputfile information here or copy/paste information you have into the file (a minimal example inputfile is sketched below these commands).
Ctrl+O # This nano command saves the changes to the file h2o-exercise1.inp. nano asks for the filename to write; press Enter to confirm.
Ctrl+X # This nano command exits nano. These commands are listed at the bottom of the window when inside nano.
ls # This command shows the files in the directory you are in.
ls -ltr # This command gives a long list view of the files in the directory, sorted by modification time in reverse (newest files at the bottom).
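After saving the inputfile, ls -ltr should show something like the following (the owner, group, size and date shown here are just placeholders and will differ for you):
-rw-r--r-- 1 username users 289 Sep 10 14:02 h2o-exercise1.inp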
If you want to make more changes to your inputfile called h2o-exercise1.inp you can simply do:
nano h2o-exercise1.inp #Then Ctrl+O to save and Ctrl+X to quit.
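If you do not have inputfile contents to paste in yet, here is a minimal sketch of what a simple ORCA inputfile for a water molecule could look like (the B3LYP functional, def2-SVP basis set and the coordinates are only illustrative; use whatever the exercise specifies):
! B3LYP def2-SVP Opt

* xyz 0 1
O    0.000000    0.000000    0.000000
H    0.000000    0.757000    0.586000
H    0.000000   -0.757000    0.586000
*
The line starting with ! is the simpleinput line, the 0 and 1 after xyz are the charge and spin multiplicity, and the coordinates are in Ångström.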
Submitting ORCA jobs
To submit ORCA jobs to Sól it is easiest to use the suborca command that should be available to you (let me know if this is not the case).
You use suborca typically like this:
suborca filename.inp
where filename.inp is the name of the ORCA inputfile that you have created and want to submit to the cluster (how to create the ORCA inputfile is explained in 3. Preparing a molecule for calculation). By default, suborca will submit calculations to the teaching queue and will make use of specially assigned teaching nodes if they are available.
There are a few other queues on the cluster that you can also submit to, depending on the estimated length of the job. If you have a short job that should finish in less than 2 hours (this goes for almost all the simple exercises in the course), then you simply submit to the short queue like this:
suborca filename.inp
or
suborca -s filename.inp
If you have a job that will take somewhere between 2 hours and 48 hours then you should submit to the medium queue instead:
suborca -m filename.inp
If you have a job that will take somewhere between 48 hours and 288 hours then you should submit to the long queue:
suborca -l filename.inp
If you have a job that will take somewhere between 288 hours and 864 hours then you should submit to the verylong queue:
suborca -vl filename.inp
You can always type suborca to get a reminder of these commands.
Should you submit a job that takes longer than the queue allows (once the job starts), your job will be killed while still running. You then usually need to restart it, typically by replacing the input geometry with the last geometry from the killed job, as sketched below.
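During a geometry optimization, ORCA writes the most recent geometry to a .xyz file named after your inputfile (h2o-exercise1.xyz in our example), so a convenient way to restart is to read that file directly instead of pasting coordinates by hand. A minimal sketch, again with a placeholder method and basis set, assuming charge 0 and multiplicity 1:
! B3LYP def2-SVP Opt

* xyzfile 0 1 h2o-exercise1.xyz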
The suborca script will automatically look inside your inputfile and check how many cores you have requested for your ORCA job. The default is 1 core, but this can be changed with the PalX keyword: adding the Pal8 keyword to the simpleinput line (the one that begins with a !) requests 8 cores for the ORCA job.
The suborca script will realize this and will request a whole 8-core node from the queuing system.
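As an example, a simpleinput line requesting 8 cores could look like this (the method and basis set are again just placeholders):
! B3LYP def2-SVP Opt Pal8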
The PBS queuing system
Useful queuing system commands:
qstat Check all the jobs running on the cluster:
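The exact column layout depends on the PBS version, but the listing looks roughly like this (an illustrative sample, not verbatim output):

Job ID                 Name       User      Time Use S Queue
---------------------  ---------  --------  -------- - -----
16821.sol              test       rbjorns   00:00:00 R short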
The output above shows that a job called "test" has just been submitted by rbjorns (me) and has been assigned Job ID 16821. The job was submitted to the short queue and immediately got an R label, which means the job is running (a Q would mean that it is waiting in the queue).
showq Another way of checking the jobs running.
qstat -u username Check how your own jobs are doing (where username is your username).
qdel JobID Delete a job that you submitted and no longer want (give the JobID that you see using qstat; an example is shown below this list). ORCA jobs that finish are automatically deleted from the queue.
pbsnodes Shows the status of all nodes on the cluster.
diagnose -n Also shows the status of nodes.
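As an example of qdel: to delete the job with Job ID 16821 from the qstat listing above, you would type:
qdel 16821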