NERSC Guidelines

Getting Started

General

NERSC is a DOE-funded computing facility housed at Lawrence Berkeley National Laboratory. The main cluster, Hopper, has over 150,000 compute cores and 200 TB of memory. Anything not covered here is most likely documented at www.nersc.gov, which is an excellent resource.

About the group allocation

    • 1.5 million hours for 2013 group allocation

    • Hours can be allocated to anyone within the group who has a NERSC account

    • The allocation is reviewed annually, so keep an eye on the renewal deadlines

Creating an account

Send the following information to the PI proxy for the group (currently Kyle):

    1. Full first and last name

    2. Preferred user name

    3. Country of citizenship

    4. Email address

    5. Telephone number

The PI proxy can add a new user via NIM:

    1. Select Actions -> Add/Revive User

    2. Enter the user's last name

    3. If the user does not already exist, select "Add new user to NIM"

NIM (NERSC Information Management)

The NIM website, nim.nersc.gov, is where personal information for each account can be accessed. There you can check your account balance, change settings, etc.

New Users: First Steps

Setting your default shell

By default, a new user's shell is set to csh. This can be changed as follows:

    1. Log onto NIM

    2. Go to the "Actions" drop-down menu, select "Change Shell"

    3. Select your shell for each machine

Setting rc and profile files

The following table lists the startup files that are write protected and the corresponding files that should be edited instead. In each case the editable file (e.g., .bashrc.ext) is automatically sourced by the protected file (e.g., .bashrc).

Write-protected file    File to edit instead
.bashrc                 .bashrc.ext
.bash_profile           .bash_profile.ext
.cshrc                  .cshrc.ext
.login                  .login.ext
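For example, a line appended to .bashrc.ext takes effect at the next login because .bashrc sources it automatically (the alias here is only an illustration):

# In ~/.bashrc.ext, not ~/.bashrc (which is write protected)
alias sq='qstat -u $USER'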

Access to VASP binaries

Contact the PI proxy to get access to the VASP binaries that have been compiled by the NERSC admins and optimized for their systems. The PI proxy should follow the steps at www.nersc.gov/users/software/applications/materials-science/vasp.

Once you have access to the VASP binaries, you can use them by loading the vasp module using module load vasp (this can go into your .bashrc.ext file).
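For example, putting the line in .bashrc.ext loads the module at every login, and which confirms the binary is on your path:

# In ~/.bashrc.ext: load the NERSC-built VASP module
module load vasp

# Verify from the command line that the binary was found
which vasp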

As an alternative, we have compiled our own VASP binaries. These should be located in the group shared directory.

Information for Users

Group shared directory

All members of the group have read/write/execute access to /global/homes/k/kylemich/share so that common files and programs can be stored there. *Please keep in mind that this is in Kyle's home directory, so keep files small and delete anything that is no longer needed whenever possible.*

Common executables, including VASP, are located in /global/homes/k/kylemich/share/hopper/bin. It is a good idea to add this directory to your path with (in the bash shell) export PATH=$PATH:/global/homes/k/kylemich/share/hopper/bin, which can also be set in your .bashrc.ext file.
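For example (a sketch; the ls is only there to see what is currently shared):

# List the group's shared executables
ls /global/homes/k/kylemich/share/hopper/bin

# Add them to your path (this line can go in ~/.bashrc.ext)
export PATH=$PATH:/global/homes/k/kylemich/share/hopper/bin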

Common commands

    • ssh username@hopper.nersc.gov to login to hopper, where username is your own username

    • qstat to view the entire queue, or qstat -u username to view the jobs in the queue for username

    • qsub jobname to submit the job script jobname

    • module list to view all modules that are currently loaded

    • module avail to view all modules that are available

    • module load modulename to load the module modulename
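Putting these together, a typical session might look like the following (username and jobname are placeholders):

ssh username@hopper.nersc.gov   # log in to Hopper
module load vasp                # load the VASP module
qsub jobname                    # submit a job script
qstat -u username               # check the job's place in the queue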

Available software at NERSC

There are a number of programs that have been installed by the NERSC admins. To see a complete list, go to http://www.nersc.gov/users/software/applications.

Queueing policies

    • Become familiar with the queue policies at http://www.nersc.gov/users/computational-systems/hopper/running-jobs/queues-and-policies. Note that you can explicitly submit a job as low priority and be charged for only half the hours that you use (although it will take longer to get through the queue), or as high priority to move through the queue more quickly at the cost of being charged twice the hours that you use (see the sketch after this list).

    • It is possible to bundle jobs together and, assuming that you reach the medium-job limit, be charged for only 60% of the hours that you use. If you do bundle jobs, try to ensure that each will finish at nearly the same time, since hours are wasted when some jobs have finished but others are still running.
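The priority is selected with the -q directive in the submission script. A sketch, assuming the queue names low and premium from the Hopper queue policy page (regular, used in the scripts below, is the default-priority queue):

#PBS -q low       # half charge, but a longer wait in the queue
#PBS -q premium   # double charge, but a shorter wait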

Acknowledging NERSC

Please add the following acknowledgment to any papers you write that use NERSC resources. This is very important!

This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

Submission scripts

General information

The scripts below are for the PBS batch system on Hopper: the #PBS lines request a queue, a core count (mppwidth), and a wall-clock limit, and the script is submitted with qsub. Hopper has 24 cores per node, so mppwidth should be a multiple of 24.

Single job

#!/bin/bash
#PBS -q regular
#PBS -l mppwidth=24
#PBS -l walltime=12:00:00

# Set the executable and number of cores per job
exe=vasp
nproc=24

# Move to working directory and submit job
cd $PBS_O_WORKDIR
aprun -n $nproc $exe >& stdout &

# Wait for completion
wait
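Submit this script with qsub jobname from the directory containing the VASP input files; qstat -u username shows its progress through the queue.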

Bundled job

#!/bin/bash
#PBS -q regular
#PBS -l mppwidth=48
#PBS -l walltime=12:00:00

# Set the executable and number of cores per job
exe=vasp
nproc=24

# Move to working directory
cd $PBS_O_WORKDIR

# Move to first directory and submit
cd DIR1
aprun -n $nproc $exe >& stdout &

# Move to second directory and submit
cd ../DIR2
aprun -n $nproc $exe >& stdout &

# Wait for completion
wait
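To bundle more than two jobs, a loop over the directories keeps the script short. A sketch, assuming the same header as above with each DIRn containing a complete set of VASP inputs and mppwidth raised to cover every job (72 for the three 24-core runs here):

# Launch one 24-core VASP run in each directory
for d in DIR1 DIR2 DIR3; do
    cd $PBS_O_WORKDIR/$d
    aprun -n $nproc $exe >& stdout &
done

# Wait for all runs to finish
wait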