
Jupyter Notebook

1. Overview

In addition to traditional SSH access, HPCS provides an interactive Jupyter Notebook service that offers a more flexible way to access LRC resources.

To access this service, please follow these steps:
  1. Log in with your LRC username and one-time password (OTP). The initial Jupyter screen presents a "Start My Server" button; click it.
  2. On the next screen, "Spawner options", a dropdown box lets you select how the Notebook server should be spawned. For testing purposes, select "Local Server". If you need to run serious computation with the Notebook, select "LR2", "LR3", "LR4", or "MAKO", which spawn the server into the respective partition. These options are currently limited to a single node and 8 hours of runtime.
  3. After selecting "Local Server" you should land in your home directory. From the "New" dropdown menu (next to "Upload", near the top right of the screen) select "Python 2.7"; you should now be in a Notebook with full support of the python/2.7 module tree that we provide on the cluster.
This should provide a fully working Jupyter environment. To start working with Jupyter Notebooks, please see the Jupyter Documentation.
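As a quick sanity check in a fresh notebook, a first cell like the following minimal sketch (nothing cluster-specific is assumed) confirms which interpreter backs the kernel:

```python
# Minimal first-cell sanity check for a new notebook session
import sys

# The interpreter path should point at the python/2.7 module tree if you
# selected the "Python 2.7" kernel above
print(sys.executable)
print(sys.version.split()[0])
```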

2. IPython Clusters

If you need to set up one or more IPython Clusters for parallel computing tasks, a little more configuration on the user side is needed. Please follow this step:

Log in to the cluster via a terminal (or, in Jupyterhub, start a "Terminal" session from the "New" dropdown menu), then run the following command.

/global/software/sl-7.x86_64/modules/langs/python/3.5/bin/ipcluster nbextension enable --user

Now, on the Jupyter page, the "Clusters" tab should be renamed "IPython Clusters". If it is not, you will need to restart your Jupyter server: click the "Control Panel" button at the upper-right corner (next to the "Logout" button), click "Stop My Server", wait a few seconds on the next screen, and then click "My Server". (If you click too quickly you may see a "The page isn't redirecting properly" error; this is a timing issue, and refreshing the page will fix it.) The "Clusters" tab should now read "IPython Clusters".

If you click the "IPython Clusters" tab you will see a "default" profile, with which you can start a local IPython Cluster with a user-specified number of engines. If you are just testing the basic IPython Cluster concept, the "default" profile is sufficient. However, for serious computation you will need to create your own configurable profile(s). Please follow these steps:

  1. Log in to the cluster via a terminal (or, in Jupyterhub, start a "Terminal" session from the "New" dropdown menu) and run

    ipython profile create --parallel --profile=lr2

    Note that the profile name could be anything; "lr2" is used here as an example.
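    After running the command above, the profile directory should contain the configuration files edited in the next step. A quick stdlib-only check (assuming the example profile name "lr2"):

```python
# Check (stdlib only) that "ipython profile create --parallel" generated
# the expected parallel config files; assumes the example profile name "lr2"
import os

profile_dir = os.path.expanduser('~/.ipython/profile_lr2')
config_files = ('ipcontroller_config.py', 'ipengine_config.py', 'ipcluster_config.py')
for name in config_files:
    path = os.path.join(profile_dir, name)
    print(name, 'found' if os.path.exists(path) else 'MISSING')
```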

  2. Within the same terminal, cd to "$HOME/.ipython/profile_lr2" (the profile name has to match the one just created) and add the following contents to the end of "ipcontroller_config.py",

    import netifaces
    c.IPControllerApp.location = netifaces.ifaddresses('eth0')[netifaces.AF_INET][0]['addr']
    c.HubFactory.ip = '*'
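    As an aside: if the netifaces package is unavailable, or the interface on your node is not named "eth0", a stdlib-only alternative sketch for determining the node's IP address is:

```python
# Stdlib-only sketch for finding this node's primary IP address, as an
# alternative to netifaces (assumes the node has a routable interface)
import socket

def node_ip():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # A UDP connect sends no packets; it only selects the interface
        # the kernel would route through
        s.connect(('10.255.255.255', 1))
        return s.getsockname()[0]
    except OSError:
        return '127.0.0.1'
    finally:
        s.close()

# e.g. in ipcontroller_config.py:
# c.IPControllerApp.location = node_ip()
```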

    then add the following contents to the end of "ipcluster_config.py",

    #import uuid
    #c.BaseParallelApplication.cluster_id = str(uuid.uuid4())
    c.IPClusterStart.controller_launcher_class = 'SlurmControllerLauncher'
    c.IPClusterEngines.engine_launcher_class = 'SlurmEngineSetLauncher'
    c.IPClusterEngines.n = 12
    c.SlurmLauncher.queue = 'lr2'
    c.SlurmLauncher.account = 'fc_xyz'
    c.SlurmLauncher.qos = 'lr_normal'
    c.SlurmLauncher.timelimit = '8:0:0'
    #c.SlurmLauncher.options = '--export=ALL --mem=10g'
    c.SlurmControllerLauncher.batch_template = '''#!/bin/bash -l
    #SBATCH --job-name=ipcontroller-{cluster_id}
    #SBATCH --partition={queue}
    #SBATCH --account={account}
    #SBATCH --qos={qos}
    #SBATCH --ntasks=1
    #SBATCH --time={timelimit}

    /global/software/sl-7.x86_64/modules/langs/python/3.5/bin/ipcontroller --profile-dir={profile_dir} --cluster-id="{cluster_id}"
    '''
    c.SlurmEngineSetLauncher.batch_template = '''#!/bin/bash -l
    #SBATCH --job-name=ipcluster-{cluster_id}
    #SBATCH --partition={queue}
    #SBATCH --account={account}
    #SBATCH --qos={qos}
    #SBATCH --ntasks={n}
    #SBATCH --time={timelimit}

    sleep 10
    srun /global/software/sl-7.x86_64/modules/langs/python/3.5/bin/ipengine --profile-dir={profile_dir} --cluster-id="{cluster_id}"
    '''
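    The batch templates above are ordinary Python format strings: when the cluster starts, the launcher substitutes the {queue}, {account}, {qos}, {timelimit}, {n}, {profile_dir}, and {cluster_id} placeholders. A small sketch of what a submitted engine script looks like after substitution (the values here are illustrative, not defaults):

```python
# Illustrative only: show how a launcher fills the {...} placeholders
# in a batch template via Python string formatting
template = '''#!/bin/bash -l
#SBATCH --job-name=ipcluster-{cluster_id}
#SBATCH --partition={queue}
#SBATCH --account={account}
#SBATCH --qos={qos}
#SBATCH --ntasks={n}
#SBATCH --time={timelimit}
'''

script = template.format(cluster_id='abc123', queue='lr2', account='fc_xyz',
                         qos='lr_normal', n=12, timelimit='8:0:0')
print(script)
```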
    Note that all the comment lines above are for demonstration purposes; you can choose to use them or not. After pasting this set of default values into the end of "ipcluster_config.py", you will need to at least examine, and likely change, the values of the following entries to match a batch job configuration: your cluster scheduler account name (e.g. 'ac_something', 'co_something', ...), the queue (partition) and QoS on which you want to launch the cluster, and the wall-clock time that the cluster will remain active.

    c.SlurmLauncher.queue =
    c.SlurmLauncher.account =
    c.SlurmLauncher.qos =
    c.SlurmLauncher.timelimit =

    After adding and configuring these settings, you can go back to the Jupyterhub "IPython Clusters" tab to start a new IPython Cluster under your newly created profile, with a selected number of engines.

To begin working with your new IPython Cluster, please see the IPython Parallel Documentation.
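As a hedged sketch of first use (assuming the example profile name "lr2" created above, that ipyparallel is available, and that the cluster has been started from the "IPython Clusters" tab), connecting and running a parallel map looks roughly like:

```python
# Hedged sketch: run a function across all engines of a started cluster.
# The profile name 'lr2' is the example created above, not a default.

def square(x):
    return x * x

def run_on_cluster(profile='lr2'):
    # Import inside the function so this sketch loads without a cluster present
    from ipyparallel import Client
    rc = Client(profile=profile)  # connect to the running controller
    view = rc[:]                  # a DirectView over all engines
    return view.map_sync(square, range(16))

# With the cluster running, run_on_cluster() would return the squares of
# 0..15, computed across the engines.
```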

3. New Kernels

If the default kernels that are provided do not meet your computation requirements, or if you need to use non-standard packages that you have installed yourself, you can install a new kernel. Please review the IPython Kernel Specs documentation for details.

To add the kernel to your Jupyter environment, please use the following "kernel.json" file as a template. You will need to put this file in a "$HOME/.ipython/kernels/mykernel" directory, where "mykernel" should be a name that is meaningful to you, especially if you install multiple kernels this way.

"argv": [
"language": "python",
"display_name": "mykernel",
"env": {
"PATH" : "/bin:/usr/bin:/usr/local/bin:/path/to/myexec",
"LD_LIBRARY_PATH": "/path/to/mylib",
"PYTHONPATH" : "/path/to/mypythonlib"

Make sure $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, and other environment variables that you may use in the kernel are properly populated with the correct values.
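A quick way to verify this from a notebook running the new kernel is a cell like the following sketch:

```python
# Sanity-check cell for a custom kernel: confirm the interpreter and the
# environment variables defined in kernel.json are what you expect
import os
import sys

print('interpreter:', sys.executable)
for var in ('PATH', 'LD_LIBRARY_PATH', 'PYTHONPATH'):
    print(var, '=', os.environ.get(var, '(unset)'))
```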