A CMSSW working environment is created and jobs are executed on this cluster as on any other CMS machine. After you log in, set up the environment by executing:
source /cvmfs/cms.cern.ch/cmsset_default.sh
export CMSSW_GIT_REFERENCE=/cvmfs/cms.cern.ch/cmssw.git.daily
cmsrel CMSSW_X_Y_Z
cd CMSSW_X_Y_Z/src/
cmsenv
cmsRun yourConfig.py
Further details on CMSSW can be found in the Workbook and in the tutorials.
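If you do not already have a configuration file, a minimal sketch of what yourConfig.py could contain is shown below; the process name, input file path, and event count are placeholders for illustration and should be replaced with your own.
import FWCore.ParameterSet.Config as cms

process = cms.Process("Demo")
process.load("FWCore.MessageService.MessageLogger_cfi")

# Process only a few events while testing
process.maxEvents = cms.untracked.PSet(input=cms.untracked.int32(10))

# Replace with the path(s) to your own EDM input file(s)
process.source = cms.Source("PoolSource",
    fileNames=cms.untracked.vstring("file:/path/to/your/input.root")
)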
Many additional software packages are available through the LHC Computing Grid (LCG). For more information on what is available, consult the LCG software information; you can find a list of packages at lcginfo.cern.ch. To set up the software from cvmfs, source the appropriate LCG view setup script (setup.sh for bash, setup.csh for tcsh). A specific example version is given below; pick your own preferred LCG_** release.
Note that /cvmfs/sft.cern.ch/lcg/mapfile.txt contains the list of all the available software packages/directories:
source /cvmfs/sft.cern.ch/lcg/views/LCG_96/x86_64-centos7-gcc8-opt/setup.sh
Note that this may set different versions of Python and ROOT than your typical CMSSW release, so you may not want to use this command in the same shell as a CMSSW software environment.
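To confirm which Python and ROOT the LCG view actually provides (a quick sanity check, assuming the setup script above has been sourced in the current shell), you can run a few lines of Python:
# Print the interpreter and ROOT version picked up from the LCG view
import sys
import ROOT
print("python:", sys.executable)
print("ROOT:", ROOT.gROOT.GetVersion())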
You can also configure a standalone ROOT (not in CMSSW!); be sure your gcc compiler is a similar version (note that the LCG environment above will pick up a standalone ROOT). This standalone ROOT will NOT work from Python; for that you want the complete LCG setup, in which matching ROOT and Python versions are both in your environment.
For the most recent version of ROOT, browse here:
$ ls /cvmfs/sft.cern.ch/lcg/releases/LCG_latest/ROOT
# you can set up ROOT by executing the following line (version 6.20.00)
$ source /cvmfs/sft.cern.ch/lcg/releases/LCG_latest/ROOT/6.20.00/x86_64-centos7-gcc8-opt/bin/thisroot.sh
Python3
Python 3 is installed by default on all the interactive nodes; to find the version, type python3 --version.
A CMSSW release may come with its own, different Python version. CMSSW_10_1_0 and above ship with both Python 2 and Python 3. To find out which python you are using, run: which python
You can run it with:
$ python3
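From inside the interpreter you can also confirm exactly which Python you picked up (for example, a system Python versus one provided by CMSSW or an LCG view):
# Print the path and version of the running interpreter
import sys
print(sys.executable)
print(sys.version)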
hepcms-henrietta.umd.edu is an EL9 machine.
To run a Jupyter notebook on hepcms-henrietta.umd.edu, use the following.
ssh -L 8080:localhost:8080 {username}@hepcms-henrietta.umd.edu
Once logged in, run the following commands:
source /etc/profile.d/conda.sh
conda activate jupyter_ml
jupyter lab --no-browser --port=8080
It will print information like:
To access the server, open this file in a browser:
file:///home/bhatti/.local/share/jupyter/runtime/jpserver-969961-open.html
Or copy and paste one of these URLs:
http://localhost:8080/lab?token=d71b685014a1382acfdef16e3201c92d42a5958b7a5d4304
http://127.0.0.1:8080/lab?token=d71b685014a1382acfdef16e3201c92d42a5958b7a5d4304
On your local machine, open the following URL in a browser:
http://localhost:8080/lab?token=d71b685014a1382acfdef16e3201c92d42a5958b7a5d4304
Please note that this environment may interfere with your CMSSW setup, as the two environments may use different versions of Python.
Spyder (available on hepcms-henrietta.umd.edu)
Spyder (Scientific Python Development Environment) is an open-source, cross-platform IDE (Integrated Development Environment) designed for scientific computing, data analysis, and machine learning in Python.
The jupyter_ml environment is not compatible with Spyder, so a separate environment is used.
Log in to hepcms-henrietta.umd.edu
source /etc/profile.d/conda.sh
conda activate spyder-env
spyder   # it will open the IDE environment
You can deactivate the conda environment using the command conda deactivate.
Jupyter Notebook on hepcms-franklin.umd.edu (EL8 machine with GPU support)
Jupyter Notebook and other related packages are installed on hepcms-franklin.umd.edu. To access them, set up Anaconda by running the following command (assuming you are using a bash shell):
eval "$(/franklin/local/anaconda3/bin/conda shell.bash hook)"
jupyter-notebook # to start notebook
# You can use ssh tunneling to open the notebook in a browser on your local machine (see below)
hepcms-franklin.umd.edu is a Dell server with four NVIDIA L40 GPUs. For example, TensorFlow reports the following GPU devices:
device /job:localhost/replica:0/task:0/device:GPU:0 with 43150 MB memory: -> device: 0, name: NVIDIA L40, pci bus id: 0000:4a:00.0, compute capability: 8.9
device /job:localhost/replica:0/task:0/device:GPU:1 with 43516 MB memory: -> device: 1, name: NVIDIA L40, pci bus id: 0000:61:00.0, compute capability: 8.9
device /job:localhost/replica:0/task:0/device:GPU:2 with 43516 MB memory: -> device: 2, name: NVIDIA L40, pci bus id: 0000:ca:00.0, compute capability: 8.9
device /job:localhost/replica:0/task:0/device:GPU:3 with 43516 MB memory: -> device: 3, name: NVIDIA L40, pci bus id: 0000:e1:00.0, compute capability: 8.9
hepcms-franklin.umd.edu has a local disk (/franklin/users/) which can be used to store a limited amount of data. It is not backed up, is only locally mounted, and cannot be accessed from other nodes. If you need a directory on this disk, please send an email to bhatti@umd.edu and jabeen@umd.edu.
On your local desktop (Windows cmd prompt, Linux terminal, or macOS terminal), connect to the remote server using ssh:
ssh -L 8080:localhost:8080 {USERNAME}@hepcms-franklin.umd.edu
On the remote cluster:
Set up the conda environment and start the notebook using the commands below.
eval "$(/franklin/local/anaconda3/bin/conda shell.bash hook)"
jupyter-notebook --no-browser --port=8080
On the server node, Jupyter Notebook will print something like:
To access the server, open this file in a browser:
file:///home/bhatti/.local/share/jupyter/runtime/jpserver-1976309-open.html
Or copy and paste one of these URLs:
http://localhost:8080/tree?token=b21a5e4e3bccc5c1d9e2718d55d5eae97a1d0d3434adbd52
http://127.0.0.1:8080/tree?token=b21a5e4e3bccc5c1d9e2718d55d5eae97a1d0d3434adbd52
Use one of the above links to open the Jupyter notebook in a browser on your local computer.
PyTorch/TensorFlow
One can run TensorFlow/PyTorch on hepcms-franklin.umd.edu in a Jupyter notebook by using the same commands as above.
In an open Jupyter notebook, one can run:
import torch
or
import tensorflow as tf
print("TensorFlow version:", tf.__version__)
Both PyTorch and TensorFlow are able to use the GPUs.
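As a quick check that the GPUs are visible from your notebook (a minimal sketch; the exact package versions installed on the node may differ), you can run:
# List the GPUs visible to PyTorch and TensorFlow
import torch
print("PyTorch sees", torch.cuda.device_count(), "GPU(s)")
for i in range(torch.cuda.device_count()):
    print("  ", i, torch.cuda.get_device_name(i))

import tensorflow as tf
print("TensorFlow sees", len(tf.config.list_physical_devices('GPU')), "GPU(s)")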
The system has four GPUs; please be considerate of other people using them.
Here is a ChatGPT-suggested snippet to restrict TensorFlow to a single GPU:
import tensorflow as tf

# List all GPUs
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Restrict TensorFlow to only the first GPU
        tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(f"{len(gpus)} Physical GPUs, {len(logical_gpus)} Logical GPUs")
    except RuntimeError as e:
        # Visible devices must be set before GPUs are initialized
        print(e)
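For PyTorch, one simple way to limit a job to a single GPU is to restrict the visible CUDA devices before torch initializes CUDA. This is a sketch of one possible approach, not the only one (you can also pass an explicit device such as cuda:0 to your tensors and models):
# Restrict this process to GPU 0 only; must be set before CUDA is initialized
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device, "- visible GPUs:", torch.cuda.device_count())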
Checking whether Qiskit is utilizing your GPU
Qiskit is installed and it can use the GPU. To see which devices the Aer simulator supports:
import qiskit
from qiskit_aer import AerSimulator
backend = AerSimulator()
print(backend.available_devices())
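If 'GPU' appears in the list of available devices, you can ask the Aer simulator to run on the GPU explicitly. The snippet below is a sketch using a simple Bell-state circuit; GPU execution requires a GPU-enabled build of qiskit-aer, so use device="CPU" if 'GPU' is not listed.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Simple two-qubit Bell-state circuit as a test job
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

backend = AerSimulator(method="statevector", device="GPU")
result = backend.run(transpile(qc, backend), shots=1024).result()
print(result.get_counts())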