This page is about the hic cluster at lbl.gov - to connect, use ssh hic.lbl.gov
The compute nodes are named hiccup<number> - see the layout below for more details
SLURM is installed (https://slurm.schedmd.com/quickstart.html); jobs are executed in per-core mode. There are two queues: std (max time 24 h) and quick (max time 2 h). Do not get too attached to those - we are still in testing mode; let me know if you need your own queue...
Please have jobs write their output to /storage (local on the nodes, mounted on the login node under /rstorage) as much as possible (not /home)
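The queue and output conventions above can be sketched as a minimal batch script. The queue names (std, quick) and the /storage convention come from the notes above; the job name, resource numbers, and the per-user directory layout under /storage are illustrative assumptions, not site policy:

```shell
#!/bin/bash
# Illustrative SLURM batch script for the hic cluster.
# Queues per the notes above: quick (2 h max) or std (24 h max).
#SBATCH --job-name=my_test_job                # placeholder name
#SBATCH --partition=quick                     # or: std
#SBATCH --time=01:30:00                       # stay under the queue limit
#SBATCH --ntasks=1                            # jobs run in per-core mode
#SBATCH --output=/storage/%u/logs/%x-%j.out   # write to /storage, not /home

echo "running on $(hostname)"
```

Submit with `sbatch myjob.sh` and monitor with `squeue -u $USER`. Note that the log directory named in --output must already exist on the node before the job starts.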
If you need a newer version of gcc (the default is 4.8.5), etc.:
scl enable devtoolset-7 bash
or, to keep the current shell:
source scl_source enable devtoolset-7
With this you should see:
> gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,lto --prefix=/opt/rh/devtoolset-7/root/usr --mandir=/opt/rh/devtoolset-7/root/usr/share/man --infodir=/opt/rh/devtoolset-7/root/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --enable-plugin --with-linker-hash-style=gnu --enable-initfini-array --with-default-libstdcxx-abi=gcc4-compatible --with-isl=/builddir/build/BUILD/gcc-7.3.1-20180303/obj-x86_64-redhat-linux/isl-install --enable-libmpx --enable-gnu-indirect-function --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 7.3.1 20180303 (Red Hat 7.3.1-5) (GCC)
A newer set is available, with gcc 9.3.1:
source scl_source enable devtoolset-9
To use https://github.com/matplo/heppy and https://matplo.github.io/pyjetty/ (both now public):
source scl_source enable devtoolset-7
module use /software/users/ploskon/pyjetty/modules
module load pyjetty/1.0
Note: heppy setup includes a ROOT 6 installation.
See more at https://matplo.github.io/pyjetty/
Test with:
$PYJETTY_DIR/pyjetty/examples/pythia_gen_fastjet_lund_test.py
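The environment setup and the test above can be combined into a single script; a sketch, assuming the pyjetty/1.0 module exports $PYJETTY_DIR and that the example script is executable (both suggested by the notes above):

```shell
#!/bin/bash
# Sketch: load the pyjetty environment, then run the bundled test.
source scl_source enable devtoolset-7
module use /software/users/ploskon/pyjetty/modules
module load pyjetty/1.0
# $PYJETTY_DIR is assumed to be set by the pyjetty/1.0 module.
$PYJETTY_DIR/pyjetty/examples/pythia_gen_fastjet_lund_test.py
```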
New integrations - August 2021
A heavy-ion (HI) background generator - TennGen
a background generator / underlying event tuned to data, developed by our colleagues at UTK (includes vn features)
ref. paper: https://arxiv.org/abs/2005.02320
integration with pyjetty: https://github.com/matplo/TennGen
notes:
made to work with ROOT 6 (the authors claim compatibility with ROOT 5 only)
test example: https://github.com/matplo/pyjetty/blob/master/pyjetty/sandbox/test_TennGen.py
A Glauber Monte Carlo - TGlauberMC
code from hepforge: https://tglaubermc.hepforge.org
ref. paper: https://arxiv.org/abs/1710.07098
integration with pyjetty (no longer a C macro - modified by MP): https://github.com/matplo/TGlauberMC
notes:
it now works with ROOT 6 (the authors claim compatibility with ROOT 5 only)
the pyjetty integration uses the latest version (3.2), written up in the ref. paper
the integration requires ROOT's MathMore
some features need tuning - access to data on different projectiles requires files in the local directory
examples:
To build both into your pyjetty (the [] denote optional arguments):
heppy_pipenv run <pyjettydir>/cpptools/build.sh [--tenngen] [--tglaubermc] [--clean]
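A sketch of a full rebuild-and-test cycle - the build flags come from the line above, while running the sandbox test through heppy_pipenv is an assumption (adjust to however you normally invoke python in that environment):

```shell
# Rebuild cpptools with both integrations enabled, then exercise the TennGen test.
heppy_pipenv run <pyjettydir>/cpptools/build.sh --tenngen --tglaubermc
heppy_pipenv run python <pyjettydir>/pyjetty/sandbox/test_TennGen.py
```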
You may find /scratch useful - it is not exported (both /storage and /scratch are on the local disks of the nodes). Please do not flood /scratch - it may render the node unusable.
the /storage of each active node is visible at /rstorage on hiccup0 (the login node), or under /remote_storage/hiccupX
/rstorage is re-exported to all nodes, but read-only
drop your software into /home/software/users/ (mounted read-write on all nodes, just as /home is)
this is an interim setup and we are looking into better solutions; it allows:
- writing to all disks from the login node hiccup0 via /rstorage
- writing to the local disk of each node at /storage
- reading from all disks on the compute nodes via /rstorage
- read & write access to /home/<uname>
software installed on hiccup0 under /home/software/users is of course also visible on the nodes
If you need software or a service installed by root, just drop me an email (mploskon@lbl.gov)
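The read/write rules above, in practice - only the mount points come from the notes; the per-user directory layout and file names are assumptions for illustration:

```shell
# On a compute node: write results to the node-local disk.
mkdir -p /storage/$USER/myjob
cp results.root /storage/$USER/myjob/

# On hiccup0 (the login node): the same files are visible (and writable) under
# /rstorage, or per node under /remote_storage/hiccupX.
ls /rstorage/$USER/myjob/

# From any compute node: /rstorage is mounted read-only, so you can read data
# written on other nodes, but writes must go to the local /storage.
cp /rstorage/$USER/otherjob/input.root /storage/$USER/myjob/
```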
MP successfully installed AliRoot and AliPhysics using devtoolset-9, although it is unclear whether this is really needed. Using:
aliBuild build AliPhysics --defaults next-root6
To try it out / use it:
source scl_source enable devtoolset-9
export ALIBUILD_WORK_DIR=/home/software/users/ploskon/alice-native/sw
alienv enter AliPhysics/latest-next-root6
Side comment: building within alidock also worked.
Notes:
/home is served from hiccup0
/rstorage is served from hiccupds
hiccupds is a new file server (thanks RNC! - and thanks to RNC and the Neutrino group for the disks)
hiccupds is at 192.168.20.120, just below the last rack of compute nodes
see the compute nodes below (hiccup0 is the login/interactive node)