Introducing Bessemer - our new HPC system

posted 9 Oct 2019, 23:10 by Michael K Griffiths
We are pleased to announce that Bessemer, our new HPC system located in the JISC Shared Data Centre (North), is now available for use. The system has over 1,000 CPU cores available for general use; each compute node has 40 cores and 192 GB of RAM. The system is intended to support high-throughput jobs that can run within a single node (up to 40 cores per job).

Getting started with Bessemer:

If you already have an Iceberg or ShARC account, you can use SSH to log in to bessemer.shef.ac.uk with your normal username and password.
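For example, from a terminal (te1abc here is a placeholder; use your own username):

    ssh te1abc@bessemer.shef.ac.uk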

Additional software will be installed in due course.
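Assuming Bessemer provides software through an environment modules system, as Iceberg and ShARC do (an assumption; check the documentation linked below for Bessemer specifics), installed packages can be listed and loaded like this:

    module avail            # list the software modules installed
    module load <name>      # load a module into your session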

Bessemer uses the Slurm job scheduler. Slurm is similar to the Grid Engine scheduler used on Iceberg and ShARC, but you will need to modify your job scripts to run on Bessemer (a sketch is shown below); for details see https://docs.hpc.shef.ac.uk/en/latest/bessemer/slurm.html
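As a rough sketch, a minimal Slurm batch script for Bessemer might look like the following; the job name, resource requests and program are placeholders, and the exact options supported on Bessemer are described in the documentation linked above:

    #!/bin/bash
    #SBATCH --job-name=my_job       # name shown in the queue
    #SBATCH --nodes=1               # jobs must fit within a single node
    #SBATCH --cpus-per-task=4       # cores for a multi-threaded program
    #SBATCH --mem=8G                # total memory for the job
    #SBATCH --time=01:00:00         # wall-clock limit (hh:mm:ss)

    ./my_program                    # replace with your own program

Submit the script with sbatch and monitor it with squeue:

    sbatch my_job.sh
    squeue -u $USER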

Single jobs spanning multiple nodes will not be supported on Bessemer; large MPI jobs requiring more than 40 cores should continue to be run on ShARC (see the sketch below).
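For comparison, a Grid Engine script on ShARC requesting more than 40 cores might look like the following; the parallel environment name (mpi) and memory resource (rmem) are assumptions, so check the ShARC documentation before use:

    #!/bin/bash
    #$ -pe mpi 80          # 80 cores spread across multiple nodes (PE name assumed)
    #$ -l rmem=2G          # real memory per core (resource name assumed)

    mpirun ./my_mpi_program   # replace with your own MPI program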

Bessemer has its own filestore, separate from Iceberg and ShARC. Home areas on Bessemer have a quota of 100 GB per user and are backed up to a second physical location, with backups kept for 28 days. Bessemer also has its own /fastdata Lustre storage for temporary files, which is not backed up (see the sketch below). There is no /data filestore on Bessemer.
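As a sketch, assuming /fastdata follows the per-user directory convention used on our other clusters (an assumption; check the Bessemer documentation), you could set up a temporary working area like this:

    # create a personal scratch area on the un-backed-up Lustre filestore
    mkdir -p /fastdata/$USER
    cd /fastdata/$USER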

Existing research shared areas can be made available to Bessemer on request, and new research shared areas can be created on Bessemer's storage servers if required.

Important: You should review our posting "Security of your data on the HPC service", which describes the types of data that may appropriately be stored or processed using the University HPC services.

Future of Iceberg

Iceberg is now approaching the end of its life. Following the introduction of Bessemer we will shortly be switching off the oldest nodes on Iceberg, reducing the capacity available for general use by ~800 cores and leaving 1,600 cores. The remaining nodes in Iceberg will continue to be available for one year, and we expect to switch Iceberg off in autumn 2020.
