Research Compute Infrastructure
EDIH Hamburg
The Research Compute Infrastructure (RCI) is a cluster system that offers an efficient and collaboration-friendly solution, especially for large-scale computational tasks. It gives users access to computational resources through a centralized system designed to distribute the workload across multiple interconnected nodes, ensuring fast execution and optimal resource utilization. Whether you are working with massive datasets, running complex simulations, or training deep learning models, RCI offers the power and flexibility needed for high-performance computing.
RCI operates on a network of nodes, each representing an individual machine within the cluster. The nodes are connected by high-speed networking, allowing them to work together to handle demanding workloads.
All nodes in the cluster have CPU cores, RAM, and storage. Some nodes additionally have GPUs, which can be used on demand. Currently, RCI comprises * GPU nodes; besides that, there are * nodes and * head node, which handles login. This node structure allows RCI to efficiently distribute jobs based on the specific needs of each task.
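To get an overview of the nodes and their current state, the standard Slurm query commands can be used from the head node (the node name in the second command is only an illustrative placeholder, not an actual RCI node name):

    # list partitions, node states, and node counts on the cluster
    sinfo

    # show details of a single node, e.g. CPU count, memory, and GPUs
    # ("gpu-node01" is a placeholder name)
    scontrol show node gpu-node01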
As with other cluster systems, running jobs on the head/login node is not allowed. The head node is used for login, job submission, management, and monitoring; it is not to be used for computing. To run computations on the other nodes, you need to submit batch jobs with Slurm. This is an essential part of computing on RCI, so if you have no prior experience with Slurm, please check our Slurm guide or Wikipedia.
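As a rough illustration, a minimal batch script could look like the sketch below; the resource values, file names, and the Python script are placeholders, not actual RCI defaults:

    #!/bin/bash
    #SBATCH --job-name=example-job      # name shown in the queue
    #SBATCH --ntasks=1                  # run a single task
    #SBATCH --cpus-per-task=4           # request 4 CPU cores
    #SBATCH --mem=8G                    # request 8 GB of memory
    #SBATCH --time=01:00:00             # wall-clock limit of one hour
    #SBATCH --output=example-%j.log     # log file, %j expands to the job ID

    # run the actual workload (my_script.py is only a placeholder)
    python my_script.py

The script is then submitted from the head node, and its progress can be followed in the queue:

    sbatch example.sbatch
    squeue -u $USER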
When a job is submitted, the system will evaluate the resources required (e.g., CPU, memory, storage, or GPUs) and select the most appropriate node or set of nodes to execute the task. This ensures that each job runs on the hardware that best suits its resource requirements.
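For example, a job that needs a GPU can state that requirement explicitly so the scheduler places it on one of the GPU nodes. The partition name and resource values below are assumptions for illustration and may differ on RCI:

    #!/bin/bash
    #SBATCH --job-name=gpu-example
    #SBATCH --partition=gpu             # assumed name of the GPU partition
    #SBATCH --gres=gpu:1                # request one GPU on the node
    #SBATCH --cpus-per-task=8           # request 8 CPU cores
    #SBATCH --mem=32G                   # request 32 GB of memory
    #SBATCH --time=04:00:00             # wall-clock limit of four hours

    # train_model.py is a placeholder for the actual workload
    python train_model.py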