Contact


Photo: HZDR / Oliver Killig

Dr. Henrik Schulz

Head of IT Infrastructure
h.schulz@hzdr.de
Phone: +49 351 260 3268

High Performance Computing at HZDR

High Performance Computing (HPC) is one of the core tasks of the Department of IT Infrastructure. To give users the opportunity to run large-scale simulation and data-processing computations at HZDR, the department operates one central Linux cluster.

The Linux Clusters

Logo of the HPC cluster RoSI at HZDR

RoSI

On September 22nd, 2025, the HPC cluster RoSI (Rossendorf Supercomputing Infrastructure) officially began operation as part of an introductory workshop (videos of Day 1 and Day 2).

The cluster runs the Linux distribution Ubuntu, and resource management is handled by the SLURM scheduler. The network interconnect is InfiniBand in the generations HDR (200 Gbit/s) and NDR (up to 800 Gbit/s); the parallel file system /bigdata is also connected through it. Access to the cluster is possible via the login nodes rosi4 and rosi5 as well as via the web access rosi.hzdr.de, which is based on Open OnDemand.
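As a sketch, logging in and getting a first look at the scheduler could look like the following. The full hostname and the username placeholder are assumptions derived from the login node names above; `ssh` and the standard SLURM command `sinfo` require actual cluster access, so they are only printed here:

```shell
# Hypothetical login hostname, assumed from the node names rosi4/rosi5 above.
LOGIN_NODE="rosi4.hzdr.de"

# The commands one would run (shown as strings, since they need cluster access):
# ssh to a login node, then list the available partitions with sinfo.
echo "ssh <username>@$LOGIN_NODE"
echo "sinfo -s   # summary of available partitions"
```

`sinfo -s` prints one line per partition, which is a quick way to check the node groups listed below.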

CPU and GPU compute nodes can be accessed via the following partitions:

Partition     Nodes          CPU                    GPU
cpu-skylake   csk[001-056]   Intel Xeon Gold 6148   —
cpu-rome      cro[001-028]   AMD Epyc 7702          —
cpu-milan     cmi[001-032]   AMD Epyc 7713          —
cpu-genoa     cge[001-070]   AMD Epyc 9654          —
cpu-turin     —              —                      —
gpu-v100      gv[001-032]    —                      Nvidia V100
gpu-a100      ga[001-015]    —                      Nvidia A100
gpu-h100      gh[001-003]    —                      Nvidia H100
gpu-b200      gb008          —                      Nvidia B200
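To illustrate, a minimal SLURM batch script targeting one of the partitions above might look like this. Only the partition name comes from the table; the job name, resource request, walltime, and workload are illustrative assumptions:

```shell
#!/bin/bash
#SBATCH --job-name=example          # illustrative job name
#SBATCH --partition=cpu-genoa       # CPU partition from the table above
#SBATCH --nodes=1                   # assumed resource request
#SBATCH --ntasks-per-node=8
#SBATCH --time=01:00:00             # assumed walltime limit

# The #SBATCH lines are plain shell comments; only the SLURM scheduler
# interprets them as directives. The actual workload follows below.
MSG="Job running on $(hostname)"
echo "$MSG"
```

Such a script would be submitted with `sbatch job.sh` and monitored with `squeue`, both standard SLURM commands.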

hemera

On September 14th, 2018, hemera began operation. It currently contains CPU nodes with 40 Intel Xeon Gold cores each and 28 nodes with 128 AMD Rome cores each, as well as 115 nodes with older Intel CPUs. Furthermore, hemera contains several Nvidia GPUs of the generations P100 and V100.

The network of the hemera cluster is mainly built on EDR InfiniBand (100 Gbit/s) with 2 ports per node. The operating system is CentOS Linux.

The current load of the cluster can be seen via the Ganglia web portal.

An overview of the nodes and queues/partitions of the hemera cluster shows which groups of nodes are accessible. The tutorial explains the basics of usage, and job script examples for standard applications are available for the hemera cluster. The InfoHub provides extensive documentation on hemera and its usage.

Interactive Services

In order to integrate new application scenarios into our HPC resources, some web-based interactive services have been developed which use the clusters via a web frontend.

Jupyter Notebook

Interactive data analysis with Python is easily available via Jupyter Notebook. Several data analysis processes can be logged, saved, and resumed at a later time. For this purpose, the tool can also be seen as a suitable alternative to Matlab.

To use Jupyter Notebook sessions, you need access to the HPC systems. You can apply for it by sending an e-mail to cluster-admin@hzdr.de.

Data Storage

Data can be stored on all HZDR file servers and is available on all login nodes of the HPC clusters. Files that are to be accessed or created by compute jobs on the cluster must be stored in the user's cluster home directory or in a project directory on /bigdata. Project directories are created on request.