
Photo: HZDR / Oliver Killig

Dr. Henrik Schulz

Head IT Infrastructure
Phone: +49 351 260 3268

High Performance Computing at HZDR

High Performance Computing (HPC) is one of the core tasks of the Department of IT Infrastructure. To give users the opportunity to run large-scale computations in the fields of simulation and data processing at HZDR, the department operates one central Linux cluster.

The Linux Cluster

An overview of the nodes and queues/partitions of all HZDR clusters shows which groups of nodes are accessible. The tutorial explains the basics of usage, and job script examples for standard applications are available for the hemera cluster. Extensive documentation on hemera and its usage can be found on the InfoHub.
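As a starting point, a batch job on a Slurm-managed cluster such as hemera is described by a short job script. The sketch below is illustrative only: the partition name and application binary are hypothetical, and the actual queue/partition names for your group are listed in the cluster documentation.

```shell
#!/bin/bash
# Minimal Slurm job script sketch -- names below are examples, not
# the actual hemera configuration.
#SBATCH --job-name=example
#SBATCH --partition=defq          # hypothetical partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40      # e.g. one task per core on a 40-core node
#SBATCH --time=01:00:00

srun ./my_simulation              # hypothetical application binary
```

Such a script would be submitted with `sbatch job.sh`; see the tutorial and the job script examples for the settings that apply on hemera.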

HPC-Cluster Hypnos at HZDR


Hemera started operation on September 14th, 2018. It currently contains 241 CPU nodes (98 nodes with 40 Intel Xeon Gold cores each and 28 nodes with 128 AMD Rome cores each, as well as 115 nodes with older Intel CPUs). Furthermore, hemera contains 350 Nvidia GPUs of the generations K20, K80, P100, V100 and A100.

The network of the hemera cluster is mainly built on EDR InfiniBand (100 Gbit/s) with two ports per node. The operating system is CentOS Linux.

The current load of the cluster can be seen via the Ganglia web portal.

Interactive Services

In order to integrate new application scenarios into our HPC resources, some web-based interactive services have been developed which use the clusters via a web frontend.

Jupyter Notebook

Interactive data analysis using Python is easily available via Jupyter Notebook. Several data analysis processes can be logged, saved and continued at a later time. For this purpose, the tool can also be seen as a suitable alternative to Matlab.
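To illustrate the kind of interactive analysis a notebook session supports, here is a minimal sketch using only the Python standard library; the measurement values are made up for the example.

```python
# Minimal sketch of interactive analysis in a Jupyter Notebook cell.
# The measurement list is an invented example.
import statistics

measurements = [2.1, 2.4, 1.9, 2.6, 2.2, 2.0]

mean = statistics.mean(measurements)
stdev = statistics.stdev(measurements)
print(f"mean={mean:.3f}, stdev={stdev:.3f}")  # → mean=2.200, stdev=0.261
```

In a notebook, each such cell can be re-run, edited and saved together with its output, which is what makes the workflow convenient for exploratory data analysis.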

In order to use Jupyter Notebook sessions, you need access to the HPC systems. You can apply for it by sending an e-mail to

Data Storage

Data can be stored on all HZDR file servers and is available on all login nodes of the HPC clusters. Files that are to be accessed or created by compute jobs on the cluster must be stored in the user's cluster home directory or in a project directory on /bigdata. Project directories are created on request.
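A common pattern is to give each compute job its own output directory below the project directory. The sketch below assumes this layout; the project and job names are hypothetical, and a temporary directory stands in for /bigdata so the example runs anywhere.

```python
# Sketch: a per-job output directory below a project directory.
# "myproject" and the job id are hypothetical examples; real project
# directories on /bigdata are created by the IT department on request.
import tempfile
from pathlib import Path

def job_output_dir(base: str, project: str, job_id: str) -> Path:
    """Create (if needed) and return a results directory for one job."""
    out = Path(base) / project / "results" / job_id
    out.mkdir(parents=True, exist_ok=True)
    return out

# Demonstration with a temporary directory standing in for /bigdata:
with tempfile.TemporaryDirectory() as tmp:
    out = job_output_dir(tmp, "myproject", "12345")
    (out / "result.txt").write_text("done\n")
```

Keeping job output under the project directory, rather than in the home directory, keeps large result files on the storage intended for them.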