High Performance Computing at HZDR
High Performance Computing (HPC) is one of the core tasks of the Department of IT Infrastructure. To give users at HZDR the ability to run large-scale computations in simulation and data processing, the department operates several Linux clusters.
The Linux Clusters
An overview of the nodes and queues of all HZDR clusters shows which groups of nodes are accessible.
The biggest cluster at HZDR is hypnos. It consists of three head nodes and 149 compute nodes, 34 of which are equipped with GPUs.
The compute nodes contain 430 AMD processors (16-core Opterons) and 80 Intel processors (8-core or 16-core Xeons). With 4 GB or 8 GB of main memory per core, the total main memory amounts to 35 TB. In total, hypnos provides around 7,700 CPU cores. Its 164 Nvidia GPUs contain about 360,000 cores.
The network consists of 1 GbE Ethernet plus an InfiniBand fabric with a bandwidth of 40 or 56 Gbit/s, which qualifies hypnos for sequential and massively parallel jobs with high communication rates. The operating system is Ubuntu Linux.
The theoretical peak performance (single precision) amounts to 573.6 TFlop/s (CPU: 87.3 TFlop/s, GPU: 486.3 TFlop/s).
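The total peak figure is simply the sum of the CPU and GPU contributions. A minimal sketch of the arithmetic, using the figures from the text:

```python
# Theoretical single-precision peak performance of hypnos,
# figures as stated in the text above.
cpu_peak_tflops = 87.3   # aggregate CPU peak
gpu_peak_tflops = 486.3  # aggregate GPU peak

total = cpu_peak_tflops + gpu_peak_tflops
print(f"Total peak: {total:.1f} TFlop/s")  # Total peak: 573.6 TFlop/s
```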
The current load of the hypnos cluster can be seen through a web portal.
The second cluster is hydra. It consists of two head nodes and 108 compute nodes, each with two Intel CPUs (8-core or 16-core Xeons). In total, hydra provides more than 2,600 CPU cores and 16 TB of main memory.
The network consists of 1 GbE Ethernet plus an InfiniBand fabric with a bandwidth of 40 Gbit/s, which qualifies hydra for sequential, coarse-grained parallel, and massively parallel jobs. The operating system is likewise Ubuntu Linux. The theoretical peak performance (single precision) amounts to 110 TFlop/s.
The current load of the hydra cluster can be seen through a web portal.
On September 14, 2018, hemera started operation. It currently consists of 68 CPU nodes, each with 40 Intel Xeon Gold cores, and 10 GPU nodes, each with 28 CPU cores and four Nvidia Tesla P100 GPUs.
To integrate new application scenarios into our HPC resources, several web-based interactive services have been developed that use the clusters via a web frontend.
Interactive data analysis with Python is easily available via Jupyter Notebook. Several data analysis sessions can be logged, saved, and resumed at a later time. For this purpose, the tool can also be seen as a suitable alternative to Matlab.
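As an illustration, the following is the kind of short interactive analysis one might run cell by cell in a notebook session. The sample values are hypothetical, and only the Python standard library is used, since the package set installed on the clusters is not stated here:

```python
# Minimal interactive analysis as it might appear in a Jupyter Notebook
# cell; uses only the standard library (statistics module).
import statistics

# Hypothetical measurement series, e.g. loaded from a file on /bigdata.
samples = [4.1, 3.9, 4.3, 4.0, 4.2]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
print(f"mean = {mean:.2f}, stdev = {stdev:.2f}")
```

In a notebook, each of these steps could live in its own cell, so intermediate results stay visible and the session can be saved and resumed later.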
To use Jupyter Notebook sessions, you need access to the HPC systems. You can apply for it by sending an e-mail to firstname.lastname@example.org.
Data can be stored on all HZDR file servers and is available on all login nodes of the HPC clusters. Files that are to be accessed or created by compute jobs on the cluster have to be stored in the user's cluster home directory or in a project directory on /bigdata. Project directories are created on request.
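A small sketch of how a compute job might build its output paths under a project directory on /bigdata. The directory names "myproject" and "username" are hypothetical placeholders; actual project directories are created by the IT department on request:

```python
# Build output paths under a directory that compute nodes can access:
# either the user's cluster home directory or a project directory on
# /bigdata. Both example paths below are hypothetical.
import os.path

project_dir = "/bigdata/myproject"   # hypothetical project directory
home_dir = "/home/username"          # hypothetical cluster home directory

def job_output_path(base, run_id):
    """Return the output file path for a given run under `base`."""
    return os.path.join(base, "results", f"run-{run_id:03d}.txt")

print(job_output_path(project_dir, 1))  # /bigdata/myproject/results/run-001.txt
```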