High Performance Computing at HZDR
High Performance Computing (HPC) is one of the core tasks of the Department of IT Infrastructure. To enable users to run large-scale computations for simulation and data processing at HZDR, the department operates two Linux clusters.
The Linux Clusters
An overview of the nodes and queues/partitions of all HZDR clusters shows which groups of nodes are accessible. The tutorial explains the basics of cluster usage, and job script examples for standard applications are available for the hemera cluster.
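To illustrate what such a job script looks like, here is a minimal sketch of a Slurm batch script as it might be used on hemera. The partition name, module names, and application name are assumptions for illustration only; the actual values must be taken from the cluster documentation.

```shell
#!/bin/bash
#SBATCH --job-name=example      # job name shown in the queue
#SBATCH --partition=defq        # hypothetical partition name; list real ones with sinfo
#SBATCH --nodes=1               # number of compute nodes
#SBATCH --ntasks=40             # one task per core on a 40-core hemera CPU node
#SBATCH --time=01:00:00         # wall-clock time limit
#SBATCH --output=job-%j.out     # output file, %j is replaced by the job ID

module load gcc openmpi         # hypothetical module names; check module avail
srun ./my_app                   # launch the (hypothetical) MPI application
```

The script would be submitted with `sbatch jobscript.sh`, and `squeue -u $USER` shows its status in the queue.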
The hypnos cluster currently consists of three head nodes and 115 compute nodes.
The compute nodes contain 388 AMD Opteron 16-core processors and 34 Intel Xeon 8-core or 16-core processors. Each core is equipped with 4 GB or 8 GB of main memory, amounting to 25 TB in total. Around 6800 CPU cores are available on hypnos.
The nodes are connected by 1 GbE Ethernet and additionally by an InfiniBand fabric with a bandwidth of 40 or 56 Gbit/s, which qualifies hypnos for both sequential jobs and massively parallel jobs with high communication rates. The operating system is Ubuntu Linux.
The current load of the hypnos cluster can be viewed via a web portal.
On September 14, 2018, hemera started operation. It currently comprises 88 CPU nodes, each with 40 Intel Xeon Gold cores, and 24 GPU nodes, each with 28 cores and four Nvidia Tesla P100 or V100 GPUs. In addition, the K20 and K80 GPU compute nodes of the hypnos cluster and the Intel-based compute nodes of the hydra cluster have been integrated into hemera. This adds up to 258 compute nodes, 50 of which are equipped with GPUs.
The network of the hemera cluster is mainly built on EDR InfiniBand (100 Gbit/s) with two ports per node. The operating system is CentOS Linux.
To integrate new application scenarios into our HPC resources, several web-based interactive services have been developed which access the clusters via a web frontend.
Interactive data analysis with Python is readily available via Jupyter Notebook. Several data analysis processes can be logged, saved, and resumed at a later time. For this purpose, the tool can also be regarded as a suitable alternative to Matlab.
In order to use Jupyter Notebook sessions, you need access to the HPC systems. You can apply for it by sending an e-mail to firstname.lastname@example.org.
Data can be stored on all HZDR file servers and is available on all login nodes of the HPC clusters. Files that are to be accessed or created by compute jobs on the cluster must be stored in the user's cluster home directory or in a project directory on /bigdata. Project directories are created on request.