PRACE Resources

The PRACE Research Infrastructure (RI) provides access to persistent, distributed, world-class pan-European HPC computing and data management resources and services. Expertise in the efficient use of these resources is available through participating centers throughout Europe.
Available resources are announced for each Call for Proposals.

PRACE production systems (in alphabetical order of the systems’ names):

Hazel Hen, GCS@HLRS, Germany

© Simon Sommer for HLRS

Hazel Hen, a Cray XC40 system (the upgrade of the former Hornet system), is designed for sustained application performance and highly scalable applications. It delivers a peak performance of 7.42 Petaflops. The system comprises 7 712 compute nodes with a total of 185 088 Intel Haswell E5-2680 v3 compute cores. Hazel Hen features 965 terabytes of main memory and a total of 11 petabytes of storage capacity, spread over 32 additional cabinets containing more than 8 300 disk drives. Input/output rates are approximately 350 gigabytes per second. For technical assistance: prace-support@hlrs.de
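
A back-of-the-envelope check (not from the HLRS documentation): the quoted 7.42 Petaflops is consistent with the quoted core count if one assumes the E5-2680 v3 base clock of 2.5 GHz and Haswell's 16 double-precision FLOPs per cycle per core.

    # Hedged sanity check of Hazel Hen's quoted peak performance.
    # Assumptions (not stated above): 2.5 GHz base clock, 16 DP FLOPs/cycle/core (2x AVX2 FMA).
    cores = 185_088                       # total Haswell cores, as quoted
    peak_pflops = cores * 2.5e9 * 16 / 1e15
    print(f"{peak_pflops:.2f} PFlop/s")   # ~7.40, close to the quoted 7.42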

JOLIOT CURIE, GENCI@CEA, France

© CEA

JOLIOT CURIE, owned by GENCI, is located in France at the Très Grand Centre de Calcul (TGCC) operated by CEA near Paris. JOLIOT CURIE is an Atos/BULL Sequana X1000 system based on a balanced architecture (compute, memory, network and I/O) with two compute partitions:
SKL (standard x86)

  • 1 656 compute nodes, each with dual Intel Skylake 8168 24-core 2.7 GHz processors, for a total of 79 488 cores and a peak performance of 6.86 PFlop/s
  • 192 GB of DDR4 memory per node (4 GB/core)
  • InfiniBand EDR interconnect

KNL (manycore x86)

  • 828 Intel KNL 7250 nodes, each with a 1.4 GHz 68-core processor and 16 GB of MCDRAM, for a total peak performance of 2.52 PFlop/s
  • 96 GB of DDR4 memory per node
  • BULL BXI high-speed interconnect
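
The quoted peak figures for both partitions can be reproduced from the node counts and clock rates above, assuming 32 double-precision FLOPs per cycle per core (two AVX-512 FMA units) on both the Skylake 8168 and the KNL processors; that per-core rate is an assumption, not taken from the TGCC documentation.

    # Hedged peak-performance check for the two JOLIOT CURIE partitions.
    # Assumption: 32 DP FLOPs/cycle/core (dual AVX-512 FMA) on both SKL 8168 and KNL.
    skl_pflops = 1_656 * 48 * 2.7e9 * 32 / 1e15   # 1 656 nodes x 48 cores x 2.7 GHz
    knl_pflops = 828 * 68 * 1.4e9 * 32 / 1e15     # 828 nodes x 68 cores x 1.4 GHz
    print(f"SKL: {skl_pflops:.2f} PFlop/s")       # ~6.87, quoted as 6.86
    print(f"KNL: {knl_pflops:.2f} PFlop/s")       # ~2.52, matching the quoted 2.52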

In addition, 25 nodes are available for post-processing and remote visualisation, with access to a 500 GB/s multi-level Lustre filesystem. For technical assistance: hotline.tgcc@cea.fr

JUWELS, GCS@FZJ, Germany

© Forschungszentrum Jülich / R.-U. Limbach

JUQUEEN's successor, the Jülich Wizard for European Leadership Science (JUWELS), is a milestone on the road to a new generation of ultra-flexible modular supercomputers targeting a broader range of tasks, from big data applications right up to compute-intensive simulations. With its first module alone, JUWELS qualified as the fastest German computer on the TOP500 list of the world's fastest supercomputers at the time. The Cluster module, supplied in spring 2018 by the French IT company Atos in cooperation with software specialists at the German enterprise ParTec, is equipped with Intel Xeon 24-core Skylake CPUs and excels with its versatility and ease of use. It has a theoretical peak performance of 12 petaflop/s, which is equivalent to the performance of 60 000 state-of-the-art PCs. The nodes are connected by a Mellanox InfiniBand high-speed network. Another distinctive feature of the module is its novel, ultra-energy-efficient warm-water cooling system.
For technical assistance: sc@fz-juelich.de

MARCONI, CINECA, Italy

© CINECA

CINECA's Tier-0 system, MARCONI, has provided access to PRACE users since July 2016. The MARCONI system is equipped with Intel Xeon processors and has two partitions:

  • Marconi – Broadwell (A1 partition) consists of ~7 Lenovo NeXtScale racks with 72 nodes per rack. Each node contains two 18-core Intel Broadwell processors and 128 GB of DDR4 RAM.
  • Marconi – KNL (A2 partition) was deployed at the end of 2016 and consists of 3 600 Intel server nodes integrated by Lenovo. Each node contains 1 Intel Knights Landing processor with 68 cores, 16 GB of MCDRAM and 96 GB of DDR4 RAM.
The entire system is connected via the Intel OmniPath network. The global peak performance of the Marconi system is 13 Petaflops. In Q3 2017 the MARCONI Broadwell partition is scheduled to be replaced by a new partition based on Intel Skylake processors and the Lenovo Stark architecture, bringing the total computational power to more than 20 Petaflops. For technical assistance: superc@cineca.it
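
As a hedged estimate (the per-core rate below is an assumption, not a CINECA figure), the KNL partition alone accounts for most of the 13 Petaflops aggregate peak if each Knights Landing core sustains 32 double-precision FLOPs per cycle:

    # Hedged estimate of the Marconi A2 (KNL) contribution to the ~13 Petaflops aggregate peak.
    # Assumption: 32 DP FLOPs/cycle/core (dual AVX-512 FMA units on Knights Landing).
    a2_pflops = 3_600 * 68 * 1.4e9 * 32 / 1e15    # 3 600 nodes x 68 cores x 1.4 GHz
    print(f"A2 (KNL): {a2_pflops:.2f} PFlop/s")   # ~10.97 of the ~13 Petaflops system peak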

MareNostrum 4 Supercomputer, BSC, Spain

© BSC

The MareNostrum 4 supercomputer is hosted by BSC in Barcelona, Spain.

MareNostrum 4 is based on Intel's latest-generation general-purpose Xeon processors running at 2.1 GHz (two 24-core CPUs per node, i.e. 48 cores/node), with 2 GB/core and 240 GB of local SSD storage acting as local /tmp. The system comprises 48 racks, each with 72 compute nodes, for a total of 3 456 nodes. Slightly more than 200 nodes have 8 GB/core. All nodes are interconnected through an Intel Omni-Path 100 Gbit/s network with a non-blocking fat-tree topology.
MareNostrum 4 has a peak performance of 11.14 Petaflops. For technical assistance: support@bsc.es
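
The quoted 11.14 Petaflops is consistent with the node and core counts above if the 2.1 GHz cores deliver 32 double-precision FLOPs per cycle (two AVX-512 FMA units); that per-core rate is an assumption, not taken from the BSC documentation.

    # Hedged check of MareNostrum 4's quoted peak performance.
    # Assumption: 32 DP FLOPs/cycle/core (dual AVX-512 FMA).
    peak_pflops = 3_456 * 48 * 2.1e9 * 32 / 1e15   # 3 456 nodes x 48 cores x 2.1 GHz
    print(f"{peak_pflops:.2f} PFlop/s")            # ~11.15, close to the quoted 11.14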

 

Piz Daint, ETH Zurich/CSCS, Switzerland

© CSCS

The Piz Daint supercomputer is a Cray XC50 system and the flagship system of CSCS, the Swiss National Supercomputing Centre, in Lugano.

Piz Daint is a hybrid Cray XC50 system with 4 400 nodes available to the User Lab. Each compute node is equipped with an Intel® Xeon® E5-2690 v3 @ 2.60 GHz (12 cores, 64 GB RAM) and an NVIDIA® Tesla® P100 with 16 GB of memory. The nodes are connected by Cray's proprietary “Aries” interconnect with a dragonfly network topology. Please visit the CSCS website for further information. For technical questions: help(at)cscs.ch

SuperMUC-NG, GCS@LRZ, Germany

© LRZ

SuperMUC-NG is the Tier-0 supercomputer at the Leibniz Supercomputing Centre (Leibniz-Rechenzentrum, LRZ) of the Bavarian Academy of Sciences and Humanities in Garching near Munich, Germany. It provides resources to PRACE via
the German Gauss Centre for Supercomputing (GCS).
SuperMUC-NG consists of 6 336 thin nodes (96 GB of memory each) and 144 fat nodes (768 GB of memory each), equipped with Intel Skylake processors, each node having 48 cores. All 311 040 compute cores together, connected by an Intel OmniPath interconnect network with a fat-tree topology, deliver a peak performance of 26.9 PFlop/s.
The parallel filesystem (IBM Spectrum Scale, GPFS) has a capacity of 50 PByte with 500 GByte/s I/O bandwidth.
For long-term data storage, 20 PByte of capacity with 70 GByte/s bandwidth are available. The programming environment comprises Linux (SLES12 SP3), Intel Parallel Studio and OpenHPC. An OpenStack compute cloud is attached to SuperMUC-NG.
SuperMUC-NG is cooled with hot water at up to 50 °C. The heat removal efficiency is 97 %.
An Energy Aware Scheduling system further assists in saving energy. Adsorption chillers reuse the waste heat to generate cooling for other components.
The LINPACK performance of SuperMUC-NG was measured at 19.5 PFlop/s, positioning SuperMUC-NG at number 8 on the November 2018 TOP500 list of the world's fastest supercomputers.
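
As a hedged consistency check, the node counts above reproduce the 311 040-core total, the quoted peak corresponds to roughly 2.7 GHz per core at an assumed 32 double-precision FLOPs per cycle (two AVX-512 FMA units), and the measured LINPACK result is about 72 % of peak.

    # Hedged consistency check for SuperMUC-NG's core count, implied clock and LINPACK efficiency.
    cores = (6_336 + 144) * 48                    # thin + fat nodes, 48 cores each
    peak_pflops, linpack_pflops = 26.9, 19.5
    implied_clock_ghz = peak_pflops * 1e15 / (cores * 32) / 1e9   # assumes 32 DP FLOPs/cycle/core
    print(cores)                                                  # 311040, as quoted
    print(f"implied clock: {implied_clock_ghz:.2f} GHz")          # ~2.70 GHz
    print(f"LINPACK efficiency: {linpack_pflops / peak_pflops:.0%}")  # ~72 %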

For technical assistance: lrzpost@lrz.de or https://servicedesk.lrz.de/?lang=en