PRACE Resources

The PRACE Research Infrastructure (RI) provides access to distributed, persistent, pan-European, world-class HPC computing and data management resources and services. Expertise in the efficient use of these resources is available through participating centres throughout Europe.
Available resources are announced for each Call for Proposals.

PRACE Production systems (in alphabetical order of the systems’ names):



CURIE is a supercomputer of GENCI, located in France at the Très Grand Centre de Calcul (TGCC) operated by CEA near Paris. CURIE is a Bull x86 system based on a balanced architecture of 5,040 thin nodes (each with 2 sockets of Intel Sandy Bridge 8-core processors), with more than 320 TB of distributed memory and 15 PB of shared disk served through Lustre, which can be accessed at an input/output rate of more than 250 Gigabyte per second. CURIE delivers a peak performance of 1.75 Petaflop/s (1.75 million billion operations per second).
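
As a rough sanity check of the quoted peak, a minimal sketch assuming 2.7 GHz Sandy Bridge cores and 8 double-precision FLOPs per cycle with AVX (neither figure is stated above):

    # CURIE thin-node peak, back-of-the-envelope (clock and FLOPs/cycle are assumptions)
    nodes, sockets, cores_per_socket = 5040, 2, 8
    clock_ghz = 2.7          # assumed Sandy Bridge clock, not stated above
    dp_flops_per_cycle = 8   # double precision with AVX (assumption)
    peak_pflops = nodes * sockets * cores_per_socket * clock_ghz * dp_flops_per_cycle / 1e6
    print(round(peak_pflops, 2))   # ~1.74, consistent with the quoted 1.75 PFlop/s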

For technical assistance:


Italian supercomputer systems have complemented the PRACE infrastructure since spring 2012.

CINECA’s new Tier-0 system, MARCONI, has provided access to PRACE users since July 2016. The MARCONI system is equipped with the new Intel Xeon processors and has two different partitions:

  • The first partition is based on the Lenovo NeXtScale architecture and is equipped with Intel Xeon E5-2600 v4 (Broadwell) processors. It consists of 1,512 nodes, each with 2 Intel Broadwell processors (2.3 GHz, 36 cores per node) and 128 GB of DDR4 RAM.
  • The second partition (available in Q4 2016) is based on the Lenovo Adam Pass architecture and is equipped with the new Intel Knights Landing (KNL) BIN1 processors. It consists of 3,600 nodes, each with 1 KNL processor at 1.4 GHz and 96 GB of DDR4 RAM. Each KNL has 68 cores and 16 GB of MCDRAM.

The entire system is connected via the Intel Omni-Path network. The global peak performance of the MARCONI system is 13 PFlop/s. In Q3 2017 the MARCONI Broadwell partition will be replaced by a new one based on Intel Skylake processors and the Lenovo Stark architecture, bringing the total computational power to more than 20 PFlop/s.
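
The 13 PFlop/s figure is roughly the sum of the two partitions. A minimal sketch, assuming 16 double-precision FLOPs per cycle per Broadwell core (AVX2 with FMA) and 32 per KNL core (two AVX-512 FMA units); these per-cycle rates are architectural assumptions, not taken from the text above:

    # MARCONI peak as the sum of its two partitions (per-cycle rates are assumptions)
    broadwell_gflops = 1512 * 36 * 2.3 * 16    # nodes * cores/node * GHz * DP FLOPs/cycle
    knl_gflops       = 3600 * 68 * 1.4 * 32
    print(round((broadwell_gflops + knl_gflops) / 1e6, 1))   # ~13.0 PFlop/s, matching the quoted total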

For technical assistance:

Hazel Hen

HLRS’s new Hazel Hen system is powered by the latest Intel Xeon processor technology and the Cray Aries interconnect, leveraging the Dragonfly network topology. The installation encompasses 41 system cabinets hosting 7,712 compute nodes with a total of 185,088 Intel Haswell E5-2680 v3 compute cores. Hazel Hen features 965 Terabyte of main memory and a total of 11 Petabyte of storage capacity spread over 32 additional cabinets hosting more than 8,300 disk drives, which can be accessed at an input/output rate of more than 350 Gigabyte per second.

For technical assistance:
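
The core count quoted above follows directly from the node count, assuming two 12-core Haswell E5-2680 v3 sockets per node (the per-node socket and core counts are assumptions, not stated above):

    # Hazel Hen core count from the node count (sockets and cores/socket assumed)
    nodes = 7712
    cores = nodes * 2 * 12   # 2 sockets/node, 12 cores per E5-2680 v3 (assumption)
    print(cores)             # 185088, matching the figure quoted above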


Since 1 November 2012, the Gauss Centre for Supercomputing has provided access to JUQUEEN, an IBM Blue Gene/Q system at Forschungszentrum Jülich (FZJ) in Jülich, Germany. Systems of this type are currently the most energy-efficient supercomputers according to the Green500 list. JUQUEEN has an overall peak performance of 5.87 Petaflop/s. It consists of 28 racks; each rack comprises 1,024 nodes (16,384 processing cores). The main memory amounts to 458 TB. More information is available on JUQUEEN’s home page.
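
The quoted core count, memory and peak are mutually consistent. A minimal sketch, assuming 16 GB of memory per node and the Blue Gene/Q core’s 1.6 GHz clock with 8 double-precision FLOPs per cycle (these per-node and per-core figures are assumptions, not stated above):

    # JUQUEEN totals from the rack/node counts (per-node memory, clock, FLOPs/cycle assumed)
    nodes = 28 * 1024                     # 28 racks of 1,024 nodes = 28,672 nodes
    cores = nodes * 16                    # 458,752 cores in total
    memory_gb = nodes * 16                # 458,752 GB, i.e. the quoted ~458 TB (16 GB/node assumed)
    peak_pflops = cores * 1.6 * 8 / 1e6   # 1.6 GHz, 8 DP FLOPs/cycle (assumptions)
    print(cores, memory_gb, round(peak_pflops, 2))   # 458752 458752 5.87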

For technical assistance:



IBM System x iDataPlex – MareNostrum – hosted by BSC in Barcelona, Spain.

MareNostrum is based on Intel Sandy Bridge EP processors at 2.6 GHz (eight-core), with 2 GB/core (32 GB/node) and around 500 GB of local disk acting as local /tmp. All compute nodes are interconnected through an InfiniBand FDR10 network with a non-blocking fat-tree topology, and the system has a peak performance of 1 Petaflop/s.

MareNostrum also provides a rack with Intel MIC accelerators. The configuration per MIC node is listed below; a rough per-node peak estimate follows the list:

  • 2 Intel Xeon E5-2670 processors @ 2.60 GHz (8 cores/processor)
  • 64 GB of RAM
  • 2 Intel Xeon Phi 5110P coprocessors
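
A rough per-node peak sketch under stated assumptions: each Xeon Phi 5110P delivers about 1.01 TFlop/s double precision (60 cores at 1.053 GHz, 16 DP FLOPs per cycle) and each host E5-2670 about 0.17 TFlop/s (8 cores at 2.6 GHz, 8 DP FLOPs per cycle); the coprocessor figures and per-cycle rates are assumptions, not taken from the list above.

    # Approximate double-precision peak of one MIC node (coprocessor specs assumed)
    phi_tflops  = 60 * 1.053 * 16 / 1000   # Xeon Phi 5110P: ~1.01 TFlop/s (assumption)
    host_tflops = 8 * 2.6 * 8 / 1000       # Xeon E5-2670: ~0.17 TFlop/s
    print(round(2 * phi_tflops + 2 * host_tflops, 2))   # ~2.35 TFlop/s per node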

For technical assistance:

Piz Daint

The Piz Daint supercomputer is a Cray XC30 system and the flagship system at CSCS, the Swiss National Supercomputing Centre in Lugano.
Piz Daint has a computing power of 7.8 Petaflops, i.e. 7.8 quadrillion mathematical operations per second. Piz Daint can compute in one day more than a modern laptop could compute in 900 years.
This supercomputer is a 28-cabinet Cray XC30 system with a total of 5,272 compute nodes. Each compute node is equipped with an 8-core 64-bit Intel Sandy Bridge CPU (Intel Xeon E5-2670), an NVIDIA Tesla K20X with 6 GB of GDDR5 memory, and 32 GB of host memory. The nodes are connected by Cray’s proprietary “Aries” interconnect with a dragonfly network topology. Please visit the CSCS website for further information.
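
The 7.8 Petaflops figure is consistent with the node count above, assuming a double-precision peak of about 1.31 TFlop/s per Tesla K20X and a 2.6 GHz host CPU with 8 DP FLOPs per cycle (the GPU peak, host clock and per-cycle rate are assumptions, not stated above):

    # Piz Daint system peak from the per-node peaks (GPU peak and CPU rate assumed)
    nodes = 5272
    gpu_tflops = 1.31                 # NVIDIA Tesla K20X, double precision (assumption)
    cpu_tflops = 8 * 2.6 * 8 / 1000   # 8-core E5-2670 at 2.6 GHz, 8 DP FLOPs/cycle (assumptions)
    print(round(nodes * (gpu_tflops + cpu_tflops) / 1000, 1))   # ~7.8 PFlop/s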

For technical questions: help(at)


SuperMUC is the Tier-0 supercomputer at the Leibniz Supercomputing Centre (LRZ) in Garching, Germany. It provides resources to PRACE via the German Gauss Centre for Supercomputing.
The system is an IBM System x iDataPlex based on Intel Sandy Bridge EP processors. SuperMUC has a peak performance of 3.2 PFlop/s and consists of 18 islands, each combining 512 compute nodes with 16 physical cores and 32 GB of memory per node. The nodes are connected by a non-blocking fat tree based on InfiniBand FDR10. Additionally, an island with 205 nodes and 256 GB per node is available. With its innovative warm-water cooling system, SuperMUC is one of the most energy-efficient supercomputers in the world. For parallel I/O, SuperMUC provides 10 PByte of storage with IBM’s GPFS file system. The LoadLeveler batch queueing system provides novel energy tags to adjust the CPU clock speed and optimize the “energy to solution”.
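
The 3.2 PFlop/s figure for the thin-node islands follows from the island layout, assuming 2.7 GHz Sandy Bridge cores with 8 double-precision FLOPs per cycle (clock and per-cycle rate are assumptions, not stated above):

    # SuperMUC thin-node peak from the island layout (clock and FLOPs/cycle assumed)
    cores = 18 * 512 * 16                 # 18 islands * 512 nodes * 16 cores = 147,456
    peak_pflops = cores * 2.7 * 8 / 1e6   # 2.7 GHz, 8 DP FLOPs/cycle (assumptions)
    print(round(peak_pflops, 1))          # ~3.2, consistent with the quoted peak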

For technical assistance:
