Updates of Member Systems

PRACE Members are invited to send information on their new systems as soon as it becomes available to:

Marjolein Oorsprong
Communications Officer
PRACE aisbl
Rue du Trône 98
1050 Bruxelles, BELGIUM
Tel.: +32 2 613 09 27
E-mail: M.Oorsprong[at]staff.prace-ri.eu

France – (June 2018)
JOLIOT CURIE of GENCI is located in France at the Très Grand Centre de Calcul (TGCC) operated by CEA near Paris. JOLIOT CURIE, the successor of Curie, with a first tranche starting service in Q2 2018, is an Atos/BULL Sequana X1000 system based on a balanced architecture with two compute partitions:
SKL:
• 1,656 dual-processor Intel Skylake 8168 nodes at 2.7 GHz, 24 cores per processor, i.e. 79,488 cores for 6.86 PFlop/s peak
• 192 GB of DDR4 memory per node
• InfiniBand EDR interconnect
KNL:
• 828 Intel Knights Landing 7250 manycore nodes at 1.4 GHz, 68 cores per processor, i.e. 56,304 cores for 3 PFlop/s peak
• 96 GB of DDR4 memory plus 16 GB of MCDRAM per node
• Atos BXI interconnect
Both partitions have access to a multi-level Lustre filesystem delivering 500 GB/s.
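
As a quick sanity check of the SKL partition figures, the sketch below recomputes the core count and peak performance. The 32 double-precision FLOP per cycle per core (two AVX-512 FMA units) is an assumption about how the vendor peak is derived, not a figure stated above.

```python
# Sanity check of the JOLIOT CURIE SKL partition figures quoted above.
nodes = 1656
sockets_per_node = 2
cores_per_socket = 24
clock_ghz = 2.7
flop_per_cycle = 32  # assumed: 2 AVX-512 FMA units x 8 doubles x 2 ops per FMA

cores = nodes * sockets_per_node * cores_per_socket
peak_pflops = cores * clock_ghz * flop_per_cycle / 1e6  # GFlop/s -> PFlop/s

print(f"{cores:,} cores")                 # 79,488 cores
print(f"{peak_pflops:.2f} PFlop/s peak")  # 6.87, matching the quoted 6.86
```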

Germany – (June 2018)
JUWELS is a milestone on the road to a new generation of ultra-flexible modular supercomputers targeting a broad range of tasks – from big-data applications right up to compute-intensive simulations. With its first module alone, JUWELS qualified as the fastest German computer on the TOP500 list of the world's fastest supercomputers published today. The system is financed and used within the framework of the Gauss Centre for Supercomputing, which is funded by the federal government and the states in which the centre is located. The Cluster module, supplied in spring 2018 by the French IT company Atos in cooperation with software specialists at the German enterprise ParTec, is equipped with 24-core Intel Xeon Skylake CPUs and excels with its versatility and ease of use. It has a theoretical peak performance of 12 petaflop/s, equivalent to the performance of 60,000 state-of-the-art PCs. The nodes are connected by a Mellanox InfiniBand high-speed network.

Bulgaria – (November 2015)

The new heterogeneous supercomputer AVITOHOL has been in operation at IICT-BAS since the end of 2015. The system is one of the prototypes of heterogeneous tera- and petaflops supercomputers, built on an architecture of loosely coupled systems with strongly integrated nodes. Deployed by HP, it consists of 150 HP Cluster Platform SL250S Gen8 servers. Each compute node has 2 Intel Xeon E5-2650 v2 CPUs @ 2.6 GHz (8 cores each), 64 GB RAM and 2 Intel Xeon Phi 7120P coprocessors (61 cores each), for a total of 20,700 cores. Additionally, there are 4 I/O nodes serving 96 TB of disk storage. Nodes are interconnected by a 56 Gbps InfiniBand FDR non-blocking fat-tree network. AVITOHOL currently has the following capabilities:
• theoretical peak performance of 412.32 TFlop/s;
• LINPACK performance of 264.2 TFlop/s;
• total memory of 9,600 GB and total disk storage of 96 TB.
It is ranked 389th on the November 2015 TOP500 list (http://www.top500.org).
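
The quoted totals can be reconstructed from the node configuration. In the sketch below, the 8 double-precision FLOP per cycle for the Ivy Bridge CPUs and the roughly 1.208 TFlop/s peak per Xeon Phi 7120P are assumptions, not figures stated above.

```python
# Reconstruction of AVITOHOL's quoted core count and peak performance.
nodes = 150
cpu_cores = nodes * 2 * 8    # two 8-core Xeon E5-2650 v2 per node
phi_cores = nodes * 2 * 61   # two 61-core Xeon Phi 7120P per node
print(cpu_cores + phi_cores)                # 20700 cores, as quoted

cpu_peak_tf = cpu_cores * 2.6 * 8 / 1e3     # assumed 8 DP FLOP/cycle (AVX)
phi_peak_tf = nodes * 2 * 1.208             # assumed ~1.208 TFlop/s per Phi
print(round(cpu_peak_tf + phi_peak_tf, 2))  # 412.32 TFlop/s, as quoted
```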

Germany – (October 2015)
The new Hazel Hen system at the High Performance Computing Center Stuttgart (HLRS) is powered by the latest Intel Xeon processor technologies and the Cray Aries interconnect, leveraging the Dragonfly network topology. The installation encompasses 41 system cabinets hosting 7,712 compute nodes with a total of 185,088 Intel Haswell E5-2680 v3 compute cores. Hazel Hen features 965 terabytes of main memory and a total of 11 petabytes of storage capacity spread over 32 additional cabinets hosting more than 8,300 disk drives, which can be accessed at an input/output rate of more than 350 gigabytes per second.
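
The core count follows directly from the node configuration, and the memory figure is consistent with 128 GiB per node. Both the dual-socket 12-core layout and the per-node memory in the sketch below are assumptions consistent with the quoted totals, not figures stated above.

```python
# Consistency check of the Hazel Hen figures quoted above.
nodes = 7712
cores = nodes * 2 * 12        # assumed dual 12-core E5-2680 v3 per node
mem_tib = nodes * 128 / 1024  # assumed 128 GiB of memory per node

print(f"{cores:,} cores")     # 185,088 cores, as quoted
print(f"{mem_tib:.0f} TiB")   # 964 TiB, in line with the quoted 965 terabytes
```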

Finland – (September 2014)
CSC’s Sisu underwent a major upgrade in August. The size of the system almost quadrupled and the processors were upgraded to Intel Haswell server processors. Sisu’s theoretical peak computing power has increased to 1.7 petaflop/s.

Czech Republic – (October 2013)
The Anselm cluster consists of 209 compute nodes, totalling 3,344 compute cores with 15 TB of RAM and giving over 94 TFlop/s theoretical peak performance. Each node is a powerful x86-64 computer equipped with Intel Sandy Bridge processors (16 cores), at least 64 GB of RAM and a 500 GB hard drive. Nodes are interconnected by a fully non-blocking fat-tree InfiniBand network. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators.
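
A similar back-of-the-envelope check works for Anselm. Because the text gives only a lower bound of 64 GB of RAM per node, the sketch below yields a lower bound on total memory rather than the quoted 15 TB.

```python
# Quick check of the Anselm figures quoted above.
nodes = 209
cores = nodes * 16             # 3,344 cores, as quoted
min_ram_tb = nodes * 64 / 1e3  # ~13.4 TB lower bound; the quoted 15 TB
                               # implies some nodes carry more than 64 GB

print(f"{cores:,} cores, at least {min_ram_tb:.1f} TB RAM")
```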

France – (June 2012)
A new Tier-1 system designed by IBM at IDRIS was announced: a BG/Q system with 800 TFlop/s and a Sandy Bridge-based system of 230 TFlop/s, coupled together.

Germany – (June 2012)
The SuperMUC supercomputer was installed at LRZ, and the LINPACK results were published at ISC’12. The system will be available to PRACE users from September 2012, as scheduled in the 4th regular call.

A new BG/Q system was purchased at JSC; it is already running and in production.

Netherlands – (January 2013)
SURFsara has selected Bull to deliver the new Dutch national supercomputer. The new Bull system will become operational in the first half of 2013, and by June 2013 it is expected to provide approximately 270 teraflop/s of peak performance:

  • 32 fat nodes, each with four 8-core Intel Sandy Bridge processors, 256 GB of memory and an InfiniBand interconnect.
  • an undisclosed number of thin nodes with Intel Ivy Bridge processors, 64 GB of memory and an InfiniBand interconnect.

Starting in the second half of 2014, the system will be extended with Intel Haswell-based thin nodes and will surpass the petaflop/s frontier.
