Statistics

The Council of PRACE has unanimously approved the publication of various PRACE user statistics. The tables and graphs below show the awarded projects per country and the awarded hours per country. PRACE will update these statistics periodically, adding information on each successive Call for Proposals as it becomes available.

Last update: October 2016

The graph below shows the following data per country:

  • Projects awarded, as a percentage of all projects awarded (blue bars)
  • Resources awarded, as a percentage of all resources awarded (orange bars)

The graph includes data from the Early Access Call (EAC) as well as from PRACE Project Access Calls 1 through 13.

[Figure: stats-chart-2016-10 — projects awarded (%) and resources awarded (%) per country]

Awarded projects per country

To attribute a PRACE-awarded project to a country, a “mixed metric” is used:

  • The nationality of the principal investigator (i.e. the country of the PI’s research centre) counts for 30%
  • The countries of all the centres participating in the project count for 70%
  • Each centre counts for an equal share of the 70%. If two departments of the same university participate, they count as two centres.

An example

A project is led by a PI from Germany. There are 3 collaborating centres: one from Spain, one from the United Kingdom, and one from the United States. The 4 participating countries are counted as follows:

Country  Count                                               Total
Germany  30% for the PI + 0.25 of 70% for the collaboration  47.5%
Spain    0.25 of 70% for the collaboration                   17.5%
UK       0.25 of 70% for the collaboration                   17.5%
USA      0.25 of 70% for the collaboration                   17.5%

These calculations are made for each project. The results per country are added up and divided by the total number of awarded projects.
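
A minimal sketch of this mixed metric in Python; the function name and the country codes are illustrative, not part of any PRACE tooling.

```python
from collections import Counter

PI_WEIGHT = 0.30      # nationality of the principal investigator
COLLAB_WEIGHT = 0.70  # shared equally among all participating centres

def mixed_metric(pi_country, centre_countries):
    """Return each country's share of one awarded project.

    pi_country       -- country of the PI's research centre
    centre_countries -- one entry per participating centre (two departments
                        of the same university count as two centres)
    """
    shares = Counter({pi_country: PI_WEIGHT})
    per_centre = COLLAB_WEIGHT / len(centre_countries)
    for country in centre_countries:
        shares[country] += per_centre
    return shares

# The worked example above: a German PI whose own centre participates,
# plus collaborating centres in Spain, the UK and the USA.
print(mixed_metric("DE", ["DE", "ES", "UK", "US"]))
# -> DE: 0.475, ES: 0.175, UK: 0.175, US: 0.175 (up to float rounding)
```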

Awarded core hours per country

The awarded core hours per country are weighted by the Linpack benchmark of each system. The following steps are taken to obtain each country’s percentage:

1) First, a Linpack benchmark value is calculated for each system.
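
A plausible form, assuming the benchmark value is expressed as Linpack performance per core (the symbols below are illustrative notation, not PRACE’s own):

$$L_s = \frac{R_{\mathrm{max},s}}{N_s}$$

where $R_{\mathrm{max},s}$ is the measured Linpack performance of system $s$ (e.g. in Tflop/s) and $N_s$ is its number of cores.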

2) Second, the value of the Linpack benchmark is multiplied by the total core hours awarded to the project on each system (this can be one system or several).
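
In the same illustrative notation, writing $H_{p,s}$ for the core hours awarded to project $p$ on system $s$, this step would give a weighted value

$$W_{p,s} = L_s \cdot H_{p,s},$$

with one such term per system on which the project received hours.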

3) The outcome of this calculation is then multiplied by the percentage for the system’s hosting country (obtained via the “mixed metric” above) to arrive at the performance per system.

4) The performances in each project for each system are then added up per country and divided by the grand total of weighted hours over all systems, yielding each country’s percentage.
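
A minimal sketch of steps 1–4 in Python, under assumptions beyond the text above: the benchmark value is taken as Linpack performance per core, the split in step 3 uses the project’s mixed-metric country shares, and the division in step 4 uses the grand total of the weighted values. All names are illustrative.

```python
# Sketch only: the per-core benchmark form and the normalisation by the
# grand total of weighted values are assumptions, not PRACE's published code.

def linpack_value(rmax_tflops, cores):
    # Step 1 (assumed form): Linpack performance per core.
    return rmax_tflops / cores

def awarded_share_per_country(awards):
    """Percentage of awarded core hours attributed to each country.

    awards -- one (rmax_tflops, cores, core_hours, country_shares) tuple
              per (project, system) pair; country_shares is the
              mixed-metric mapping from the sketch above.
    """
    weighted_by_country = {}
    grand_total = 0.0
    for rmax_tflops, cores, core_hours, country_shares in awards:
        # Step 2: weight the awarded core hours by the system's benchmark.
        weighted = linpack_value(rmax_tflops, cores) * core_hours
        grand_total += weighted
        # Step 3: split the weighted value using the mixed-metric shares.
        for country, share in country_shares.items():
            weighted_by_country[country] = (
                weighted_by_country.get(country, 0.0) + weighted * share
            )
    # Step 4: normalise by the grand total to obtain percentages.
    return {c: 100.0 * v / grand_total for c, v in weighted_by_country.items()}
```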

Please note that the PRACE Council has appointed a working group to develop a benchmark based on a balanced set of Tier-0 applications with real scientific user data. This new benchmark will replace the Linpack benchmark, which is application-naïve, does not represent the performance of scientific codes, and is sensitive to overclocking and accelerators.

Disclaimer and copyright

PRACE publishes the above statistics under the principle of transparency of its Peer Review Process. Re-publication of these statistics is allowed as long as the source is correctly referenced. PRACE cannot take responsibility for errors and omissions in quotations by third parties.

Contact

If you wish to obtain further information about these statistics or the PRACE Peer Review Process, please contact the PRACE Board of Directors via bod[at]prace-ri.eu.
