The 2013 PRACE Scientific Conference will be held on Sunday, 16 June, in Leipzig, Germany from 9:00 – 18:00 at the Congress Center Leipzig, Hall 4.
Top European scientists present results and advances in large-scale simulations obtained with the support of PRACE, the Partnership for Advanced Computing in Europe. The implementation of the European HPC Strategy is presented by Konstantinos Glinos, Head of the European Commission's e-Infrastructures unit. PRACE services for science and industry are highlighted by Sergi Girona, Chair of the PRACE Board of Directors.
PRACE scientific and industry partners are cordially invited to participate, discuss and learn from colleagues about the services available, and to identify opportunities for future PRACE-supported projects.
After the sessions PRACE invites you to a get-together.
PRACE partners again with ISC, the International Supercomputing Conference, maximizing the value for PRACE Scientific Conference and ISC’13 attendees. The PRACE Scientific Conference fee is 40 Euros. Participants are kindly asked to register via the ISC registration.
For more information please contact praceday2013-oc (at) fz-juelich.de.
|09:15||High Performance Computing: implementing the strategy |
Kostas Glinos, Head of Unit, eInfrastructures, European Commission
The most powerful computational infrastructure is needed to address scientific and societal grand challenges, such as understanding the human brain or climate change, as well as for industry to innovate in products and services. Further, many technologies developed for leading-edge HPC find their way into consumer products within five years or so, with evident benefits for those who developed them. The most advanced countries in the world recognise this strategic role of HPC and have announced ambitious plans for building and deploying state-of-the-art supercomputers.
Europe has the technological know-how and market size to play a leading role in all areas: HPC technologies and systems, services and applications. Recognising the importance of HPC, the European Commission published its HPC Strategy on 15 February, “High Performance Computing: Europe’s place in a Global Race”. This is an integrated strategy that combines three elements: (a) developing the next generations of HPC towards exascale; (b) providing access to the best facilities and services for both industry and academia; and (c) achieving excellence in applications. These three elements are not independent and should work in synergy.
The Commission has proposed an ambitious programme reflecting the Union’s support for research and innovation in the coming years. This programme covers research, technological development, demonstration and innovation for the seven-year period 2014-2020. We expect that support for the HPC Strategy will be reflected in the Horizon 2020 work programme.
|09:45||PRACE: strategy for a sustainable and persistent infrastructure |
Sergi Girona, PRACE AISBL
He holds a PhD in Computer Science from the Technical University of Catalonia. When EASi Engineering was founded in 2001, Sergi became the company's Director for Spain and the R&D Director for the German headquarters. In 2004, he joined BSC for the installation of MareNostrum in Barcelona; MareNostrum was the largest supercomputer in Europe at that time, and it maintained this position for three years. Sergi was responsible for the site preparation and for the coordination with IBM on the system installation. Currently, he manages the Operations group, with responsibility for user support and system administration of the different HPC systems at BSC.
The PRACE Council unanimously approved, in February 2013, the strategy for PRACE 2.0, moving towards a sustainable HPC infrastructure for Europe. The success of PRACE in offering access and support to users, organising the PRACE Advanced Training Centres, granting access to industry, and piloting new methods of access and services for SMEs are among the success stories that make the case for continuing with a persistent infrastructure.
|10:15||PRACE ISC Award Winner:|
591 TFLOPS Multi-Trillion Particles Simulation on SuperMUC
Wolfgang Eckhardt, TU München (Germany)
Anticipating large-scale molecular dynamics (MD) simulations in nano-fluidics, we conduct performance and scalability studies of an optimized version of the code ls1 Mardyn. We present our implementation, which requires only 32 bytes per molecule and allows us to run what is, to our knowledge, the largest MD simulation to date. We explain our optimizations tailored to the Intel Sandy Bridge processor, including vectorization as well as shared-memory parallelization to make use of Hyper-Threading. Finally, we present results for weak and strong scaling experiments on up to 146,016 cores of SuperMUC at the Leibniz Supercomputing Centre, achieving a speed-up of 133,000, which corresponds to an absolute performance of 591.2 TFLOPS.
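To put the headline figures in per-core terms, a quick back-of-the-envelope calculation (ours, not the authors'; both input numbers are taken from the abstract above) gives the sustained throughput per core:

```python
# Sanity check of the quoted figures: 591.2 TFLOPS aggregate sustained
# performance over 146,016 SuperMUC cores.
total_tflops = 591.2        # aggregate sustained performance, from the abstract
cores = 146_016             # cores used in the strong-scaling run
per_core_gflops = total_tflops * 1e3 / cores  # convert TFLOPS to GFLOPS
print(f"{per_core_gflops:.2f} GFLOPS sustained per core")
```

This works out to roughly 4 GFLOPS per core, a plausible sustained fraction of a Sandy Bridge core's peak.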
About PRACE ISC Award
PRACE, the Partnership for Advanced Computing in Europe, awards a prize to the best paper submitted to the ISC Research Paper Sessions and the PRACE Scientific Conference in one of the following areas:
- a breakthrough in science achieved through high performance computing;
- an algorithm or implementation that achieves a significant improvement in scalability or performance;
- a novel approach to performance evaluation on a massively parallel architecture.
The PRACE Scientific Steering Committee (SSC) selected the paper to receive the PRACE ISC Award, which will be announced at the ISC Opening Session and presented during the Research Paper Sessions on Monday, June 17, 2013. The winner of the PRACE ISC Award receives sponsorship for participation in a training event or a conference relevant to petascale computing.
Session 2: 11:30 – 13:15
|11:30||Porting and optimisation of the Met Office Unified Model on Petascale architectures |
Pier Luigi Vidale, University of Reading (UK)
Since 2007 Vidale has also led the NERC High-Resolution Climate Modelling programme (previously UK-HiGEM and UK-Japan Climate Collaboration). More recently, he has become the NERC Principal Investigator for high-resolution climate modelling within the Joint Weather and Climate Research Programme, a partnership between the Met Office and NERC.
Vidale has been the Director of the NCAS Climate Modelling Summer School, held bi-annually at Cambridge, since its first edition, in 2007. In 2012 he became the co-Director (with C. Reich, MPI, Germany) of the first E2SCMS Summer School, which was held in Kos, Greece, June 2012.
We present porting, optimisation and scaling results from our work with the United Kingdom’s Unified Model on a number of massively parallel architectures: the UK MONSooN and HECToR systems, and the German HERMIT and French Curie supercomputers, part of the Partnership for Advanced Computing in Europe (PRACE). The model code used for this project is a configuration of the Met Office Unified Model (MetUM) called Global Atmosphere GA3.0, in its climate mode (HadGEM3; Walters et al., 2011; Malcolm et al., 2010). The atmospheric dynamical core uses a semi-implicit, semi-Lagrangian scheme. The model grid is spherical (a lat/lon grid) and polar filtering is applied around the two singularities. For the configuration used on PRACE, with a horizontal grid spacing of 25 km (N512) and 85 vertical levels up to 85 km, we use a 10-minute time step. Initial conditions are derived from fully balanced coupled experiments at lower resolution, and atmosphere/land-surface perturbations are imposed using standard Met Office tools for ensemble initialisation.

Initial development occurred on MONSooN, a joint NERC-Met Office facility with 29 IBM-P6 nodes, using up to 12 nodes. In parallel with this activity, we tested the model on the NERC/EPSRC supercomputer HECToR (Cray XE6), using 1,536 to 24,576 cores. The scaling breakthrough came after implementing hybrid parallelism: OpenMP and MPI. The N512 model scales effectively up to 12,244 cores and has now been successfully ported to the PRACE Tier-0 systems (Curie and HERMIT), where it is operated in ensemble mode.

Current developments include extensions to 17 km and 12 km grid spacing (N768 and N1024), which make use of up to 96 nodes on the new Met Office IBM-P7 system. The next UM dynamical core, “EndGame”, offers scaling improvements, with good performance on twice the current number of cores, by altering the horizontal and vertical grid stagger as well as eliminating the need for polar filtering.
Project performed within the context of the Budapest-Marseille-Wuppertal collaboration, which is a team of scientists from Eötvös U. Budapest, Centre de Physique Théorique (CNRS/INP and Aix-Marseille U.), Bergische Universität Wuppertal and Forschungszentrum Jülich.
The stability and very existence of ordinary matter rely on the fact that neutrons are slightly more massive than protons. If the reverse were true, protons would eventually decay radioactively into neutrons, and atoms would not form. This tiny mass difference is believed to be the result of two competing effects. On the one hand, the mass of the electrically charged proton is augmented, with respect to that of the neutral neutron, by the energy carried in the electromagnetic field surrounding it. On the other hand, the mass of the neutron is enhanced because the sum of the masses of its constituents (one up and two down quarks) is larger than it is for the proton (composed of one down and two up quarks).
In this project we take a first step in incorporating these two small but important effects into the theoretical description of the interactions of up, down and strange quarks. The difficulty with computing in this theory stems from the fact that quarks interact in a highly nonlinear fashion, so much so that they cannot be isolated and are confined within particles, such as the proton and neutron, known as hadrons. The only known way to account for these nonlinear interactions systematically is to solve numerically the very complex equations of the theory of the strong interaction, quantum chromodynamics (QCD), within a computational framework known as lattice QCD. Thus we incorporate quantum electrodynamics (QED) and up-down mass-difference effects directly into large scale numerical lattice QCD computations. Using this approach, we calculate the consequences of these effects on the masses of protons, neutrons and other hadrons. We further use these techniques to determine the masses of the up and down quarks, as well as other important quantities. Up to small effects that we are presently attempting to control, this framework can be made to include all of the physics required to describe the world of hadrons and atomic nuclei.
|12:45||Singlet physics – the missing link to precision lattice QCD |
Karl Jansen, Zeuthen, DESY and John von Neumann-Institute for Computing (Germany)
In QCD there are a number of so-called singlet quantities, such as the mass of the η’ particle or the scalar quark content of the nucleon, where the valence quarks interact solely through the gluons. This is in contrast to standard particles such as the pion or the proton, where the valence quarks interact directly with each other. Although these singlet quantities are therefore very important for understanding the strong interactions, they are very hard to tackle in lattice simulations since they show a very unfavorable signal-to-noise ratio. As a consequence, enormous statistics would be needed to determine the properties of singlet quantities quantitatively. In this presentation we show how algorithmic developments in this project have significantly improved the situation, and give results for the η’ mass, the scalar quark content and beyond.
Session 3: 14:15 – 15:45
|14:15||Accurate quantum chemistry calculations for chromophores in photoactive proteins |
Emanuele Coccia, Università degli Studi dell’Aquila (Italy)
Photoactive proteins regulate a wide range of biological processes, from light detection in the eyes to energy conversion in photosynthesis. The involved chromophores are usually large conjugated organic moieties of about 50-100 atoms absorbing in the visible range. The geometrical features of these molecules strongly affect their optical properties.
A valuable aid to understanding photoactivated events is the comparison between experimental data (electronic and vibrational spectroscopy) and high-level quantum mechanical calculations. Density Functional Theory (DFT) methods are often not accurate enough to properly evaluate the structural properties of conjugated systems, whereas the use of post-Hartree-Fock techniques is limited to relatively small molecules. Quantum Monte Carlo (QMC) methods are a mature technique for the study of the electronic structure of correlated molecular systems. QMC algorithms are highly parallel in nature and, thanks to their relatively small memory requirements even for large systems, they show excellent performance and scalability on High Performance Computing facilities.
We have performed Variational Monte Carlo (VMC) calculations for the geometry optimization of two biological chromophores: the retinal protonated Schiff base in Rhodopsin and the peridinin carotenoid in Peridinin-Chlorophyll Protein, a Light-Harvesting Complex. Structural analysis has been enriched by excited state investigations. Our results indicate that VMC is a powerful tool to tackle large chromophores in their protein environment, opening new perspectives on accurate ab initio studies in photoreceptors, beyond DFT and size-limited post Hartree-Fock approaches.
|14:45||A New DNA Structural Motif: the G-Triplex |
Vittorio Limongelli, University of Naples “Federico II” (Italy)
Structural variations from the canonical Watson-Crick double helix have specific roles in important cellular processes like DNA packaging, transcription and replication. These DNA structures are sequence-directed, constituting an alternative layer of the genetic code. Therefore, revealing new DNA structural motifs provides the molecular bases for elucidating novel cellular functional mechanisms, and the means to interact with them. Combining advanced computations and experiments, we have identified a new DNA structural motif, named the “G-triplex”. The G-triplex can be formed in guanine-rich regions of the genome and is characterized by the formation of G:G:G triad planes stabilized by Hoogsteen-like hydrogen bonds. This is the first time that DNA has been found to assume this topology, highlighting once more the high polymorphism of DNA polymers. The abundance of guanine-rich regions in the genome, and the possibility of exploiting G-triplex structures in the development of new therapeutic agents, make this discovery of great interest for future studies.
|15:15||The molecular bases of the transport cycle of APC antiporters |
Modesto Orozco, Institute for Research in Biomedicine, Barcelona (Spain)
|Professor of Biochemistry and Molecular Biology. University of Barcelona||2000-|
|Principal Investigator. Institute for Research in Biomedicine Barcelona (IRB)||2004-|
|Director Department of Life Sciences. Barcelona Supercomputing Center (BSC)||2005-|
|Director Joint IRB-BSC Program on Computational Biology||2006-|
|Associate Professor of Biochemistry and Molecular Biology. University of Barcelona||1991-2000|
|Associate Researcher. Department of Chemistry. Yale University.||1991-93|
Amino acids cross cell membranes with the mediation of amino acid transporters; at least 8 families of amino acid transporters are present in mammals. One of these families corresponds to the Heteromeric Amino acid Transporters (HAT), whose light subunits act as the catalytic moiety of HAT and belong to the prokaryotic and eukaryotic LAT subfamily within the APC (Amino acids, Polyamines and organoCations) superfamily of transporters. Recent structural developments have shown that APC transporters (AdiC and ApcT) share the same protein fold with sequence-unrelated transporters from 4 protein families (e.g., LeuT, vSGLT, Mhp1 and BetP).
The AdiC transporter exchanges extracellular arginine for intracellular agmatine, thereby acting as a virtual proton pump. The molecular basis of the transport cycle of AdiC is a fundamental question in the transport field. Among the “5-5 inverted repeat” fold transporters with solved atomic structure, AdiC is the only one with an obligatory exchange mechanism (antiporter), whereas the others are Na+- or H+-coupled transporters (11). Experimental evidence suggests that the binding of a single molecule of substrate is necessary and sufficient to trigger the conformational changes that result in substrate translocation through AdiC.
As the system consists of a large number of atoms (~300,000), the use of the Tier-0 computer resources provided by PRACE was essential in order to tackle this challenging problem. In the talk, I will report and discuss the results of our calculations, performed to derive an atomistic mechanism connecting conformational changes with the movement of the amino acid substrate along a putative translocation pathway in the AdiC antiporter.
Session 4: 16:15 – 17:15
|16:15||Three-dimensional Simulations of Thermonuclear Supernova Explosions |
Ivo Seitenzahl, University of Würzburg and Max-Planck-Institut für Astrophysik (Germany)
In 2008 Ivo returned to Germany for a postdoc position at the Max Planck Institute for Astrophysics (MPA) in Garching, where he continued utilizing HPC resources for three-dimensional hydrodynamic simulations of thermonuclear supernova explosions within the group of Prof. Dr. Friedrich Roepke. Since 2012 he has been working for the graduate school “Theoretical Astrophysics and Particle Physics” at the University of Würzburg.
In 2011 the Nobel Prize in Physics was awarded to Perlmutter, Riess, and Schmidt “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae”. These supernovae are thought to be the thermonuclear explosions of white dwarf stars. In spite of their importance to cosmology, however, the exact nature of the progenitor systems and the explosion mechanism are still unknown. One of the leading models among theorists, the delayed detonation of a massive carbon-oxygen white dwarf star, involves a complex interplay of explosive nuclear fusion and turbulent hydrodynamics. To shed light on the viability of this explosion scenario, the utilization of HPC resources is paramount. I will report on our three-dimensional hydrodynamical and radiative transfer simulations of the explosion process and the predicted observables.
|16:45||Structure and evolution of an active region on the Sun |
Hardi Peter, Max Planck Institute for Solar System Research (Germany)
The Sun and other cool stars are surrounded by a hot outer atmosphere. While the heating of this million-degree corona is clearly related to the magnetic field of the host star, our understanding of the structure and dynamics of the corona that forms in response to the magnetic activity is still limited. In close interaction with the analysis of solar observations, numerical experiments provide a pivotal tool to unveil the mechanisms that govern the processes sustaining the hot corona. Using large-scale magneto-hydrodynamic models, we can describe how the magnetic field and the plasma in the solar atmosphere interact with each other, and synthesize the expected coronal emission. This allows a direct comparison with actual observations of the Sun and thus provides a crucial test of the underlying physics. In a particular example, we describe the coupling of a model of the convection in the solar interior with the dynamics in the corona, which allows us to study in unprecedented detail the evolution of a newly emerging active region on the Sun. This simulation enables us to understand when, where and why the observed coronal structures form.