PRACE projects being supported on JUGENE (IBM Blue Gene/P) at FZJ, Jülich, Germany (in alphabetical order of project leaders)
Simulation of electron transport in organic solar cell materials
Project leader: Jochen Blumberger, University College London, London, UK
Collaborators: Harald Oberhofer, University of Cambridge, Cambridge, UK
Organic solar cells are envisaged as a promising alternative to silicon-based solar cells. They are cheap and easy to produce, light and flexible, and easily deployed on windows, walls or roofs. However, their small light-to-electricity conversion efficiencies and limited durability have so far prevented widespread use and commercialization. One reason for the low conversion efficiencies is (among others) the recombination of the photogenerated electrons with the holes before they reach the electrode.
Here we propose to carry out massively parallel computations to achieve a step change in our microscopic understanding of electron transport in organic solar cell materials. To this end we will use a dedicated electronic structure method that we have recently implemented in the Car-Parrinello molecular dynamics (CPMD) code. The latter is well parallelized, based on MPI, and can take advantage of petascale architectures.
We will focus on the characterization of the electron transport properties of fullerene derivatives that form the n-type semiconducting layer in current organic solar cell devices. This involves the following steps: (i) Calculation of the microscopic Marcus parameters (electronic coupling, driving force and reorganization free energy) for electron transfer between individual fullerene molecules in a nanocrystal. (ii) Using the data generated in (i), we will compute the experimentally observable electron mobilities based on a hopping model for electron transfer. Steps (i) and (ii) will be carried out (a) for different external electric field strengths and (b) for unmodified (C60) and chemically modified fullerenes.
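The hopping picture in step (ii) can be sketched as follows. This is a minimal illustration, not the project's code: it assumes the standard non-adiabatic Marcus rate expression and the Einstein relation, and all parameter values are made up for demonstration (the actual couplings and reorganization energies would come from the constrained-DFT calculations in step (i)).

```python
import math

KB = 8.617333262e-5      # Boltzmann constant in eV/K
HBAR = 6.582119569e-16   # reduced Planck constant in eV*s

def marcus_rate(coupling, driving_force, reorg, T=300.0):
    """Non-adiabatic Marcus electron-transfer rate.

    All energies (coupling H_ab, driving force dG, reorganization energy
    lambda) in eV; returns a rate in 1/s.
    """
    kt = KB * T
    prefactor = (2.0 * math.pi / HBAR) * coupling**2
    activation = math.exp(-(driving_force + reorg)**2 / (4.0 * reorg * kt))
    return prefactor * activation / math.sqrt(4.0 * math.pi * reorg * kt)

# Illustrative numbers for a single fullerene-fullerene hop (NOT project data):
k_et = marcus_rate(coupling=0.01, driving_force=0.0, reorg=0.15)

# Einstein relation: hopping diffusion constant D = k * a^2 / 6 for an
# isotropic hop of length a, then mobility mu = D / (kB T).
a = 1.0e-7                  # hop distance in cm (~1 nm)
D = k_et * a**2 / 6.0       # cm^2/s
mu = D / (KB * 300.0)       # cm^2 / (V s)
```

A realistic calculation would sum over all neighbor pairs in the nanocrystal and solve a master equation rather than use a single averaged hop, but the rate expression per pair is the same.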
Advancing the state-of-the-art
Previous calculations of Marcus parameters were typically carried out in the gas phase, neglecting polarization effects due to the condensed-phase environment. The latter are considered to be important, though, especially for the polarizable organic solar cell materials considered here. In previous work we have implemented a dedicated method, termed constrained density functional theory, in the CPMD code, allowing us to calculate the Marcus parameters in condensed-phase environments. However, such realistic calculations require large unit cells, pushing the number of valence electrons to be explicitly included in the computations to about 5000. Calculations at this level will provide benchmark data of unprecedented quality, but they are feasible only on tier-0 resources.
Resource awarded: 24 660 000 core-hours
Excess proton at water/hydrophobic interfaces: A Car-Parrinello MD study
Project leader: Paolo Carloni, German Research School for Simulation Sciences GmbH, Jülich, Germany
Collaborators: Emiliano Ippoliti, German Research School for Simulation Sciences GmbH, Jülich, Germany / Yana Vereshchaga, German Research School for Simulation Sciences GmbH, Jülich, Germany
Recent experimental evidence shows unambiguously that, under ambient conditions, excess protons in liquid mixtures are located close to hydrophobic surfaces, contrary to intuitive expectations. Shedding light on the structural and energetic facets of this issue is crucial for correctly describing key biochemical processes such as protein folding and ligand/target interactions. So far, mostly classical modeling or empirical quantum-mechanical methods have been used. The latter have suggested that the process is driven by enthalpy, which overcompensates the entropy penalty. The first-principles studies reported so far did not address the energetics of the process. Here we plan to perform ab initio molecular dynamics of an excess proton in the presence of a water/decane mixture as used experimentally. We plan to calculate the free energy of the process using thermodynamic integration and to determine at which distance from the surface the proton is most likely to localize.
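Thermodynamic integration estimates a free-energy difference by running a series of simulations along a coupling parameter and integrating the ensemble-averaged derivative of the Hamiltonian. A minimal sketch of the post-processing step, with entirely made-up window data (the real input would be the averages from the Car-Parrinello runs):

```python
import numpy as np

def thermodynamic_integration(lambdas, mean_dH_dlambda):
    """Free-energy difference via TI: Delta F = integral_0^1 <dH/dlambda> dlambda.

    lambdas          -- coupling-parameter values of the MD windows
    mean_dH_dlambda  -- ensemble average of dH/dlambda from each window
    Uses the trapezoidal rule over the window averages.
    """
    x = np.asarray(lambdas, dtype=float)
    y = np.asarray(mean_dH_dlambda, dtype=float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Illustrative data (NOT from the simulations): five windows along a path
# that moves the excess proton away from the hydrophobic surface.
lam = np.linspace(0.0, 1.0, 5)
dhdl = np.array([-2.0, -1.2, -0.3, 0.4, 1.1])   # e.g. kcal/mol, made up
delta_F = thermodynamic_integration(lam, dhdl)
```

In practice each window average must be converged over long sampling, and the statistical error of each `<dH/dlambda>` propagates into the integral.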
Resource awarded: 40 468 480 core-hours
Parallel space-time approach to turbulence: computation of unstable periodic orbits and the dynamical zeta function
Project leader: Peter Coveney, University College London, London, UK
Collaborators: Luis Fazendeiro, University College London, London, UK / Bruce Boghosian, Tufts University, Massachusetts, USA
The challenge of predicting the properties of turbulent fluids is one of the most important fundamental problems still facing current research, sometimes hailed as the last great unsolved problem of classical mechanics. It is of the utmost practical relevance in areas as diverse as weather forecasting, transport and dispersion of pollutants, gas flows in engines, blood circulation and cosmology. In this work, we intend to apply a novel methodology that efficiently utilizes petascale resources in the computation of turbulent quantities from first principles.
The computational cost of solving the Navier-Stokes equations (NSE), which describe fluid turbulence, typically scales as a power of a non-dimensional parameter known as the Reynolds number. In turbulent flow this quantity is often of the order of many millions, quickly ruling out Direct Numerical Simulation (DNS) of these equations. An alternative approach, based on the modern mathematical theory of dynamical systems and followed in this work, is to identify the unstable periodic orbits (UPOs), from which the statistical properties of the NSE in the turbulent regime can then be computed in an exact and systematic fashion.
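The cost scaling mentioned above can be made concrete with the classical Kolmogorov estimates for homogeneous turbulence, under which the grid-point count grows roughly as Re^(9/4) and the total work roughly as Re^3. A small sketch (the exponents are the textbook estimates, not figures from this project):

```python
def dns_cost_estimate(reynolds):
    """Classical Kolmogorov scaling estimates for DNS of homogeneous turbulence:
    grid points ~ Re^(9/4) (resolving down to the dissipation scale in 3D),
    total floating-point work ~ Re^3 (grid points times time steps)."""
    grid_points = reynolds ** 2.25
    operations = reynolds ** 3
    return grid_points, operations

# Raising Re from 1e4 to 1e6 multiplies the work by a factor of ~1e6,
# which is why DNS is infeasible at the Reynolds numbers (millions)
# encountered in practice.
pts_low, ops_low = dns_cost_estimate(1.0e4)
pts_high, ops_high = dns_cost_estimate(1.0e6)
```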
The importance of these orbits to the description of turbulence has been known for some time. However, only recently, with the emergence of petascale resources, has it become feasible to use a space-time approach to compute these, due to the substantial memory requirements involved. This is a novel approach that has huge transformational potential for the area of turbulence studies and very likely for other forced dissipative dynamical systems as well. Algorithms such as these, which efficiently parallelize time as well as space, are going to be increasingly relevant to the new generation of petascale machines, as well as for the next generation of exascale computational resources.
For this purpose we have developed a fully parallel software package, called HYPO4D, recipient of the TeraGrid'08 5K scalability award for its impressive parallel performance. Previous work by us, on a range of supercomputing resources in both the UK and the USA, with special emphasis on the Blue Gene/P machine at Argonne National Laboratory, has demonstrated the validity and efficiency of our approach through the computation of the first UPOs in such high-dimensional systems. We now aim to extend this work in order to perform the first reliable calculations of turbulent quantities from first principles using this novel methodology. The largest UPO we intend to study will require in excess of 200,000 cores, making access to JUGENE essential for us.
Resource awarded: 17 000 000 core-hours
QCD Thermodynamics with 2+1+1 improved dynamical flavors
Project leader: Zoltán Fodor, Bergische Universitaet Wuppertal, Wuppertal, Germany
Collaborators: Szabolcs Borsányi, Bergische Universitaet Wuppertal, Wuppertal, Germany / Craig McNeile, Bergische Universitaet Wuppertal, Wuppertal, Germany / Stephan Durr, Bergische Universitaet Wuppertal, Wuppertal, Germany / Christian Hölbling, Bergische Universitaet Wuppertal, Wuppertal, Germany / Stefan Krieg, Bergische Universitaet Wuppertal, Wuppertal, Germany / Thorsten Kurth, Bergische Universitaet Wuppertal, Wuppertal, Germany / Claudia Ratti, Bergische Universitaet Wuppertal, Wuppertal, Germany / Christopher Schroeder, Bergische Universitaet Wuppertal, Wuppertal, Germany / Kálmán Szabó, Eötvös Loránd University, Budapest, Hungary / Gergely Endrödi, Eötvös Loránd University, Budapest, Hungary / Sandor Katz, Eötvös Loránd University, Budapest, Hungary / Balint Tóth, University of Pécs, Pécs, Hungary
Everyday matter is made up of atoms, and these in turn of a “cloud” of electrons “orbiting” the atomic nucleus. It is an everyday event that electrons are removed or added to this “cloud” in chemical processes, even in the human body. The atomic nuclei, however, are usually stable, certainly in chemical processes, with the exception of the occasional (natural) radioactive decay.
In the hot early universe temperatures were high enough such that no nuclei could exist, but only a “soup” of their building blocks, called quarks and gluons. This state or form of matter is generally referred to as quark-gluon plasma. As the universe, during its expansion, cooled down below a certain temperature scale, quarks and gluons combined to form e.g. protons and neutrons which in turn then combined to form the atomic nuclei that we see all around us. With our simulations we can go “back in time” to the early universe and to temperatures close to this transition.
We simulate the (quantum) theory describing the “strong” interactions between quarks and gluons, called quantum chromodynamics (QCD). In the past we have successfully used complex and demanding simulations to calculate the temperature scale, at which the above transition occurred. Now, in our new project, we will again use simulations “to go back in time” to analyze more closely the properties of strongly interacting matter under “extreme conditions”.
This is not a purely theoretical enterprise: there are several experiments all over Europe and the World, e.g. LHC at CERN (Geneva), FAIR at GSI (Darmstadt), or RHIC at BNL (Upton, New York), where this state of matter can or will be created. Our computations will then be used to help the involved physicists to understand their findings and to analyze their results.
In particular, we will determine the equation of state of the QCD matter with the inclusion of four dynamical quark flavors. This goes beyond previous studies by providing a self-consistent description of the charmed particles on sufficiently fine lattices.
Resource awarded: 63 000 000 core-hours
Ab initio Simulations of Turbulence in Fusion Plasmas
Project leader: Frank Jenko, Max Planck Institute for Plasma Physics (IPP), Garching, Germany
Collaborators: Tobias Görler, Max Planck Institute for Plasma Physics (IPP), Garching, Germany / Florian Merz, Max Planck Institute for Plasma Physics (IPP), Garching, Germany / Daniel Told, Max Planck Institute for Plasma Physics (IPP), Garching, Germany / Moritz Johannes Pueschel, Max Planck Institute for Plasma Physics (IPP), Garching, Germany / Stephan Brunner, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland / Michael Barnes, Trinity College, University of Oxford, Oxford, UK
The worldwide demand for electricity is projected to increase by about a factor of six over the course of the 21st century. At the same time, the use of fossil fuels will have to be reduced. At present, we still have no silver bullet with which to close this gap. Fusion energy is an attractive option in this context, and high-performance simulations play a key role in its further development. The present project represents an important contribution to the European effort to employ Petascale (and later Exascale) computing for fusion energy applications. Its main goal is to use the latest version of the plasma turbulence code GENE to perform a number of millennium-type simulations which are closely linked to the international flagship fusion experiment ITER – one of the most challenging scientific projects to date and currently under construction in Southern France. The first target is to perform the first physically comprehensive simulations of the ASDEX Upgrade tokamak at Garching, presently one of the world's leading fusion devices and conceptually very similar to ITER, although about four times smaller in its linear dimensions. The second question to be addressed is the dependence of the turbulent transport on the system size. Given that ITER is expected to benefit from being significantly larger than all existing fusion devices, this is a key issue for ensuring the project's success. Since the theoretical understanding of this dependence is still far from complete, these simulations will provide very valuable insights.
Resource awarded: 50 000 000 core-hours
Providing fundamental laws for weather and climate models
Project leader: Harmen Jonker, Delft University, Delft, The Netherlands
This project aims to provide the growth-rate law for the evolution of atmospheric boundary layers. For weather, climate, and air quality models, it is of vital importance to correctly forecast the evolution of the boundary layer, which grows in time due to daytime heating and wind-shear. Both (surface) heating and wind-shear produce strong turbulence, which in turn mixes heat, momentum, and bio(chemical) species originating from the surface, over the entire depth of the boundary layer; hence, any inaccurate prediction of the boundary layer height results in flawed predictions of scalar concentrations, e.g., temperature, humidity, greenhouse gases and pollutants.
Growth-rate laws as represented in weather/climate models can be traced back to classical laboratory experiments in which the atmospheric situation was mimicked at laboratory scale using water-tank and/or wind-tunnel facilities.
This project has the ambitious goal of creating the computational analog of the classical lab experiments, while achieving an enormous improvement in accuracy, controllability, and “size”. As such it will lay a more solid foundation for the growth-rate laws that need to be implemented in atmospheric models.
For a large part the project will build on our results obtained within the DEISA Extreme Computing Initiative project PINNACLE, which studied the shear-free (windless) atmospheric boundary layer developing under daytime heating. We follow the scientific strategy of PINNACLE by conducting ground-truth Direct Numerical Simulation (DNS), a methodology that employs no empirical rules as it fully resolves the entire spatial spectrum of turbulence. However, in the proposed simulations we will study the combined impact of both wind-shear and daytime heating on the growth-rate of the boundary layer – situations that are much more realistic than the wind-free case. Although one cannot simulate the full spectrum of atmospheric turbulence, PRACE Tier-0 resources not only allow one to faithfully mimic the classical laboratory experiments, but also to extend the spectral range of turbulence by one to two orders of magnitude, which is crucial for extrapolating the obtained results to the atmospheric situation.
The data gathered in this project will be made publicly available so as to serve as benchmark for atmospheric models.
Resource awarded: 35 000 000 core-hours
Plasmoid Dynamics in Magnetic Reconnection
Project leader: Nuno Loureiro, Instituto Superior Técnico, Lisbon, Portugal
Collaborators: Dmitri Uzdensky, University of Colorado, Boulder, USA / Alexander Schekochihin, University of Oxford, Oxford, UK / Ravi Samtaney, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
Magnetic reconnection is one of the most fundamental and important processes in plasmas. It governs the dissipation of magnetic energy and its conversion into plasma thermal and kinetic energy and into nonthermal particle acceleration, and is thus responsible for many of the most spectacular and violent phenomena in space and laboratory plasmas, such as solar flares, magnetic substorms in the Earth’s magnetosphere, and sawtooth crashes in tokamaks. It is also conjectured to play a key role in magnetized turbulence and large-scale dynamos. Observations of reconnection are often characterized by the presence of two very distinct timescales: a slow stage, during which magnetic energy gradually accumulates, and an explosive stage, during which that energy is released and converted. One of the outstanding problems in reconnection research is explaining these timescales. Another critical issue is the detailed understanding of the energy conversion mechanisms.
Most of the theoretical and numerical models of magnetic reconnection are characterized by stationary laminar configurations. However, this is valid only for relatively small systems with only a modest scale separation between the global system size and the relevant microphysical scales. In contrast, many space- and astrophysical reconnecting systems are tremendously large compared with their corresponding microphysical scales, and so the scalings obtained in small-system studies cannot be reliably extended to these systems. In fact, it was recently realised (Loureiro, Schekochihin & Cowley, Phys. Plasmas 2007; Samtaney et al., Phys. Rev. Lett. 2009) that the classical steady-state reconnection solutions are themselves unstable for large enough systems. The reconnection process in such systems is then inherently non-steady and very dynamic, characterized by the intermittent formation and ejection of secondary magnetic islands, or plasmoids. The detailed investigation of magnetic reconnection in these complex plasmoid-dominated regimes is the aim of this proposal. It is conjectured not only that plasmoids strongly affect the reconnection rate and the efficiency of the energy conversion mechanisms, but also that the transition from slow to fast reconnection may be linked to, and enabled by, plasmoid formation.
Resource awarded: 20 000 000 core-hours
A dislocation dynamics study of dislocation cell formation and interaction between a low angle grain boundary and an in-coming dislocation
Project leader: Dierk Raabe, Max-Planck-Institut für Eisenforschung, Düsseldorf, Germany
Collaborators: Franz Roters, Max-Planck-Institut für Eisenforschung, Düsseldorf, Germany / Bing Liu, Max-Planck-Institut für Eisenforschung, Düsseldorf, Germany
Discrete Dislocation Dynamics (DDD) models explicitly simulate the motion, multiplication and interaction of dislocation lines, the carriers of plasticity in crystalline materials, in response to an applied load. This project aims to study the dislocation physics during plastic deformation of metals, focusing on the microstructure evolution (cell structure formation) and the interaction between a low angle grain boundary (LAGB) and an in-coming dislocation.
To simulate the dislocation cell structure formation, both the sample volume and the amount of plastic deformation have to exceed those of the most computationally expensive simulations to date (volume: 5 µm on a side; plastic deformation: 1.7%), run on the Thunder and Blue Gene/L computers at Lawrence Livermore National Laboratory.
When the wall dislocations and the dislocation loop have the same Burgers vector but lie on different glide planes (collinear relation), they form a junction with zero Burgers vector (partial annihilation). The collinear dislocation interaction has been reported to be stronger than all other types of dislocation interaction (self interaction, coplanar interaction, and interactions leading to non-zero junctions). Our simulations of the interaction between a low angle grain boundary (dislocation wall) and dislocation loops show that partial annihilation of wall dislocations and in-coming dislocation loops makes loop transmission easy and leads to strong interactions at the dislocation wall, whereas junction formation pins the dislocation loop at the wall, with strong multiplication events occurring outside the wall region. Both dislocation density and plastic strain increase faster in the junction case than in the collinear case. It is important to further study how the interaction depends on the spacing of the wall dislocations, or in other words on the angle of the LAGB. Currently the dislocation spacing is 1000b (b is the magnitude of the Burgers vector); using a smaller dislocation spacing (100b) would require the number of degrees of freedom to increase 100-fold (the number of dislocations x10, and the number of segments per dislocation x10).
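The 100-fold growth in degrees of freedom quoted above follows from the refinement factor entering twice (more dislocations in the wall, and more segments per dislocation). A one-line sanity check of that arithmetic:

```python
def dof_scale_factor(spacing_old_b, spacing_new_b):
    """Reducing the wall-dislocation spacing by a factor f multiplies both
    the number of dislocations and the number of segments per dislocation
    by ~f, so the degrees of freedom grow as f squared."""
    f = spacing_old_b / spacing_new_b
    return f ** 2

# Going from 1000b to 100b spacing: f = 10, so DOF grow 100-fold.
factor = dof_scale_factor(1000, 100)
```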
Additionally, we would like to study the (average) migration velocity of a clean and solitary LAGB in contrast to the (messy) case when concurrent activity of numerous dislocation loops destroys the neat boundary structure. Both cases are investigated under the same applied stress. Experimental observations suggest a massive reduction in the migration rate in the latter case. Likely, the generation of low-mobility junctions slows down a disrupted boundary. The messy case (dislocation wall with massively-tangled dense dislocation network) needs the computational capability of the Blue Gene/P.
Resource awarded: 15 600 000 core-hours
Type Ia supernovae from Chandrasekhar-mass white dwarf explosions
Project leader: Friedrich Röpke, Max-Planck-Gesellschaft, Garching, Germany
Collaborators: Markus Kromer, Max-Planck-Gesellschaft, Garching, Germany / Stuart Sim, Max-Planck-Gesellschaft, Garching, Germany / Wolfgang Hillebrandt, Max-Planck-Gesellschaft, Garching, Germany
Type Ia supernovae (SNe Ia) are among the brightest explosions in the Universe. Moreover, they are quite uniform in their properties, so cosmic distances can be inferred from their apparent brightnesses. As this allows one to determine the geometry of the Universe, SNe Ia are one of the most important tools in observational cosmology. Distance determinations to SNe Ia showed that the Universe is expanding at an accelerated rate. The reason for this effect is unclear and has been parametrized as “Dark Energy”, forming the main constituent of the Universe today. SN Ia distance determination can contribute to a better understanding of this mysterious form of energy. The necessary distance measurements, however, require high precision, and great observational effort is being spent in this field. It is therefore timely to match these efforts with a sound theoretical understanding of the supernova explosions.
Shedding light on the physics of the explosion mechanism is the goal of the project. Over the past years, detailed astronomical observations have revealed that SNe Ia are not as uniform as assumed previously. It seems that different sub-classes exist among them. This not only requires a detailed understanding in order to improve the accuracy of distance determination, but may also be a key to understanding the explosion process in general.
The leading scenario of SNe Ia is that a white dwarf star consisting of carbon and oxygen undergoes a thermonuclear explosion when it reaches the limit of its stability — the Chandrasekhar mass, about 1.4 solar masses. This explosion turns the material of the white dwarf star into heavier elements, predominantly nickel-56, which by radioactive decay powers the bright optical display we observe. The details of the explosion physics and the formation of the astronomical observables, however, are complex and can only be studied in sophisticated numerical simulations. The thermonuclear burning ignites near the center of the star and propagates outward as a thin reaction front. This front propagates initially with subsonic velocities (then called a deflagration) but may eventually turn into a supersonic detonation. Such delayed detonations have been shown to reproduce SNe Ia well, but it is not clear whether the transition actually occurs in nature. If it does, one may expect that some objects fail to trigger a detonation and the explosion proceeds entirely in the subsonic deflagration mode. This is expected to result in specific observational features, and indeed a sub-class of Type Ia supernovae has been suggested to result from this scenario. However, a proof of this hypothesis is still missing.
With detailed three-dimensional hydrodynamic simulations and subsequent radiative transfer simulations we aim at answering this question. Our models will help to better understand a particular sub-class of Type Ia supernovae and, in addition, provide insights into the general physical mechanism of these astronomical events.
Resource awarded: 23 600 000 core-hours
QCD Simulations for Flavor Physics in the Standard Model and Beyond
Project leader: Vittorio Lubicz, INFN, Rome, Italy
Collaborators: Guido Martinelli, University of Rome “La Sapienza”, Rome, Italy / Giancarlo Rossi, University of Rome “Tor Vergata”, Rome, Italy / Roberto Frezzotti, University of Rome “Tor Vergata”, Rome, Italy / Cecilia Tarantino, University of Rome III, Rome, Italy / Federico Mescia, Universitat de Barcelona, Barcelona, Spain / Benoit Blossier, Universite de Paris XI, Orsay, France / Vicent Gimenez Gomez, Universitat de Valencia, Burjassot, Spain / Olivier Pene, Universite de Paris XI, Orsay, France / Philippe Boucaud, Universite de Paris XI, Orsay, France / Petros Dimopoulos, University of Rome “La Sapienza”, Rome, Italy / Karl Jansen, DESY, Zeuthen, Germany / Carsten Urbach, Helmholtz-Institut, Bonn, Germany / Federico Farchioni, Universitaet Muenster, Muenster, Germany / Marc Wagner, Humboldt Universitaet zu Berlin, Berlin, Germany / Elisabetta Pallante, University of Groningen, Groningen, The Netherlands / Jaume Carbonell, UJF/CNRS/IN2P3, Grenoble, France / Constantia Alexandrou, University of Cyprus, Nicosia, Cyprus / Christopher Michael, University of Liverpool, Liverpool, UK
The past decade has seen remarkable progress in the study of flavor physics and CP violation. The B-factories, together with the Tevatron, have collected and analyzed an impressive amount of experimental data, which led to the confirmation of the Cabibbo-Kobayashi-Maskawa (CKM) mechanism for flavor and CP violation. The bulk of CP violation in the K and B sectors can be correctly accounted for within the Standard Model (SM), with possible new sources of flavor and CP violation confined to a 20-30% correction at the amplitude level, or even more if New Physics (NP) phases are aligned with those of the SM. Very interesting new perspectives will be offered by the forthcoming LHCb experiment at CERN, which may become the primary source of indirect NP signals, and by the advent of Super-B factories, which are expected to be a gold mine for the determination of NP flavor couplings.
The extraction from experiments of useful phenomenological information on the SM and/or NP fundamental parameters requires in several cases an accurate knowledge of the relevant hadronic matrix elements of the effective weak Hamiltonian. In this respect Lattice QCD (LQCD) represents an ideal framework where such calculations can be performed with systematic errors well under control.
In our project we want to make use of the gauge configurations with four flavors of dynamical quarks already produced, or currently in production, by the European Twisted-Mass Collaboration (ETMC). Thanks to such configurations, all quenching effects for the light, strange and charm sectors will be removed. This will allow us to reach unprecedented accuracy in LQCD calculations.
Our goals are the precise determinations of:
- the light, strange, charm and bottom quark masses;
- the decay constants of K-, D- and B-mesons;
- the vector and scalar form factors of the semileptonic decays of K-, D- and B-mesons relevant for the determination of various entries of the CKM matrix;
- the vector and axial form factors of the semileptonic decays of hyperons relevant for the extraction of the Cabibbo angle;
- the bag parameters of the K–K̄, D–D̄ and B–B̄ mixings relevant for the study of CP violation in the SM as well as in NP scenarios;
- the hadronic matrix elements of the electromagnetic and chromomagnetic operators, which are particularly important for supersymmetric extensions of the SM;
- the electric dipole moment (EDM) of the neutron, which is the key hadronic quantity for the strong CP problem as well as for CP violation mechanisms in various NP scenarios;
- the sigma-term of the nucleon relevant for the spin-independent neutralino-nucleon scattering cross section;
- the matrix elements of the effective Hamiltonian describing at low energies the proton decay in Grand Unified Theories.
For the calculation of the above observables we will adopt new powerful algorithms, such as the stochastic approach for the evaluation of all-to-all propagators and the implementation of non-periodic boundary conditions to inject arbitrary values of momenta on the lattice.
We plan to use the non-perturbative RI-MOM determinations of all the relevant renormalization constants, which require the production of dedicated gauge ensembles with four light mass-degenerate quarks. Such a renormalization study is already in progress by the ETMC.
In the cases of the neutron EDM and of the nucleon sigma-term the evaluation of (fermionic) disconnected diagrams is required. Such diagrams are well known to be very noisy on the lattice. We plan to use suitable stochastic procedures, which however require high-performance computing machines due to the large number of stochastic sources used for the all-to-all propagator inversions.
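The stochastic procedure for disconnected diagrams can be illustrated with a Hutchinson-style noise estimator, the standard technique behind such calculations. This is a toy sketch only: the small matrix below stands in for the (enormous) lattice Dirac operator, and the parameters are arbitrary.

```python
import numpy as np

def stochastic_trace_inverse(solve, dim, n_sources, rng):
    """Noise estimator for Tr(M^-1): average eta^T M^-1 eta over random
    Z2 sources eta (entries +/-1). Each term costs one propagator solve,
    which is why many sources demand large computing resources."""
    total = 0.0
    for _ in range(n_sources):
        eta = rng.choice([-1.0, 1.0], size=dim)
        total += eta @ solve(eta)   # solve(b) returns M^-1 b
    return total / n_sources

# Toy stand-in for the Dirac operator: a small well-conditioned SPD matrix.
rng = np.random.default_rng(0)
dim = 50
A = 0.05 * rng.normal(size=(dim, dim))
M = 2.0 * np.eye(dim) + A + A.T
estimate = stochastic_trace_inverse(lambda b: np.linalg.solve(M, b), dim, 200, rng)
exact = float(np.trace(np.linalg.inv(M)))
```

The statistical error of the estimator shrinks as 1/sqrt(n_sources), so the noisy disconnected diagrams mentioned above require both many sources and cheap solves per source.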
Resource awarded: 35 000 000 core-hours