Understanding baryon structure with QCD simulations with light, strange and charm dynamical quark flavours
Project leader: Constantia Alexandrou, University of Cyprus, Cyprus
Collaborators: Giancarlo Rossi, Roberto Frezzotti, Universita di Roma Tor Vergata, Italy / Karl Jansen, Vincent Drach, DESY, Germany / Jaume Carbonell, Laboratoire de Physique Subatomique et Cosmologie, France / Elisabetta Pallante, University of Groningen, The Netherlands / Chris Michael, University of Liverpool, UK / Carsten Urbach, University of Bonn, Germany / Giannis Koutsou, The Cyprus Institute, Cyprus / Pierre Guichon, Universite de Paris XI, France / Marc Wagner, Humboldt Universitaet zu Berlin, Germany
The ab initio solution of Quantum Chromodynamics (QCD), the theory that describes the strong force binding quarks into hadrons and nucleons into nuclei, requires large computational resources. Understanding hadron structure from first principles constitutes a milestone of hadron physics, and in this project we will calculate key observables that reveal this structure. There is a rich experimental programme at major accelerator facilities world-wide that aims at measuring such key observables, and the current project therefore complements experiment in the quest to understand the properties of the bulk of visible matter in our universe.
In particular, our ability to predict new phenomena beyond the standard model (SM) of elementary particle physics, now under search at the LHC, relies in part on the reliable calculation of known experimental quantities. This forms the basis of the current project. Specifically, we aim to calculate well-measured quantities, including the nucleon axial charge and moments of generalized parton distributions, close to the physical point and in the continuum limit, providing a comparison to experimental results. Resolving the observed discrepancy between lattice calculations and the measured values of these key quantities will open the way to reliable calculations of other, difficult-to-measure quantities, such as the axial charges of hyperons and charmed baryons, which can be obtained in an optimal way within the formulation of the current proposal.
Baryon structure calculations in this project will be done using dynamical twisted mass configurations with two light degenerate quarks and a strange and a charm quark, the latter two fixed to their physical values. These gauge configurations, which provide the most complete description of the QCD vacuum to date, will be made available in time to be utilized as the project progresses. The generation of these configurations is being carried out using computer resources secured beyond the current project. This project therefore strongly leverages the national resources used to generate these configurations: it provides the computer time required for their analysis and for the extraction of important physics results, enabling the optimal utilization of these gauge configurations. Accomplishing our goal of precise calculations at the physical point requires large computer resources, as foreseen by PRACE.
The project is embedded in the European Twisted Mass Collaboration which consists of leading scientists across Europe with expertise in all aspects of lattice QCD calculations.
Resource awarded: 38 028 000 core-hours at JUGENE, GCS@Juelich
Pushing the Strong Interaction past its Breaking Point: QCD in the quark-gluon plasma phase
Project leader: Chris Allton, Swansea University, UK
Collaborators: Gert Aarts, Simon Hands, Swansea University, UK / Michael Peardon, Sinead Ryan, Trinity College, Dublin, Ireland / Jonivar Skullerud, National University of Ireland, Ireland
There are four fundamental forces that describe all known interactions in the universe: gravity; electromagnetism; the weak interaction (which powers the sun and describes most radioactivity); and, finally, the strong interaction – which is the topic of this research. The strong interaction binds quarks together in triplets into protons and neutrons, which in turn form the nuclei of atoms and therefore make up more than 99% of all the known matter in the universe. If there were no strong interaction, these quarks would fly apart and there would be no nuclei, and therefore no atoms, molecules, DNA, humans, planets, etc.
Although the strong interaction is normally an incredibly strongly binding force (the force between quarks inside protons is the weight of three elephants!), in extreme conditions it undergoes a substantial change in character. Instead of holding quarks together, it becomes considerably weaker, and quarks can fly apart and become “free”. This new phase of matter is called the “quark-gluon” plasma. This occurs at extreme temperatures: hotter than 10 billion Celsius. These conditions obviously do not normally occur – even the core of the sun is one thousand times cooler! However, these temperatures do occur naturally in at least two places: inside neutron stars (which are the small, but very dense remnants of supernovae) and just after the Big Bang (when the universe was a much hotter, smaller and denser place than it is today).
As well as in these situations in nature, physicists can re-create a mini-version of the quark-gluon plasma by colliding large nuclei (like gold) together in a particle accelerator at virtually the speed of light. This experiment is currently being performed at the Large Hadron Collider in CERN. Because each nucleus is incredibly small (100 billion of them side-by-side would span a distance of 1mm) the region of quark-gluon plasma created is correspondingly small. The plasma “fireball” also expands and cools incredibly rapidly, so it quickly returns to the normal state of matter where quarks are tightly bound. For these reasons, it is incredibly difficult to get any information about the plasma phase of matter.
To understand the processes occurring inside the fireball, physicists need to know its properties such as viscosity, pressure and energy density. It is also important to know at which temperature the quarks inside protons and other particles become unbound and free. With this information, it is possible to calculate how fast the fireball expands and cools, and what mixture of particles will fly out of the fireball and be observed by detectors in the experiment.
This research project will use supercomputers to simulate the strong interaction in the quark-gluon phase. We will find the temperature that quarks become unbound, and calculate some of the fundamental physical properties of the plasma such as its conductivity and viscosity. These quantities can then be used as inputs into the theoretical models which will enable us to understand the quark-gluon plasma, i.e. the strong interaction past its breaking point.
Resource awarded: 22 740 000 core-hours at JUGENE, GCS@Juelich
Physics of the Solar Chromosphere
Project leader: Mats Carlsson, University of Oslo, Norway
Collaborators: Boris Gudiksen, Viggo Hansteen, University of Oslo, Norway / Jorrit Leenaarts, Utrecht University, The Netherlands
This project aims at a breakthrough in our understanding of the solar chromosphere by developing sophisticated radiation-magnetohydrodynamic simulations.
The enigmatic chromosphere is the transition between the solar surface and the eruptive outer solar atmosphere. The chromosphere harbours and constrains the mass and energy loading processes that define the heating of the corona, the acceleration and the composition of the solar wind, and the energetics and triggering of solar outbursts (filament eruptions, flares, coronal mass ejections) that govern near-Earth space weather and affect mankind’s technological environment.
Small-scale MHD processes play a pivotal role in defining the intricate fine structure and enormous dynamics of the chromosphere, controlling a reservoir of mass and energy much in excess of what is sent up into the corona. This project targets the intrinsic physics of the chromosphere in order to understand its mass and energy budgets and transfer mechanisms. Elucidating these is a principal quest of solar physics, a necessary step towards better space-weather prediction, and of interest to general astrophysics using the Sun as a close-up Rosetta-Stone star and to plasma physics using the Sun and heliosphere as a nearby laboratory.
Our group is world-leading in modelling the solar atmosphere as one system: from the convection zone, where the motions feed energy into the magnetic field, all the way to the corona, where the release of magnetic energy is more or less violent. The computational challenge lies both in simplifying the complex physics without losing the main properties and in treating a large enough volume to encompass the large chromospheric structures with enough resolution to capture the dynamics of the system. We have developed a massively parallel code, called Bifrost, to tackle this challenge. The resulting simulations are very time-consuming but crucial for understanding the magnetic outer atmosphere of the Sun.
Resource awarded: 21 760 000 core-hours at HERMIT, GCS@HLRS
Protein effects on the structural and optical properties of biological chromophores: Quantum Monte Carlo / Molecular Mechanics calculations on Rhodopsin and Light Harvesting Complexes
Project leader: Leonardo Guidoni, Università degli Studi de L’Aquila, Italy
Collaborators: Guglielmo Mazzola, Ye Luo, Sandro Sorella, SISSA, Italy / Andrea Zen, Daniele Varsano, Daniele Bovi, Gaia Di Paolo, La Sapienza, Università di Roma, Italy / Emanuele Coccia, Matteo Barborini, Università degli Studi de L’Aquila, Italy
Photoactive proteins regulate a wide range of biological processes, from light detection in the eyes to energy conversion in photosynthesis. The mechanistic understanding of these light-driven processes is fundamental for medical and energy research, and they have been investigated experimentally by several groups using optical and vibrational spectroscopy techniques. A valuable aid to rationalization is the comparison between these data and high-level quantum mechanical calculations of the same properties based on crystallographic information. The chromophores (or pigments) involved are usually large conjugated organic moieties of about 50-100 atoms absorbing in the visible range. The geometrical features of these molecules, such as the difference between single and double bonds (the so-called bond length alternation), strongly affect their optical properties. Density Functional Theory methods are often not accurate enough to properly evaluate the structural and optical properties of these conjugated systems, so correlated quantum chemistry techniques are used instead, although they are limited to small systems. Quantum Monte Carlo (QMC) methods are a promising technique for the study of the electronic structure of correlated molecular systems. QMC algorithms are highly parallel in nature and, thanks also to their relatively small memory requirements even for large systems, they show good performance and scalability on highly parallel computers. Furthermore, the good scaling of the algorithms with system size (N^3-N^4) makes QMC methods competitive with other correlated computational chemistry tools for large systems. Recent implementations in the TurboRVB code calculate ionic forces in an efficient and scalable way, providing us with the possibility to perform full geometry optimization of molecules at the Variational Monte Carlo level.
Optimization within an external classical environment is also possible using a Quantum Monte Carlo / Molecular Mechanics (QMC/MM) scheme.
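The Variational Monte Carlo step at the heart of such calculations can be made concrete with a toy example. The sketch below (an illustrative 1D harmonic oscillator, not the TurboRVB implementation) shows the essential ingredients: a Metropolis walk samples |psi|^2 for a trial wavefunction psi(x) = exp(-a x^2), and the energy is estimated as the average of the local energy.

```python
import math
import random

# Toy Variational Monte Carlo for the 1D harmonic oscillator,
# H = -(1/2) d^2/dx^2 + (1/2) x^2, with trial wavefunction
# psi(x) = exp(-a x^2). The exact ground state has a = 0.5, E = 0.5.

def local_energy(x, a):
    # E_L(x) = (H psi)(x) / psi(x) = a + x^2 (1/2 - 2 a^2)
    return a + x * x * (0.5 - 2.0 * a * a)

def vmc_energy(a, n_steps=200_000, step=1.0, seed=1):
    """Metropolis sampling of |psi|^2, returning the average local energy."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Accept with probability |psi(x_new)/psi(x)|^2 = exp(-2a (x_new^2 - x^2))
        if rng.random() < math.exp(-2.0 * a * (x_new ** 2 - x ** 2)):
            x = x_new
        total += local_energy(x, a)
    return total / n_steps

print(vmc_energy(0.5))  # optimal a: E_L is constant, so the estimate is exactly 0.5
print(vmc_energy(0.4))  # sub-optimal a: the variational energy lies above 0.5
```

At the optimal variational parameter the local energy is constant, so the statistical noise vanishes; away from it, the estimate rises above the exact ground-state energy, which is what geometry and wavefunction optimization exploit.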
In the present project we plan to perform full Variational Quantum Monte Carlo geometry optimization of two important biological chromophores within their protein environment by QMC/MM: the retinal protonated Schiff base in rhodopsin and the peridinin carotenoid in the Peridinin-Chlorophyll Protein, a Light-Harvesting Complex. The latter complexes work as light-collecting funnels that improve the efficiency of photosynthetic organisms. Peridinin carotenoids are present in this antenna system in four different non-equivalent conformations, which are thought to broaden the spectral range and to increase the efficiency and the absorption cross section. On the geometry-optimised structures we will calculate the absorption energies using Many-Body Perturbation Theory. These calculations will allow us to determine the geometrical features of a carotenoid (about 100 atoms) in its natural environment, for the first time using fully ab-initio correlated methods. The geometries and the optical properties calculated for the different non-equivalent conformers of peridinin found in PCP crystal structures will be compared with the aim of dissecting the geometrical and field effects. Our results will contribute to clarifying which strategies evolution adopted to differentiate the spectral properties of each pigment in order to extend the absorption cross section of antenna systems.
Resource awarded: 180 000 core-hours at CURIE Fat Node partition, GENCI@CEA and 44 564 480 core-hours at JUGENE, GCS@Juelich
Validating QRPA microscopic calculations of radiative strengths for astrophysics
Project leader: Stéphane Hilaire, CEA/DAM/DIF, France
Collaborators: Sophie Péru, Marc Dupuis, CEA, France; Arjan Koning, NRG Petten, The Netherlands; Stéphane Goriely, Marco Martini, Université Libre de Bruxelles, Belgium
One of the major issues in modern astrophysics concerns the analysis of the present composition of the Universe, the explanation of the origin of the different nuclei observed in nature, and the understanding of the processes able to synthesize them. In the description of all the nucleosynthesis processes, nuclear physics inputs remain a key ingredient. Indeed, thousands of nuclei are involved for which all the nuclear structure and nuclear reaction properties need to be determined. Despite major experimental efforts, a strong theoretical activity is therefore necessary to provide the required data for nuclei which will not be produced and studied in the laboratory for a long time under the conditions encountered in a stellar environment. For this purpose, many nuclear reaction models have to be used, and since a large amount of data needs to be extrapolated far from experimentally known regions, the microscopic nature of these models is essential. A physically sound model that is as close as possible to a microscopic description of the nuclear systems is indeed expected to provide the most reliable extrapolations. Much progress has been made recently in the development of microscopic models to provide nuclear structure properties (masses, deformations, nucleon densities) within the mean-field approach [1,2], and to determine radiative neutron, proton and alpha-particle capture reaction rates [3,4] as well as the complicated and still very unreliable fission rates. These radiative capture rates depend strongly on the electromagnetic interaction (the photon emission probability), so that, to put their description on safer ground, a great effort must be made to improve the reliability of radiative strength predictions. Photon emission is traditionally estimated within the Lorentzian representation of the giant dipole resonance (GDR), at least for medium- and heavy-mass nuclei.
While experimental photoabsorption data confirm such a simple semi-classical shape at energies around the resonance energy, several corrections have been introduced to modify the low-energy behavior of the strength and significantly improve the predictions of experimental radiation widths and gamma-ray spectra. Microscopic alternatives are also available, based on random phase approximation (RPA) or quasi-particle RPA (QRPA) calculations. Such an approach has been developed using the so-called Skyrme interaction, but only in the spherical approximation [6,7]. State-of-the-art models can nowadays include improved effective nucleon interactions [8,9], such as the finite-range Gogny interaction, without having to resort to the spherical approximation, but at the price of time-consuming calculations [8,9,10], which makes it difficult to perform systematic studies of this type.
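The Lorentzian (GDR) representation of the radiative strength is simple enough to write down explicitly. As an illustration (the GDR parameters below are rough, 208Pb-like values chosen only for the example, not inputs of this project), the standard Lorentzian E1 strength function is:

```python
def slo_strength(e_gamma, e0, gamma0, sigma0):
    """Standard Lorentzian (SLO) E1 gamma-ray strength function, in MeV^-3.

    e_gamma : photon energy (MeV)
    e0      : GDR centroid energy (MeV)
    gamma0  : GDR width (MeV)
    sigma0  : GDR peak cross section (mb)
    """
    # 8.674e-8 mb^-1 MeV^-2 is the usual prefactor 1/(3 pi^2 hbar^2 c^2)
    num = sigma0 * gamma0 ** 2 * e_gamma
    den = (e_gamma ** 2 - e0 ** 2) ** 2 + (e_gamma * gamma0) ** 2
    return 8.674e-8 * num / den

# Rough, illustrative GDR parameters for a heavy nucleus:
E0, G0, S0 = 13.4, 4.0, 640.0
for e in (5.0, E0, 20.0):
    print(f"E_gamma = {e:5.1f} MeV  ->  f_E1 = {slo_strength(e, E0, G0, S0):.3e} MeV^-3")
```

It is precisely the low-energy tail of this semi-classical shape that the corrections and the microscopic QRPA approaches mentioned above aim to improve.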
In order to check the accuracy of Gogny-QRPA predictions, a systematic comparison with experimental data is required. We therefore propose here to perform this first required step, before constructing a complete set of radiative strength functions with all the major multipolarities derived from QRPA calculations based on the Gogny effective nucleon interaction.
Resource awarded: 11 800 000 core-hours at CURIE Fat Node Partition, GENCI@CEA and 13 200 000 core-hours at CURIE Thin Node Partition, GENCI@CEA
Modeling gravitational wave signals from black hole binaries
Project leader: Sascha Husa, Universitat de les Illes Balears, Spain
Collaborators: Sara Gil Casanova, Jordi Burguet Castell, Alex Vaño Viñuales, Denis Pollney, Milton Ruiz, Universitat de les Illes Balears, Spain / Michael Puerrer, University of Vienna, Austria / Mark Hannam, Patricia Schmidt, John Veitch, Ioannis Kamaretsos, B. S. Sathyaprakash, Stephen Fairhurst, Cardiff University, UK / David Hilditch, Sebastiano Bernuzzi, Marcus Thierfelder, Bernd Bruegmann, Nathan Johnson-McDaniel, Friedrich-Schiller-Universität Jena, Germany / Parameswaran Ajith, Christian Reisswig, California Institute of Technology, USA
One century after Einstein's theory of general relativity revealed space and time as dynamical entities, research in gravitation is preparing for a dramatically rising tide of observational data. The first detection of gravitational waves (GW) will push open a new window on the universe, comparable to the revolution brought about by the development of radio astronomy. Prime candidates for first detection are catastrophic events involving compact relativistic objects and their strongly nonlinear gravitational fields, in particular the coalescence of compact binaries of black holes (BH).
In this project, we model these events and their gravitational wave signatures by solving the nonlinear Einstein equations. Our results will help to identify the first such signals to be observed by advanced gravitational wave detectors, and contribute to answering important open questions in astrophysics and fundamental physics, where BHs have taken center stage.
The challenge of meeting the tremendous sensitivity requirements of gravitational wave detectors is paralleled by a computational modeling challenge: the detection, identification, and accurate determination of the physical parameters of sources rely on the availability of waveform template banks, which are used to filter the detector signals. For some sources, such as the slow inspiral of widely separated BHs, good analytical approximations for the gravitational waveforms are provided by perturbative post-Newtonian expansion techniques. For the last orbits and merger, where the fields are particularly strong and where one has the best chances of discovering entirely new physics, the full Einstein equations have to be solved numerically. This is what we do.
Over the last few years, we have developed techniques that make large-scale parameter studies of black hole binaries possible, and that allow us to construct from such simulations analytical template banks describing the inspiral, merger and ringdown of black hole binaries. This project will make it possible to combine all these techniques and to leap-frog black hole parameter studies in time to establish data analysis strategies for the advanced GW detectors that will come online in 2014. With the aim of establishing a model for GW searches and source identification (parameter estimation) that combines maximal effect with greatest simplicity, we have previously pioneered the construction of analytical template banks which "interpolate" numerical simulations and which are already used for analysing the data of the LIGO and Virgo detectors. So far, our template banks span only the 3-dimensional parameter space of non-precessing spinning binaries, the accuracy of our simulations and parameter-space models would still limit the identification of source parameters, and our data set becomes very sparse as the mass ratio increases and simulations become more expensive. Our studies suggest, however, that our coverage of parameter space can be extended significantly by adding precessing binaries where only the heavier black hole is spinning. Accordingly, the core sets of simulations will be of non-precessing binaries parameterized by a suitably chosen total spin parameter for mass ratios of 1:5 and larger (completing our previous work with more challenging simulations), and of precessing binaries where only one BH is spinning.
Resource awarded: 2 000 000 core-hours at CURIE Fat Node Partition, GENCI@CEA and 14 700 000 core-hours at HERMIT, GCS@HLRS
Singlet physics – the missing link to precision lattice QCD
Project leader: Karl Jansen, NIC, DESY Zeuthen, Germany
Collaborators: Constantia Alexandrou, Martha Constantinou, University of Cyprus, Cyprus / Jaume Carbonell, Laboratoire de Physique Subatomique et Cosmologie, France / Petros Dimopoulos, Roberto Frezzotti, Giancarlo Rossi, Universita di Roma Tor Vergata and INFN, Italy / Simon Dinter, Vincent Drach, Andreas Nube, NIC, DESY Zeuthen, Germany / Gregorio Herdoiza, Universidad Autonoma de Madrid, Spain / Giannis Koutsou, The Cyprus Institute, Cyprus / Christopher Michael, University of Liverpool, UK / Konstantin Ottnad, Carsten Urbach, Universitat Bonn, Germany / Marcus Petschlies, Marc Wagner, Humboldt Universitat zu Berlin, Germany
Lattice QCD simulations are today reaching physical values of the pion mass, small lattice spacings and large volumes. In addition, modern supercomputer architectures make it possible to obtain a large number of gauge field configurations. Thus, many physical results are nowadays computed with very good statistical accuracy and well-controlled systematic errors. This clearly opens a new era of lattice QCD simulations, namely precision physics for many physical quantities. Such a perspective is extremely exciting, since it will not only lead to a better understanding of Quantum Chromodynamics itself as our theory of the strong interactions but will also open the way to discovering new physics beyond the standard model.
However, there is one aspect of lattice QCD simulations which is often not addressed in such calculations and which is nevertheless indispensable for truly claiming full control of the errors of lattice results: the singlet, or disconnected, contributions which appear in a number of important physical quantities. Of course, there is a good reason why such contributions are often not computed at all, namely the fact that disconnected graphs are very noisy and extremely hard to calculate. With this project we propose to tackle the problem of singlet contributions in a systematic and comprehensive way. In particular, we want to compute the disconnected contributions for a number of important physical observables, as listed below. In the past, we have already computed examples of quantities where disconnected graphs play an important role, and we could demonstrate that it is possible to calculate these quantities with sufficient precision and to determine them on a quantitative level. The key reason we believe we can address such disconnected graphs in a controlled and quantitative way is the particular formulation of lattice fermions we are using, namely maximally twisted Wilson quarks, which allow for special techniques in the computation of disconnected graphs and lead to a significantly reduced error.
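The stochastic estimation underlying disconnected-diagram calculations can be sketched in a few lines. Disconnected loops reduce to traces involving the inverse Dirac operator, which in practice are estimated with random noise sources. The toy below (a generic Z2-noise trace estimator applied to a small diagonal matrix standing in for the Dirac operator, not the collaboration's production code) shows the idea:

```python
import random

def z2_trace_estimate(apply_inv, n, n_sources=100, seed=7):
    """Estimate Tr(A^-1) from Z2 noise: E[eta^T A^-1 eta] = Tr(A^-1),
    since E[eta_i eta_j] = delta_ij for eta_i drawn from {-1, +1}."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sources):
        eta = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        x = apply_inv(eta)  # in lattice QCD this is a Dirac-operator solve
        total += sum(e * xi for e, xi in zip(eta, x))
    return total / n_sources

# Toy operator: a diagonal A, whose inverse acts elementwise.
d = [1.0, 2.0, 4.0, 5.0]
apply_inv = lambda v: [vi / di for vi, di in zip(v, d)]

exact = sum(1.0 / di for di in d)  # Tr(A^-1) = 1.95
print(exact, z2_trace_estimate(apply_inv, len(d)))
# For a diagonal A every noise sample reproduces the exact trace (eta_i^2 = 1);
# with a real Dirac operator the off-diagonal elements of A^-1 generate the
# stochastic noise that makes disconnected graphs so expensive.
```

Variance-reduction techniques, such as those enabled by the twisted-mass formulation, act precisely on this stochastic noise.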
Nevertheless, even with such improved techniques, which are described in detail in the proposal, substantial statistics are still needed to obtain results with the desired accuracy. Carrying through the project therefore clearly requires a Tier-0 centre. We believe, however, that having results for the disconnected graphs is absolutely mandatory if lattice QCD is going to provide final and precise answers for many quantities of interest.
Resource awarded: 19 000 000 core-hours at JUGENE, GCS@Juelich
Thermal Dilepton Rates and Electrical Conductivity in the Quark Gluon plasma
Project leader: Olaf Kaczmarek, University of Bielefeld, Germany
Collaborators: Frithjof Karsch, Anthony Francis, Marcel Müller, Edwin Laermann, University of Bielefeld, Germany / Wolfgang Söldner, University of Regensburg, Germany / Heng Tong Ding, Brookhaven National Laboratory, USA / Hendrik van Hees, Justus-Liebig Universität Giessen, Germany / Ralf Rapp, Texas A&M University, USA
Hadronic correlation functions are important tools for the analysis of strongly interacting matter at high temperatures produced in current heavy ion experiments at RHIC and LHC and in the future at FAIR. Vector spectral functions are directly related to dilepton and photon rates measured in such experiments and transport coefficients derived from correlation functions are important ingredients for the study of the evolution of the produced medium.
The proposed Tier-0 resources will allow us to obtain reliable results for the vector correlation function, the vector spectral function and the electrical conductivity in the continuum limit.
By analyzing different temperatures relevant for heavy ion experiments we will obtain a profound knowledge of the relevant contributions to the spectrum of dileptons and photons.
Possible contributions from the rho-meson resonance will be analyzed in detail. This will shed light on the large enhancement of the dilepton yields in the low-invariant-mass region observed at RHIC and will allow us to estimate those contributions at the LHC.
Resource awarded: 33 167 520 core-hours at JUGENE, GCS@Juelich
Gyrokinetic Large Eddy Simulations
Project leader: Bernard Knaepen, Université Libre de Bruxelles, Belgium
Collaborators: Alejandro Banon, Université Libre de Bruxelles, Belgium
Understanding the transport mechanisms that take place in tokamaks remains one of the key challenges for nuclear fusion devices. In particular, the theoretical description of the anomalous transport due to the growth of instabilities and the appearance of microturbulence is far from fully satisfactory, even though fairly well-established kinetic equations have been derived to describe these phenomena. The emergence of massively parallel computers during the last two decades and the development of kinetic and, more specifically, gyrokinetic solvers have, however, opened the road to increasingly accurate simulations of charged particle trajectories in realistic conditions. Nevertheless, these simulations remain extremely expensive for several reasons. The velocity probability distribution is a quantity defined in a five-dimensional phase space (three dimensions for the spatial domain and two for the velocity variables, after the elimination of the rapid gyro-motion around the magnetic field line). Moreover, the resolution needed to capture all the physically relevant phenomena may be very high. Standard ion-scale simulations are typically performed with 128 x 64 x 16 grid points in the x-y-z space and 32 x 8 to 64 x 16 points (or more) in the velocity space. Also, differences in the time scales characterizing the various phenomena mean that converged simulations must use a very large number of time steps. This very high computational cost motivates the present project.
The main idea is to use a much lower resolution than the one imposed by the physics of the problem studied with the kinetic codes. This can be achieved by filtering out the smallest scales of motion and by modeling their effects on the larger scales. In most complex systems, such a scale separation is not justified directly by the physics of the problem, but is rather motivated by computational constraints. The main objective of this filtering strategy is then to reduce the complexity of the fields and probability distributions that have to be computed. However, this scale separation also has a theoretical motivation. The largest scales are usually very much dependent on the experimental conditions (boundary conditions, geometry, external constraints…). Their modeling is thus expected to be problem-dependent, which means that new models, or at least new model parameters, have to be chosen when a new situation is considered. On the contrary, small scales are supposed to have a more universal behavior. They are thus expected to be describable by models that could be "almost problem independent" and supported by theoretical approaches.
First developed in the framework of fluid turbulence, this scale-separation strategy is usually referred to as Large Eddy Simulation (LES). The main objective of the present project is to explore the possible advantages of transposing LES methods to kinetic theories. The most significant potential for savings definitely lies in filtering in the physical space r. The resulting gyrokinetic large eddy simulations (GLES) would of course be much less expensive than the original simulations, but will require the development and implementation of models taking into account the effect of the filtered scales.
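The filtering operation at the core of LES can be made concrete with a one-dimensional toy (an illustrative sketch, not the gyrokinetic implementation): a moving-average filter applied to a field containing one large-scale and one small-scale Fourier mode strongly damps the latter while barely touching the former.

```python
import math

def box_filter(u, width):
    """Filter a periodic 1D field with a moving average of odd width,
    suppressing scales smaller than about `width` grid points."""
    n, half = len(u), width // 2
    return [sum(u[(i + j) % n] for j in range(-half, half + 1)) / width
            for i in range(n)]

def mode_amplitude(u, k):
    """Amplitude of the sin(2 pi k i / n) mode of a periodic field."""
    n = len(u)
    return 2.0 / n * sum(ui * math.sin(2.0 * math.pi * k * i / n)
                         for i, ui in enumerate(u))

n = 64
# One large-scale (k = 1) and one small-scale (k = 16) mode:
u = [math.sin(2 * math.pi * i / n) + 0.5 * math.sin(2 * math.pi * 16 * i / n)
     for i in range(n)]
u_bar = box_filter(u, 5)

print(mode_amplitude(u, 1), mode_amplitude(u_bar, 1))    # large scale nearly untouched
print(mode_amplitude(u, 16), mode_amplitude(u_bar, 16))  # small scale strongly damped
```

The filtered field retains the resolvable large-scale dynamics; the subfilter model must then account for the damped small-scale content, which is exactly the role of the GLES models proposed here.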
Resource awarded: 4 500 000 core-hours at JUGENE, GCS@Juelich
Effects of irradiation on nanostructures from first principles simulations
Project leader: Arkady Krasheninnikov, University of Helsinki, Finland
Collaborators: Jussi Enkovaara, CSC, Finland / Jani Kotakoski, University of Helsinki, Finland / Ari Ojanperä, Aalto University, Finland
Recent experiments on ion and electron bombardment of nanostructures demonstrate that irradiation can have beneficial effects on such targets and that electron or ion beams can serve as tools to change the morphology and tailor the mechanical, electronic and even magnetic properties of various nanostructured materials. It is also evident from the data obtained so far that the conventional theory of defect production in bulk materials does not work at the nanoscale, or requires considerable modification. In this project we will use first-principles atomistic computer simulations to model the interaction of energetic ions and electrons with nanostructures. Such an approach assumes that the only information available on the target is the types of atoms it consists of; a quantum mechanical approach for the coupled electron-ion system is then used to calculate the material structure, properties and time evolution. Although several approximations are used and the many-body problem is considerably simplified, this approach makes it possible to simulate the behavior of the system with high accuracy (achieved at high computational cost, though) and without any material-specific adjustable parameters. In practice, by using time-dependent and static density functional theory as implemented in the GPAW code, we will simulate impacts of high-energy ions onto nanostructures with the aim of estimating the amount of projectile kinetic energy deposited into the target and the number of defects produced by the impact. With regard to the systems, we focus our research on very important nanostructures such as graphene and boron nitride sheets, which have similar atomic structure but different electronic properties (semi-metal and wide-gap semiconductor).
This should make it possible to understand from first principles the deposition of projectile kinetic energy into electronic degrees of freedom for systems with different electronic structures, and its conversion into defects. We will also simulate the interaction of energetic electrons with these nanostructures and calculate probabilities for defect production and for irradiation-induced transformations. Armed with the knowledge of these basic transformations, we will then model the evolution of graphene with grain boundaries under electron irradiation. We will also calculate the electronic structure of graphene sheets with grain boundaries and other defects. Based on the knowledge obtained for graphene and boron nitride sheets, we will extrapolate our findings to other nanosystems and assess irradiation as a means of tailoring the electronic properties of nanostructures.
Resource awarded: 10 000 000 core-hours at CURIE Thin Node partition, GENCI@CEA
On the stability of ordinary matter and related issues
Project leader: Laurent Lellouch, CNRS (Institut de Physique) and Univ. Aix-Marseille II, France
Collaborators: Alberto Ramos, Antonin Portelli, Julien Frison, CNRS (Institut de Physique) and Univ. Aix-Marseille II, France / Zoltan Fodor, Christian Hoelbling, Stefan Krieg, Kalman Szabo, Stephan Dürr, Bergische Universität Wuppertal, Germany
Ordinary matter is composed of electrons and of up, down and, through quantum fluctuations, strange quarks. The up and down quarks combine to make much more massive protons and neutrons, which in turn bind together to form the nuclei of all atoms. These nuclei carry positive electric charge and bind electrons, which are negatively charged, to form neutral atoms, the constituents of ordinary matter. The whole stability of this edifice relies on the fact that neutrons are slightly more massive than protons—by only 0.14 percent, in fact. If the reverse were true, protons would eventually decay radioactively into neutrons, and atoms would not form. This tiny mass difference is believed to be the result of two competing effects. On the one hand, the mass of the electrically charged proton is augmented, with respect to that of the neutral neutron, by the energy carried in the electric field surrounding it. On the other hand, the mass of the neutron is enhanced because the sum of the masses of its constituents (one up and two down quarks) is larger than it is for the proton (composed of one down and two up quarks).
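The 0.14 percent figure quoted above can be verified with a quick back-of-envelope calculation; the proton and neutron masses used below are standard PDG values, not numbers taken from this proposal.

```python
# Check of the "only 0.14 percent" proton-neutron mass difference,
# using standard PDG masses in MeV (quoted from memory, not from the text).
m_proton = 938.272   # MeV
m_neutron = 939.565  # MeV

delta = m_neutron - m_proton          # ≈ 1.293 MeV
relative = delta / m_proton * 100.0   # ≈ 0.138 %, i.e. "only 0.14 percent"
print(f"Δm = {delta:.3f} MeV, i.e. {relative:.2f}% of the proton mass")
```

This tiny difference is the net result of the two competing effects described above, each individually of comparable size.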
We propose to incorporate these two small but important effects into the theoretical description of the interactions of up, down and strange quarks. The difficulty with computing in this theory stems from the fact that quarks interact in a highly nonlinear fashion, so much so that they cannot be isolated and are confined within particles, such as the proton and neutron, known as hadrons. The only known way to account for these nonlinear interactions systematically is to solve numerically the very complex equations of the theory of the strong interaction, Quantum Chromodynamics (QCD), within a computational framework known as lattice QCD.
Even though electromagnetic effects and effects related to the difference in up and down quark masses are small, they are convoluted with the complex interactions of the quarks confined within the hadrons. Thus, the most effective way to compute them systematically is to incorporate them directly into large scale numerical lattice QCD computations. Using this approach, we will calculate the consequences of these effects on the masses of protons, neutrons and other hadrons. This will allow us to determine in detail how the two competing effects conspire to give the observed proton-neutron mass difference. We will further use these techniques to determine the masses of the up and down quarks, as well as other important quantities. Ultimately, such a tool will make it possible to describe, ab initio, all of the low energy physics of hadrons and, eventually, of atomic nuclei.
Resource awarded: 55 000 000 core-hours at JUGENE, GCS@Juelich
Pulsation: Peta scale mULti-gridS ocean-ATmosphere coupled simulatIONs
Project leader: Sebastien Masson, Univ. Pierre et Marie Curie, France
Collaborators: Rachid Benshila, CNRS, France / Eric Maisonnave, CERFACS, France / Marie-Alice Foujols, IPSL, France / Arnaud Caubel, Yann Meurdesoif, CEA, France / Xavier Vigouroux, Cyril Mazauric, Bull, France / Christophe Hourdin, MMHN, France / Vincent Echevin, Francois Colas, IRD, France / Laurent Debreu, INRIA, France
Climate modelling has become one of the major technical and scientific challenges of the century. One of the major caveats of climate simulations, which consist of coupling global ocean and atmospheric models, is the limitation in spatial resolution (~100 km) imposed by the high computing cost. This constraint greatly limits the realism of the physical processes parameterized in the model. Small-scale processes can indeed play a key role in the variability of the climate at the global scale through the intrinsic nonlinearity of the system and the positive feedbacks associated with ocean-atmosphere interactions. It is therefore essential to identify and quantify these mechanisms, referred to here as “upscaling” processes, by which small-scale localized errors have a knock-on effect on the global climate.
PRACE supercomputers represent an invaluable opportunity for the climate modeling community to reduce recurrent biases and limit uncertainties in climate simulations and long-term climate change projections. In this project, we propose to take up this scientific challenge and explore new pathways toward a better representation of the multi-scale physics that drives climate variability. Our efforts will concentrate on key upscaling processes taking place in coastal areas characterized by cold surface waters (upwelling), which hold the models' strongest biases in the Tropics at local but also at basin scales. We will focus on two major coastal upwelling regions (the Arabian Sea and the southeastern Pacific) that have great societal impacts, but differ in their characteristics and impacts on climate (El Niño and the Monsoon).
Our approach aims at benefiting from the best of the global and regional modeling approaches through the creation of the first multi-scale ocean-atmosphere coupled modeling platform. Our goal is to introduce embedded high-resolution oceanic and atmospheric zooms in key regions of a global climate model. By following this strategy, based on a 2-way nesting procedure, we will be able to represent major fine-scale oceanic and atmospheric dynamical processes in crucial areas, and allow these regional processes to feed back on the climate at the global scale. To attain this goal, we will combine state-of-the-art and popular models: NEMO for the ocean, WRF for the atmosphere and OASIS as the coupler. WRF and NEMO are among the very few models able to combine high-resolution simulation at global scale with 2-way embedded zoom functionality.
Our methodology is based on a set of NEMO-WRF simulations in a tropical channel configuration with a horizontal resolution of 27 km, which will differ by the integration of 9 km zooms in the ocean or the atmosphere. The comparison of the experiments with and without zooms will allow us to understand the role of upscaling processes and quantify their impact on large-scale tropical climate phenomena (such as El Niño and the Indian monsoon) and on recurrent model biases. The completion of this first multi-scale ocean-atmosphere coupled modeling platform will allow us to explore the impact of spatial resolutions not yet approachable by current climate models. It offers a unique opportunity to significantly improve the next generation of climate simulations.
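The 2-way nesting idea underlying this methodology can be sketched in one dimension: the 9 km zoom refines each 27 km parent cell by a factor of 3, and the zoom solution is fed back onto the parent grid by restriction. This is only an illustrative toy under our own assumptions (a simple cell-average restriction, made-up array values), not the NEMO/WRF/OASIS implementation.

```python
import numpy as np

ratio = 27 // 9  # parent grid at 27 km, zoom at 9 km → refinement factor 3

# Hypothetical coarse field over the zoom region (1-D, for illustration only).
coarse = np.array([1.0, 2.0, 3.0, 4.0])

# Prolongation: each parent cell is split into `ratio` fine cells.
fine = np.repeat(coarse, ratio)

# ... the fine grid would evolve here at 9 km resolution ...

# 2-way feedback: restrict (cell-average) the fine solution onto the parent.
coarse_updated = fine.reshape(-1, ratio).mean(axis=1)
print(coarse_updated)
```

If the fine grid is left unchanged, the restriction returns exactly the parent field; in a real run the evolved zoom solution overwrites the parent values, which is what lets regional processes feed back on the global climate.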
Resource awarded: 4 000 000 core-hours at CURIE Fat Node Partition, GENCI@CEA
Blood Dynamics in heart-sized coronary arteries
Project leader: Simone Melchionna, Sapienza, Consiglio Nazionale delle Ricerche, Italy
Collaborators: Massimo Bernaschi, Mauro Bisson, Sauro Succi, Consiglio Nazionale delle Ricerche, Italy
The goal of the present project is to study blood circulation in a global human coronary system to produce the first systemic view of cardiovascular flow, starting from 10 micrometer size up to the full-heart size of centimeters. Owing to the global nature of the pressure field in extended coronary systems, the project will generate the map of blood starting from its basic constituents, red blood cells and plasma.
The project has two main goals: understanding the physical basis of blood circulation from a bottom-up viewpoint and integrating such knowledge into the understanding of the causes of a common cardiovascular disease. Understanding the physical basis of blood circulation has the objective of looking at how blood organizes itself in irregular geometries and near the endothelium of arteries. We will be looking at phenomena such as shear-enhanced diffusion and its connection with the network Fahraeus-Lindqvist effect, and the distribution of stress at moderate and high Reynolds numbers in relation to hemolysis and turbulence suppression.
From the medical viewpoint, we will study the causes and evolution of atherosclerotic plaques in the heart arteries, in particular how, where and when localized regions of low Endothelial Shear Stress (ESS) appear during the heart cycle; these correspond to stagnation regions that carry maximal risk for plaque formation. The granular nature of blood in proximity of the endothelium induces strong modulations in the ESS distribution.
Both these goals require handling the full, bottom-up description of blood, ranging from micron to centimeter scales. Our modeling of blood is based on a two-level description, treating the plasma as a continuum medium together with the corpuscular presence of red blood cells, overall giving rise to complex blood microrheology. We employ suspended particles that feed back on the local blood flow and between which hydrodynamic forces induce non-trivial long-range correlations. In order to capture the basic features of the coronary system, a pulsatile inflow is employed, with a well-known injection signal from the aortic region, and the associated flow is reproduced for a few heartbeats.
Resource awarded: 50 000 000 core-hours at JUGENE, GCS@Juelich
Branch point motion in star polymers and their mixtures with linear chains
Project leader: Angel Moreno, CSIC-UPV/EHU, Spain
Collaborators: Petra Bacova, Juan Colmenero, UPV/EHU, Spain
Tube models are the basis for the theoretical calculation of stress relaxation, responsible for the viscoelastic properties, in melts of long entangled polymers [McLeish 2002]. The reptation scenario for linear chains is dramatically modified by the introduction of branch points (e.g., in star polymers). Indeed, branch points cannot reptate since no common tube exists for the arms. Instead of the power-law dependence found for linear chains, viscosities and diffusivities depend exponentially on the arm length. It has been proposed that branch points perform hopping-like diffusive steps at times at which one arm retracts to the star center, poking out in a new direction in the entanglement network. Arm retraction involves an enormous entropic loss, leading to the exponential laws above. A mechanism of dynamic tube dilution has been proposed for arm retraction, in which inner segments are not hindered by entanglements with outer segments that relax at much earlier times. This picture described stress relaxation in symmetric stars, but failures were soon revealed. Thus, the diffusivity seems more compatible with the undiluted than with the diluted tube diameter. The model was extended to asymmetric stars, but experiments revealed that weakly entangled short arms produce a much stronger drag on the long backbone than expected. It is generally believed that branch point motion is poorly understood, probably being the primary source of disagreement between theory and experiments. Indeed, there are very few works in the literature observing the branch point motion directly [Zhou 2007, Zamponi 2010]. It is not clear what the prefactor relating the diffusivity to the tube diameter and the arm relaxation time is; the analysis of experiments in asymmetric stars yields very small, unphysical values.
It is possible that the highly constrained motion of the branch point leads to a non-Gaussian character of the van Hove function, with important implications for the interpretation of neutron scattering results. It is not clear how mixing stars with linear chains affects hopping of the branch points. Finally, the increasing interest in branched polymers of arbitrary architectures for technological applications demands the development of predictive tools [Wang 2010, Masubuchi 2011] for their rheological properties and flow behaviour under processing conditions. Again, these tools are based on model assumptions for hierarchical relaxation of the arms and branch point motion, without support from direct observations in experiments or simulations.
With this proposal we aim to shed light on the former questions by providing a detailed and systematic characterization of branch point motion, a feature that has been very scarcely investigated and that is a fundamental ingredient for the development of tube-based models for branched polymers. Without assumptions on transport mechanisms, and by retaining the basic ingredients of excluded volume and chain connectivity, we will perform molecular dynamics simulations of bead-spring melts (Kremer-Grest model) of star polymers and star-linear mixtures. Supercomputer facilities are necessary for a systematic study as a function of several control parameters (length of short and long arms in asymmetric stars, composition of star-linear mixtures…). This is due both to the exponentially long relaxation times of the star arms and to the large molecular weights of entangled polymers, which require large simulation boxes. Fortunately, well-established algorithms [Auhl 2003] allow us to equilibrate chain conformations with low computational effort. This is performed in our institution, leaving the highly demanding production runs to HERMIT. We plan to simulate 8 systems (8 MD runs) corresponding to different values of mixture composition and arm length. Each run will cover up to about 5×10^9 MD steps in simulation boxes of about 100000 particles. The chain relaxation time scales covered by the simulations are of about 0.1 milliseconds, which are relevant experimental scales in entangled polymer systems.
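The scale of this production campaign can be estimated from the figures quoted above. The throughput value used below is an illustrative assumption of ours, not a HERMIT benchmark, so the result is an order-of-magnitude estimate only.

```python
# Order-of-magnitude cost of the campaign described above.
n_runs = 8
n_steps = 5e9        # MD steps per run
n_particles = 1e5    # particles per simulation box

particle_steps = n_runs * n_steps * n_particles   # total particle updates

# Assumed throughput in particle-steps per core-second (illustrative only).
throughput = 1e6
core_hours = particle_steps / throughput / 3600.0
print(f"{particle_steps:.1e} particle-steps ≈ {core_hours:.1e} core-hours")
```

Under this assumption the campaign comes to roughly a million core-hours, the same order of magnitude as the awarded allocation.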
Resource awarded: 3 000 000 core-hours at CURIE Fat Node partition, GENCI@CEA
Exploring unconventional order in a quantum spin liquid via large scale quantum Monte Carlo simulations
Project leader: Alejandro Muramatsu, University of Stuttgart, Germany
Collaborators: Fakher Assaad, Martin Bercx, University of Würzburg, Germany / Stefan Wessel, Thomas Lang, RWTH Aachen University, Germany / Sylvain Capponi, Xavier Plat, Toulouse University, France / Karim Bouadim, Zi Yang Meng, Alexander Moreno, University of Stuttgart, Germany
Among the newly discovered exotic quantum states of matter, like in the high temperature superconducting cuprates and pnictides, the fractional quantum Hall states, supersolidity, topological insulators, or frustrated magnets, quantum spin liquids occupy a prominent place. This fact is due, on the one hand, to their ubiquity in the theoretical description of high temperature superconductors and frustrated quantum magnets, and, on the other hand, to their elusiveness once models proposed to capture such phases are subjected to closer examination. Here, we will focus on possible unconventional types of order in a quantum spin-liquid phase that was recently found by our collaboration for electrons on a two dimensional honeycomb lattice, the structure of graphene, within an intermediately correlated regime at an average density of one electron per site. Our quantum Monte Carlo simulations showed that this unexpected spin liquid emerges between a state described by massless Dirac fermions and an antiferromagnetically ordered Mott insulator. Moreover, we found that this quantum-disordered state is a resonating valence-bond (RVB) liquid, akin to the one proposed for high temperature superconductors. This was in fact the first unbiased determination of an RVB liquid in an electronic system. It is expected that such exotic states of matter lead to unconventional order like topological order, where the ground state displays a degeneracy not due to spontaneous symmetry breaking, but due to states in different topological sectors. The search for topological order constitutes a challenging enterprise since its unambiguous detection requires simulations of systems with linear sizes larger than the correlation length of magnetic excitations, which in our case reaches around 40 unit cells. Our simulations, which need a sizeable memory per node, have until now been carried out on supercomputers (JUROPA at NIC) with allocations through national computer centres.
Such allocations allowed us to reach linear sizes of 18 unit cells, well below the estimated correlation length in the spin-liquid phase. In order to reach the relevant length scale, simulations require at least an order of magnitude more in CPU time than the available contingents in national computer centres, making it necessary to run them on a Tier-0 facility. Simulations reaching such ranges of systems sizes provide the possibility of uncovering for the first time topological properties of a quantum spin-liquid in an electronic system, and hence, close to experimentally accessible set-ups.
While the direct benefits of such simulations will mainly be within condensed matter physics, we expect them to be of a broader interest for fundamental physics, since the underlying low energy physics pertains to that of strongly interacting relativistic massless fermions. The identification of topological quantum states is also of interest in the field of quantum information, where the search for such states is currently very active, since they offer the possibility to defy decoherence in applications related to quantum computing.
Resource awarded: 26 000 000 core-hours at HERMIT, GCS@HLRS
Large-eddy simulations of stratified atmospheric turbulent flows with Meso-NH: application to safety in meteorology and environmental impact of aviation
Project leader: Roberto Paoli, CERFACS, France
Collaborators: Juan Escobar Muñoz, Laboratoire d’Aérologie, France / Alexandre Paci, Jeanne Colin, Thierry Bergot, Valéry Masson, Odile Thouron, CNRM-GAME (URA1357 METEO-FRANCE and CNRS), France / Daniel Cariolle, CERFACS, France
The objective of this project is to exploit massively parallel computing to study some fundamental and not fully understood aspects of turbulent stratified flows that are relevant to aviation: (i) the structure of the turbulent energy cascade in the UTLS (upper troposphere-lower stratosphere) at intermediate scales ranging from O(1 km) to O(1 m); (ii) the full characterization of mountain wave turbulence generated in a large stratified water flume; and (iii) the mechanisms controlling the life cycle of fog in the ABL (atmospheric boundary layer). The computational challenge is represented by the high spatial resolution that is needed to capture the turbulent structures within a large inertial range of homogeneous anisotropic turbulence and in a fully developed turbulent boundary layer. The first two problems are purely dynamical; in the third, microphysics and radiative transfer models are coupled to the dynamical core of the model. We target the convergence of turbulence statistics on computational domains of 1 to 10 billion points, depending on the problem.
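A rough memory estimate illustrates why grids of this size require a Tier-0 machine. The assumption of ten double-precision 3-D prognostic fields below is ours, purely for illustration, and not a Meso-NH storage figure.

```python
# Rough memory footprint of the target grids, assuming (illustratively)
# 10 prognostic 3-D fields stored in double precision (8 bytes each).
bytes_per_point = 8 * 10

footprints_tb = {n: n * bytes_per_point / 1e12 for n in (1e9, 1e10)}
for n_points, tb in footprints_tb.items():
    print(f"{n_points:.0e} points → ~{tb:.2f} TB of field data")
```

Even under these conservative assumptions, the largest grids need on the order of a terabyte of field data resident in memory, before any working arrays or halo copies are counted.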
The code selected for this project is Meso-NH (http://mesonh.aero.obs-mip.fr/mesonh/), the non-hydrostatic mesoscale atmospheric model of the French research community that is currently used by various centers in Europe for different geophysical applications. The model is jointly developed by the Laboratoire d’Aérologie (UMR 5560 OMP/CNRS & UPS Toulouse) and by CNRM-GAME (URA 1357 CNRS/Météo-France).
Resource awarded: 500 000 core-hours at CURIE Fat Node Partition, GENCI@CEA and 21 000 000 core-hours at CURIE Thin Node Partition, GENCI@CEA
Light quark mass dependence of two-hadron energies in Lattice QCD
Project leader: Assumpta Parreño, University of Barcelona, Spain
Collaborators: Martin John Savage, Huey-Wen Lin, University of Washington, USA / Silas Beane, University of New Hampshire, USA / William Detmold, College of William and Mary / Jefferson Laboratory, USA / Kostas Orginos, University of Barcelona, Spain / Emmanuel Chang, Thomas Luu, Lawrence Livermore National Laboratory, USA / Aaron Torok, University of Indiana, USA / André Walker-Loud, Lawrence Berkeley National Lab, USA
A central goal of Nuclear Physics is to obtain a first principles description of the properties and interactions of nuclei from the underlying theory of strong and electroweak interactions. At the theoretical level, the strong interaction is understood in terms of the theory of Quantum Chromodynamics (QCD) that governs the interactions between quarks and gluons. While QCD was developed in the 1970s, it has proven notoriously difficult to describe the low energy hadronic systems important to nuclear physics in terms of quarks and gluons. At short distances (high energies), the coupling between the quarks and gluons, and of the gluons among themselves, is small, and allows for processes to be computed as an asymptotic series in the strong coupling constant. Direct confrontation of experimental measurements with perturbative QCD calculations in the high-energy regime has led to almost universal acceptance of QCD as the theory of the strong interaction. However, as the typical length scale of the process grows, the QCD coupling becomes large, and perturbative calculations fail to converge. At long distances, and hence low energies, the appropriate degrees of freedom are not the quarks and gluons that the QCD action is constructed from, but hadrons, such as pions and nucleons. It is the properties of collections of nucleons, mesons and hyperons that define the core of nuclear physics, a field that, simply put, is the exploration of the non-perturbative regime of QCD. Presently, the only known way to solve QCD in this low-energy regime is numerically, using Lattice QCD (LQCD), and the goal of this proposal is to apply this technique to fundamental calculations for nuclear physics.
Lattice QCD is a non-perturbative formulation of QCD in which a specific volume of Euclidean space-time is discretized to allow for numerical evaluation of the path integral. Quarks reside on the sites of the space-time lattice and gluon fields reside on the links between the sites. Monte Carlo techniques are used to perform the integrals over the fields that define the theory. In LQCD, the various approximations that are made for computational necessity are systematically improvable, and the approach provides a controlled realization of QCD. Effective field theories that take into account the discretization scale allow the requisite extrapolations to be performed with quantifiable uncertainties.
An important example of how LQCD calculations can impact nuclear physics is in the determination of the behavior of hadronic matter at densities away from that of nuclear matter, such as those that occur in the interior of neutron stars. The main theoretical uncertainty in establishing the composition of hadronic matter at the center of these systems, and hence the equation of state, is the interactions between the strange hadrons, such as the kaon and hyperons, and the protons and neutrons. These interactions are poorly known experimentally due to the short lifetime of the strange hadrons. The central goal of our proposal is to reliably calculate these interactions from the underlying quantum field theory of the strong interactions, QCD.
Resource awarded: 30 000 000 core-hours at CURIE Thin Node partition, GENCI@CEA
Structure and evolution of an active region on the Sun
Project leader: Hardi Peter, Max-Planck-Institut für Sonnensystemforschung, Germany
Collaborators: Philippe Bourdin, Sven Bingert, Tijmen van Wettum, Max-Planck-Institut für Sonnensystemforschung, Germany
Cool stars are surrounded by hot coronae, which are heated to several million kelvin. The heating processes, widely proposed to be related to the stellar magnetic field, lead not only to temperatures in the outer atmosphere well in excess of those at the stellar surface, but also result in a highly dynamic response of the plasma, inducing flows and waves. To study these processes the Sun is of pivotal interest, because here we can spatially resolve individual structures in the corona.
Utilizing increasing computing power, we plan to run new numerical experiments which will allow us to study the structure and evolution of an active region in the solar corona. To this end we describe part of the solar corona, i.e. an active region, in a box using magneto-hydrodynamics. The system is driven by convective fluid motions on the solar surface, which carry around the magnetic field lines. This leads to currents in the upper layers which are dissipated, subsequently heating the atmosphere and thus creating the corona.
So far this modelling has been possible only in simplified setups. The proposed simulation will allow for the first time to model the structure and evolution of a solar active region at a spatial resolution (230 km) high enough to resolve most of the driving motions on the surface, while at the same time describing the full extent of the active region (235×235 Mm).
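The quoted numbers directly imply the horizontal extent of the simulation grid (1 Mm = 1000 km); the arithmetic below is only a consistency check on those figures.

```python
# Horizontal grid extent implied by the quoted numbers:
# a 235 x 235 Mm active region at 230 km resolution.
domain_km = 235 * 1000   # 235 Mm per horizontal direction
resolution_km = 230

n_horizontal = domain_km / resolution_km
print(f"≈ {n_horizontal:.0f} grid points per horizontal direction")
```

That is roughly a thousand points in each horizontal direction, i.e. on the order of a million cells per horizontal layer before the vertical extent is counted.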
The results from these simulations will be compared to real observations of the Sun by deriving synthesised observations, i.e. emission line spectra, from the numerical experiments (which is not part of this proposal, as it can be done on a “normal” computer).
Resource awarded: 3 300 000 core-hours at CURIE Thin Node partition, GENCI@CEA and 3 300 000 core-hours at CURIE Fat Node partition, GENCI@CEA
Multicenter cobalt-oxo cores for catalytic water oxidation
Project leader: Simone Piccinin, CNR-IOM SISSA, Italy
Collaborators: Stefano Fabris, CNR-IOM SISSA, Italy
Replacing fossil fuels with renewable energy sources like solar and wind requires the development of novel strategies for energy storage, to cope with their diurnal variability. The production of fuels, e.g. hydrogen, using renewable energy is a possible way to achieve this goal. A serious bottleneck toward the development of artificial photosynthetic devices is the lack of efficient, stable and cheap catalysts to perform one of the key steps of this electrochemical process, namely the oxidation of water to molecular oxygen: 2H2O -> O2 + 4H+ + 4e-. A new class of inorganic compounds, based on multicenter metal-oxo cores capped by polyoxometalate ligands, has recently been shown to efficiently catalyze this reaction without any deactivation. Particularly attractive are the cobalt-oxo complexes, since they are based on earth-abundant elements and parallel the catalytic efficiency of the best materials based on precious metal oxides. Here we propose to perform large scale density functional theory (DFT) simulations to investigate the reaction mechanism and the thermodynamics of the catalytic cycle, and to highlight the correlations between the reaction mechanism and the local structure of the active sites. We propose to identify common descriptors for the catalytic activity of this class of materials, using the computed thermodynamics of the reaction cycle. In particular, we will identify the overpotential-determining reaction step, compare the theoretical predictions for this system with those obtained for other metal-oxo cores, and establish the accuracy of these indicators for this class of catalysts. The oxygen-oxygen bond formation, generally assumed to be the rate determining step in this reaction, will be studied using metadynamics simulations. These simulations will be performed in solution, explicitly describing the water molecules at the DFT level, since they participate in the reaction cycle.
Hybrid functionals for exchange and correlation are necessary to properly describe the change of oxidation state of Co, making these large scale simulations extremely demanding and thus requiring the large computational infrastructure provided by PRACE.
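The overpotential-determining step mentioned above is commonly extracted from the computed free-energy changes of the four proton-electron transfer steps, in the spirit of the computational hydrogen electrode approach. The sketch below uses hypothetical ΔG values in place of the DFT results the project would compute.

```python
# Thermodynamic overpotential descriptor for the 4-electron water-oxidation
# cycle, 2H2O -> O2 + 4H+ + 4e-. The four step free energies are hypothetical
# placeholders, NOT results of this project.
E_EQ = 1.23  # V, equilibrium potential of water oxidation

def overpotential(dg_steps_eV):
    """Largest per-electron free-energy step minus the equilibrium potential.

    For an ideal catalyst every step would cost exactly 1.23 eV and the
    overpotential would vanish.
    """
    assert len(dg_steps_eV) == 4
    return max(dg_steps_eV) - E_EQ

dg = [1.6, 1.1, 1.8, 0.42]  # hypothetical ΔG_i in eV, summing to 4 x 1.23
eta = overpotential(dg)
print(f"overpotential-determining step: ΔG = {max(dg)} eV, η = {eta:.2f} V")
```

Comparing this single number across cobalt-oxo and other metal-oxo cores is what makes it useful as a common activity descriptor.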
Resource awarded: 4 816 000 core-hours at CURIE Fat Node Partition, GENCI@CEA
Structure of turbulence in supersonic boundary layers at high Reynolds number
Project leader: Sergio Pirozzoli, Sapienza, University of Rome, Italy
Collaborators: Matteo Bernardini, Sapienza, University of Rome, Italy
The proposed research aims at establishing the effect of high Reynolds numbers on the structure of supersonic turbulent boundary layers through Direct Numerical Simulation (DNS) of the compressible Navier-Stokes equations. We aim at probing a range of Reynolds numbers so far accessed only by experiments, and out of reach of simulations owing to the huge computational resources involved. The added value over experiments is that DNS gives direct access to the full range of scales of the turbulent motion, without any uncertainty related to probe size, and especially to the near-wall region, which is critical for the mechanisms of turbulence self-sustainment. The flow parameters selected for the study correspond to a free-stream Mach number M = 2 and a friction Reynolds number Re_tau = 4000, which is well beyond the range explored by DNS so far. The choice of supersonic flow conditions will not affect the physical findings of the project, since under these conditions the boundary layer structure is essentially incompressible in nature. The simulation will be performed by means of an in-house finite-difference code, which has the nice feature of being both high-order accurate (6th order) and, more importantly, capable of discretely preserving kinetic energy in the inviscid, time-continuous limit. Results of simulations at lower Reynolds numbers (still at the edge of current computational capabilities) have already been published in major journals, which makes the proposed research, while extremely challenging from a computational standpoint, highly likely to be feasible with the granted resources.
We expect the research to have significant impact on our understanding of the physics of boundary layers at high Reynolds number, which are expected to differ from their low-Re counterparts, since large-scale outer-layer structures become dynamically significant, imparting significant influence (in the form of imprint and modulation) on the near-wall layer. We also expect that the database resulting from the project, which will be made publicly available on a website, may have a significant impact on the calibration of turbulence models in the supersonic regime.
Resource awarded: 50 000 000 core-hours at JUGENE, GCS@Juelich
First principles design of a biocatalyst for water oxidation
Project leader: Carme Rovira, University of Barcelona, Spain
Collaborators: Agustí LLedós, Pietro Vidossich, David Balcells, Universitat Autònoma de Barcelona, Spain / Víctor Rojas, Parc Científic de Barcelona, Spain
The development of an artificial photosynthetic system to produce fuel, thus storing solar energy as chemical energy, is a major objective of current research. One of the major challenges in this field is the efficient catalytic oxidation of water to dioxygen, with current catalysts showing limited reaction rates and stability. It is the objective of the present research proposal to design a biocatalyst capable of performing water oxidation efficiently, overcoming the limitations in terms of toxicity and cost of current (non-biological) catalysts and thus suitable for use on a large scale. Specifically, by investigating structure-function relationships in selected heme enzymes we will design an optimal active-site structure for water oxidation. Our approach, supported by recent experimental evidence, is based on the use of first principles calculations (Car-Parrinello molecular dynamics within the QM/MM approach, combined with the string path-finding algorithm) to compare the reaction pathways of selected heme enzymes, elucidating the minimum structural features that the biocatalyst should have to achieve oxidation of water and iteratively constructing an optimal active site. Our previous experience on the mechanistic details of organometallic catalysts, the reactivity of peroxo species in heme enzymes and the use of ab initio molecular dynamics provides us with a good starting point for this project, which might have an impact on the search for sustainable solutions to the high demand for energy our society is facing.
Resource awarded: 37 500 000 core-hours at CURIE Thin Node partition, GENCI@CEA
Meteorites on the Computer
Project leader: Laurent Soulard, Commissariat a l’Energie Atomique et aux Energies Alternatives, France
Collaborators: Jean Clerouin, Nicolas Pineau, Muriel Delaveau, Commissariat a l’Energie Atomique et aux Energies Alternatives, France / Philippe Gillet, Jean-Francois Molinari, Ecole Polytechnique Federale de Lausanne, Switzerland
Meteorite analysis provides a very interesting way to explore planetary history. Shocked meteorites, depending upon their origin, are a unique source of information for understanding the dynamics of impacts at different stages of the evolution of the solar system. Nevertheless, the internal structure of these meteorites is still unclear because it results from the propagation of successive strong shocks and release waves in a complex material. For instance, there is evidence that some meteorites arriving on Earth were in fact ejected from the Martian surface by a strong shock. A careful analysis of these meteorites reveals that some of the Martian atmosphere remains trapped in the melted pockets created by the shock. This mechanism is still poorly understood because very complex interactions of the shock with internal defects (cracks, voids and other inhomogeneities) are involved. Another example is the observation of unexpected small-sized high-pressure minerals or organic assemblages (stishovite, diamond, unknown carbon phases) providing an estimate of the shock pressure and temperature within the veins.
The objective of this project is to understand the microscopic mechanisms at the origin of the final structure of shocked meteorites. Recent work has shown that classical molecular dynamics (MD) is a powerful tool for investigating the interaction of a shock wave with material structures, allowing for the validation of the models used in macroscopic-scale codes. Nevertheless, an MD simulation must use enough atoms to be representative, because a shock wave is a collective process and the simulated defect structures must be realistic. Moreover, the interatomic interactions must reproduce the material properties over a large thermodynamic range (material at rest, shocked state, very fast release), leading to rather complicated (and thus costly) formulations.
We therefore propose large-scale MD simulations (i) to explore the trapping process of atmospheric gas, (ii) to look at the formation of unknown phases of carbon trapped in veins and (iii) to check global thermodynamic models connecting shock pressure, porosity and temperature (see reference 1). The first two parts are very similar calculations; only the vein material is changed. These simulations require a large number of atoms (about 10^8-10^9) and well-adapted potential functions, especially for carbon. The Stamp MD code of CEA includes well-proven methods to simulate shock waves and a sophisticated potential function for carbon (LCBOPII). This code is highly parallelized and was tested on up to 20,000 cores on the TERA-100 petaflop computer of CEA-DAM. A simplified, small-size geometry (a single spherical pore including gas, about 10^8 atoms) together with a general procedure was successfully tested on the PRACE Curie computer (Preparatory Access 2010PA0340, see attached file). To reach the relevant scale and structural complexity together with complicated materials, a Tier-0 machine is now needed in the framework of a PRACE proposal.
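For orientation, the kind of global thermodynamic model mentioned in point (iii) rests on the Rankine-Hugoniot jump conditions: with the common linear shock-velocity fit Us = c0 + s·up, mass and momentum conservation give the shock pressure directly. A minimal sketch follows; the quartz-like parameter values in the comment are illustrative round numbers, not the project's fitted data:

```python
def hugoniot_pressure(rho0, c0, s, up):
    """Shock pressure (above ambient) from the Rankine-Hugoniot momentum
    jump P = rho0 * Us * up, using a linear fit Us = c0 + s * up.

    rho0 : initial density [kg/m^3]
    c0   : bulk sound speed [m/s]
    s    : dimensionless slope of the Us(up) fit
    up   : particle velocity behind the shock front [m/s]
    """
    us = c0 + s * up       # shock front velocity
    return rho0 * us * up  # pressure jump in Pa

# Illustrative quartz-like parameters at up = 1 km/s:
# hugoniot_pressure(2650.0, 3700.0, 1.8, 1000.0)
# = 2650 * (3700 + 1800) * 1000 Pa, i.e. roughly 14.6 GPa
```

In the MD simulations this relation runs the other way: shock and particle velocities are measured in the atomistic run, and the resulting P-Us-up points test whether the macroscopic model (including its porosity corrections) holds for the complex vein materials.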
Resource awarded: 10 000 000 core-hours at CURIE Thin Node partition, GENCI@CEA
Large scale blood flow simulations: bridging scales by smart coarse graining
Project leader: Federico Toschi, Eindhoven University of Technology, The Netherlands
Collaborators: Jens Harting, Florian Janoschek, Prasad Perlekar, Ariel Narvaez, Stefan Frijters, Badr Kaoui, Timm Krüger, Eindhoven University of Technology, The Netherlands
Simulation of human blood flow is a true multi-scale problem: to first approximation, blood may be treated as a dense suspension of highly deformable red blood cells (RBCs) in plasma. The particulate nature requires resolving the complex properties of individual cells. On the other hand, in realistic vessel geometries typical length scales vary over several orders of magnitude. Current models either treat blood as a homogeneous fluid and cannot track particulate effects, or describe a relatively small number of RBCs with high resolution but are not able to reach relevant time and length scales.
We developed a highly efficient particulate model that allows the simulation of millions of cells while also properly describing the plasma flow field. Our method combines a lattice Boltzmann solver to account for hydrodynamic interactions with a molecular dynamics code with anisotropic model potentials to cover the more complex short-range interactions of the cells. The aim of our large-scale simulations is to study phenomena due to clogging of cells and blocking of vessel branches, which are relevant in diseases such as hypoxic ischemia or thrombosis.
The current proposal focuses on additional studies of the collective behaviour of tumbling and tank-treading red blood cells. Further, we plan to improve our cell model by adding a limited description of deformability and tank-treading, and to compare our model to simulations of fully resolved deformable red blood cells modeled by a combined lattice Boltzmann/immersed boundary method. The aim of this work is to investigate the applicability of on-the-fly resolution adaptation for blood flow simulations: depending on the local degree of confinement, it should be feasible to switch the degree of coarse graining between a model with highly resolved single cells, our coarse-grained model, or a pure continuum approach. Furthermore, the high-resolution simulations can be used to further improve the cell-cell interaction models of the low-resolution model.
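To make the continuum end of such a resolution hierarchy concrete, the sketch below implements one collide-and-stream step of a standard D2Q9 BGK lattice Boltzmann solver on a periodic grid. This is the generic textbook scheme underlying that class of solvers, not the collaboration's coupled lattice Boltzmann/molecular dynamics code:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order truncated Maxwell-Boltzmann equilibrium for D2Q9."""
    cu = 3.0 * (C[:, 0, None, None] * ux + C[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return W[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

def lbm_step(f, tau=0.8):
    """One BGK collide-and-stream step on a fully periodic grid.
    f has shape (9, nx, ny); tau is the relaxation time."""
    # Macroscopic moments: density and velocity
    rho = f.sum(axis=0)
    ux = (f * C[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * C[:, 1, None, None]).sum(axis=0) / rho
    # Collision: relax toward local equilibrium
    f = f - (f - equilibrium(rho, ux, uy)) / tau
    # Streaming: shift each population along its lattice velocity
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], C[i, 0], axis=0), C[i, 1], axis=1)
    return f
```

A uniform equilibrium state is a fixed point of this update and mass is conserved exactly, which makes the scheme easy to sanity-check; the particulate model adds the cell-cell and cell-fluid coupling on top of such a fluid solver.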
Resource awarded: 43 000 000 core-hours at JUGENE, GCS@Juelich
Joint Weather and Climate High-Resolution Global Modelling: Future Weathers and their Risks
Project leader: Pier Luigi Vidale, National Centre for Atmospheric Science, UK
Collaborators: Malcolm Roberts, Matthew Mizielinski, Met Office Hadley Centre, UK / Reinhard Schiemann, Marie-Estelle Demory, Jane Strachan, National Centre for Atmospheric Science, UK
Overall scientific aim: to increase the fidelity of global climate simulations and our understanding of weather and climate risk, by representing fundamental weather and climate processes more completely. This will test and enhance our confidence in projections of climate change, including extremes.
Resolving weather features is vital if global climate models are to produce realistic simulations of the mean climate, variability and extremes, particularly at regional and local scales. High resolution has been shown to improve the simulation of high-impact events, such as tropical cyclones, European blocking and the associated European summer (2003 and 2010) and winter (2009 and 2010) extremes. Urgent interest lies in the potential of high-resolution models to help the scientific community understand potentially connected high-impact events, such as the 2005 hurricane season and drought in the Amazon, the 2010 Russian heatwave and the Pakistan floods, and the influence of retreating Arctic sea-ice on European climate. The use of relatively low-resolution models hinders the investigation of such teleconnections, due to poor simulation of key processes, such as blocking frequency and organised convection.
We require high-resolution Global Climate Model experiments to address our main question: can we reproduce these related events and their (changing) risk to society, and gain an understanding of how and why they are causally connected globally?
Specific scientific questions:
- What processes emerge at high resolution and what is their role in the climate system; are they causally interconnected and what is their combined impact?
- Is there evidence of a regime change at some crucial resolution when simulating aspects of the climate system?
- Is there evidence of emerging cumulative risk and/or of highly correlated risk?
We propose to perform process-based detection and attribution of high-impact weather and climate events and gain an understanding of how they are causally connected, by running ensembles of high-resolution climate simulations as summarised below.
- 5-member ensemble integrations of the Met Office Hadley Centre global climate model HadGEM3 at 25km horizontal resolution with:
- present day forcings;
- time-slice (climate change) forcings.
- Utilising state-of-the-art OSTIA 1/20-degree SST and sea-ice forcings for the period 1985 to present;
- Model atmosphere extends to 85km, with increased resolution in the stratosphere, crucial for representation of remote dynamical forcings.
The HadGEM3 model has never been run extensively in climate mode at this resolution, except for our DEISA tests, but essentially the same model is currently used in daily global weather forecasting. The proposed integrations are fully coordinated with, and an expansion of, the existing fully traceable chain of model resolutions from 270km to 40km, which will provide a hugely valuable resource for current and future climate analysis and process understanding. This hierarchy of complementary simulations will be submitted to the IPCC AR5/CMIP5 project, allowing future PRACE results to be analysed with reference to an existing, high-impact international climate modelling initiative. We aim to continue developing our modelling capability and to target even higher global resolution, all the way to 12km, which is not even envisioned for global weather forecasting before 2015.
Resource awarded: 144 565 862 core-hours at HERMIT, GCS@HLRS