Find below the allocations of the PRACE 5th Project Access Call (in alphabetical order of the name of the project leader).
MHDTURB – Nonuniversal statistics in Magnetohydrodynamic turbulence
Project leader: Aleksandros Alexakis, École Normale Supérieure, France
Collaborators: Vassilios Dallas, École Normale Supérieure, France
Abstract: The nature of interactions between different scales in magnetohydrodynamic (MHD) turbulence is paramount for the understanding of flows occurring in fusion plasmas, the geospace, the heliosphere and the interstellar medium, and of their influence on, for example, cosmic ray propagation and solar-terrestrial interactions. The different degrees of nonuniversality observed in present studies, and the existence of nonlocal processes in MHD turbulent flows, call for a discussion of the validity of universality in MHD turbulence. We aim to gain insight into this problem by performing high-fidelity computations of MHD turbulence on massively parallel computers. We focus on the origins of nonuniversal statistics that are hidden in the complex dynamics of MHD turbulence, such as turbulent cascades, scale interactions, energy transfer and dissipation. We examine the long-range interactions, which are associated with departures from universality and can lead to the persistence of small-scale anisotropies and the development of intermittency.
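For reference, the equations that such direct numerical simulations solve are the standard incompressible MHD equations (written here in Alfvén velocity units; this textbook form is added for orientation and is not quoted from the proposal):

```latex
\begin{align*}
\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  &= -\nabla p + (\mathbf{b}\cdot\nabla)\mathbf{b} + \nu \nabla^2 \mathbf{u} + \mathbf{f},\\
\partial_t \mathbf{b} + (\mathbf{u}\cdot\nabla)\mathbf{b}
  &= (\mathbf{b}\cdot\nabla)\mathbf{u} + \eta \nabla^2 \mathbf{b},\\
\nabla\cdot\mathbf{u} &= \nabla\cdot\mathbf{b} = 0,
\end{align*}
```

where u is the velocity, b the magnetic field (in velocity units), ν the viscosity, η the magnetic diffusivity and f an external forcing. The nonlinear coupling terms u·∇b and b·∇u are the source of the cascades and scale interactions discussed above.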
Evidence for a lack of universality was reported only three years ago for freely decaying MHD turbulence, whereas for the forced case it is still controversial. Moreover, theoretical support for these recent results does not yet exist. This proposal therefore combines, in a timely fashion, computationally demanding simulations and theoretical approaches to investigate nonuniversal statistics in both forced and freely decaying MHD turbulence. We believe that high-resolution computations are unavoidable in order to account for the causes of the lack of universality in the statistics of MHD turbulence.
This project is expected to considerably enhance European scientific excellence: its state-of-the-art approach will provide deeper knowledge of fundamental issues in fusion and astrophysical plasmas, a scientific topic vital for Europe. These are challenges that Europe has faced with increasing concern in recent years and in which significant resources are being invested.
Resource awarded: 8 500 000 core-hours on JUQUEEN (GAUSS@FZJ, Germany)
Pushing the Strong Interaction past its Breaking Point Phase II: Towards Quantitative Understanding of the Quark-Gluon Plasma.
Project leader: Chris Allton, Swansea University, United Kingdom
Collaborators: Gert Aarts, Swansea University, United Kingdom | Pietro Giudice, Swansea University, United Kingdom | Simon Hands, Swansea University, United Kingdom | Michael Peardon, Trinity College Dublin, Ireland | Sinead Ryan, Trinity College Dublin, Ireland | Jonivar Skullerud, National University of Ireland, Ireland
Abstract: There are four fundamental forces that describe all known interactions in the universe: gravity; electromagnetism; the weak interaction (which powers the sun and describes most radioactivity); and, finally, the strong interaction – the topic of this research. The strong interaction binds quarks together in triplets into protons and neutrons, which in turn form the nuclei of atoms and therefore make up more than 99% of all the known matter in the universe. If there were no strong interaction, these quarks would fly apart and there would be no nuclei, and therefore no atoms, molecules, DNA, humans, planets, etc.
Although the strong interaction is normally an incredibly strongly binding force (the force between quarks inside protons is the weight of three elephants!), in extreme conditions it undergoes a substantial change in character. Instead of holding quarks together, it becomes considerably weaker, and quarks can fly apart and become “free”. This new phase of matter is called the “quark-gluon plasma”. It occurs at extreme temperatures: hotter than a trillion degrees Celsius. These conditions obviously do not normally occur – even the core of the sun is around a hundred thousand times cooler! However, these temperatures do occur naturally in at least two places: inside neutron stars (the small but very dense remnants of supernovae) and just after the Big Bang (when the universe was a much hotter, smaller and denser place than it is today).
As well as in these situations in nature, physicists can re-create a mini-version of the quark-gluon plasma by colliding large nuclei (like gold) together in a particle accelerator at virtually the speed of light. This experiment is currently being performed at the Large Hadron Collider at CERN. Because each nucleus is incredibly small (100 billion of them side by side would span a distance of 1 mm), the region of quark-gluon plasma created is correspondingly small. The plasma “fireball” also expands and cools incredibly rapidly, so it quickly returns to the normal state of matter where quarks are tightly bound. For these reasons, it is incredibly difficult to get any information about the plasma phase of matter.
To understand the processes occurring inside the fireball, physicists need to know its properties such as viscosity, pressure and energy density. It is also important to know at which temperature the quarks inside protons and other particles become unbound and free. With this information, it is possible to calculate how fast the fireball expands and cools, and what mixture of particles will fly out of the fireball and be observed by detectors in the experiment.
This research project will use supercomputers to simulate the strong interaction in the quark-gluon phase. We will find the temperature at which quarks become unbound, and calculate some of the fundamental physical properties of the plasma, such as its conductivity and viscosity. These quantities can then be used as inputs into the theoretical models which will enable us to understand the quark-gluon plasma, i.e. the strong interaction past its breaking point.
Resource awarded: 17 500 000 core-hours on FERMI (Cineca, Italy)
Carbon dioxide capture and release in amine solutions: Learning and understanding from large-scale ab initio simulations
Project leader: Wanda Andreoni, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Collaborators: Fabio Pietrucci, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland | Gregoire Gallet, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland | Changru Ma, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Abstract: The disastrous impact of anthropogenic carbon dioxide (CO2) emissions on the environment is well known. The most mature technology for post-combustion CO2 capture, currently in use in the chemical industry, exploits a cyclic process in which CO2 is selectively and reversibly absorbed in an aqueous amine solution. However, the operating costs are still too high to allow large-scale implementation. A large empirical effort is ongoing worldwide, primarily to reduce the high energy penalty required for amine regeneration and to increase the rate of CO2 absorption into the solvent. There is a strong need for a quantitative characterization of the chemical reactions in all these processes and for a fundamental understanding of the microscopic mechanisms involved. Computer molecular simulations have the potential to assist in the selection and optimization of the best engineering solutions by providing new insights into the relevant chemical processes and, at least in part, by delivering the quantitative information needed. In order to obtain predictive outcomes, however, some requirements are mandatory: the use of realistic (large-scale) models and reliable methods, as well as continuous control of the accuracy of the computational algorithms and of the analysis procedures. We will investigate the reference amine solution of industrial applications (in composition and concentration) with the aim of identifying the physico-chemical factors influencing the thermodynamics and kinetics of the key processes for the capture and release of carbon dioxide. Our research benefits from frontier simulation methods and advanced software. In particular, it exploits the power of ab initio molecular dynamics, aided by the accelerated sampling of rare events provided by metadynamics techniques. Our calculations will rely on the CPMD code, which has the highest performance worldwide for this type of calculation on Blue Gene architectures.
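For context, the reversible absorption chemistry for primary amines proceeds through the textbook carbamate and bicarbonate routes (standard reactions for amines such as monoethanolamine; the proposal itself does not specify the reaction scheme):

```latex
\begin{align*}
\mathrm{CO_2 + 2\,RNH_2} &\;\rightleftharpoons\; \mathrm{RNHCOO^- + RNH_3^+} &&\text{(carbamate route)}\\
\mathrm{CO_2 + RNH_2 + H_2O} &\;\rightleftharpoons\; \mathrm{HCO_3^- + RNH_3^+} &&\text{(bicarbonate route)}
\end{align*}
```

Heating the solvent shifts these equilibria back to the left, releasing the CO2 and regenerating the amine; this regeneration step is the source of the energy penalty mentioned above.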
Resource awarded: 59 800 000 core-hours on JUQUEEN (GAUSS@FZJ, Germany)
SIBEL1 – Cosmological Simulations Beyond Lambda-CDM
Project leader: Marco Baldi, Alma Mater University of Bologna, Italy
Collaborators: Lauro Moscardini, Alma Mater University of Bologna, Italy | Federico Marulli, Alma Mater University of Bologna, Italy | Carmelita Carbone, Alma Mater University of Bologna, Italy | Carlo Giocoli, Alma Mater University of Bologna, Italy | Stefano Borgani, INAF Trieste, Italy | Matteo Viel, INAF Trieste, Italy | Klaus Dolag, Ludwig-Maximilians University Munich, Germany | Tommaso Giannantonio, Ludwig-Maximilians University Munich, Germany | Giuseppe Murante, INAF Torino, Italy | Weiguang Cui, INAF Trieste, Italy
Abstract: The nature of the Dark Sector of the Universe is presently still completely unknown and represents one of the most intriguing open problems of modern physics, motivating a large number of ambitious and highly demanding experimental and observational initiatives worldwide. The present research project belongs to this general context, as it is part of a wide and long-term scientific effort of the European astrophysical community in preparation for the satellite mission Euclid, which has recently been approved by the European Space Agency (with launch expected in 2019). The Euclid mission has as its primary goal the investigation of the mysterious Dark Energy field that permeates the Universe and sources its observed accelerated expansion. All the scientists participating in this project are active members of the Euclid Consortium and hold coordination responsibilities in several areas of the collaboration. In particular, the P.I. of the project, Dr. Marco Baldi, is a member of the Theory Working Group and of the Cosmological Simulations Working Group, and is Coordinator for the development of “Numerical simulations of non-standard cosmological models”. Our project aims to start the complex and demanding task of performing such cosmological N-body simulations by selecting two specific realizations of non-standard cosmologies among the several selected within the Euclid collaboration for which suitable N-body algorithms have already been developed and sufficiently tested in the last few years. In particular, we will focus on interacting Dark Energy models and on primordial non-Gaussianity by running large and detailed cosmological simulations in the context of these scenarios, significantly extending previous works both in terms of the range of models investigated and of the numerical accuracy of the simulations.
Furthermore, we aim to explore possible degeneracies between these two non-standard extensions of the concordance cosmological model by running a series of combined simulations that will allow us to quantify the realistic constraining power of various observational probes, such as the future Euclid satellite, on the parameters that characterize Dark Energy interactions and primordial non-Gaussianity. The latter approach has never been pursued before and represents a highly innovative feature of the present research project. We expect to run at least four large N-body simulations for interacting Dark Energy models and at least four for non-Gaussian scenarios, for a total of eight large production runs. Additionally, we will run a significant number of smaller simulations to sample the parameter space of the combined model in order to explore the possible degeneracy of the two non-standard scenarios. For such a challenging numerical program we require about 15 million CPU hours on a top-level HPC infrastructure such as the new FERMI Blue Gene/Q machine at Cineca. This total computational budget is dominated by a few extremely large simulations, which will then be complemented by a series of smaller runs aimed at sampling the parameter space of the combined model; it also includes all the basic post-processing analysis envisaged for the scientific exploitation of the simulations by the Euclid community.
Resource awarded: 8 400 000 core-hours on MareNostrum (BSC, Spain)
HRPIPE – Direct Numerical Simulation of pipe flow at high Reynolds numbers
Project leader: Bendiks Boersma, Delft University of Technology, The Netherlands
Collaborators: Mathieu Pourquie, Delft University of Technology, The Netherlands | Rene Pecnik, Delft University of Technology, The Netherlands
Abstract: From an engineering point of view, turbulent pipe flow is one of the most important flow geometries because of its wide range of technical applications. Although many engineering problems involving pipe flows can be solved by simple engineering correlations or by turbulence models, there is considerable fundamental interest in turbulent pipe flow. One of the open questions is the scaling of turbulence statistics in pipe flows. For instance, it has been argued in the past that the peak of the axial root-mean-square (rms) value of the turbulent fluctuations is nearly constant and thus independent of the Reynolds number. Furthermore, there is some experimental indication that at higher Reynolds numbers long meandering structures are generated. Until now, the origin of these structures has been unknown. Better knowledge of turbulence in pipe flows will help us to develop (control) techniques to decrease turbulent skin friction and to optimize turbulent heat and mass transfer.
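The scaling question above is conventionally posed in inner (wall) units; the standard definitions, added here for reference and not quoted from the proposal, are:

```latex
u_\tau = \sqrt{\tau_w/\rho}, \qquad
u^+ = \frac{u}{u_\tau}, \qquad
y^+ = \frac{y\,u_\tau}{\nu}, \qquad
Re_\tau = \frac{u_\tau R}{\nu},
```

where τ_w is the wall shear stress, ρ the density, ν the kinematic viscosity and R the pipe radius. The debated issue is whether the near-wall peak of the axial rms fluctuation, expressed in units of u_τ, remains constant as Re_τ grows, or rises slowly with the Reynolds number.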
Resource awarded: 12 400 000 core-hours on Hermit (GAUSS@HLRS, Germany)
SIMAC – Simulation of ignition mechanisms in annular multi-injector combustors and comparison with experiments
Project leader: Matthieu Boileau, CNRS, France
Collaborators: Sébastien Candel, CNRS, France | Thomas Schmitt, CNRS, France | Ronan Vicquelin, CNRS, France | Bénédicte Cuenot, CERFACS, France | Gabriel Staffelbach, CERFACS, France | Eleonore Riber, CERFACS, France | Florent Duchaine, CERFACS, France
Abstract: The aim of the proposed project is to perform large eddy simulations of ignition in a multi-injector annular combustor. This topic is of fundamental interest for combustion science and has important practical applications for the design of innovative industrial gas turbines and aero-engines, in which ignition is a critical phase. For the first time, full-burner ignition simulations will be validated by quantitative comparison with measurements from ignition tests in a transparent-wall combustor (MICCA). This facility has recently been set up and operated to provide a detailed and previously unavailable view of the ignition process. The project brings together two well-known teams, EM2C and CERFACS, who have collaborated over many years and are currently successfully completing another PRACE project.
Resource awarded: 15 000 000 core-hours on Curie TN (GENCI@CEA, France)
Heavy ion phenomenology from lattice simulations
Project leader: Szabolcs Borsanyi, Bergische Universität Wuppertal, Germany
Collaborators: Claudia Ratti, University of Torino, Italy | Kalman Szabo, Bergische Universität Wuppertal, Germany | Zoltan Fodor, Bergische Universität Wuppertal, Germany | Stefan Krieg, Forschungszentrum Juelich, Germany
Abstract: In this project we calculate the theoretical predictions of the Standard Model for the fluctuations of charge carriers in heavy ion collision experiments at the Large Hadron Collider. We determine the non-Gaussianity of these fluctuations, which is among the most relevant observables that mark the formation of the quark-gluon plasma phase. In this way we provide a theoretical description of the re-hadronization of the produced plasma, namely the instant when the detectable particles in the experiment’s output are created. To this end we use ab initio lattice simulations with high statistics to determine the response of the system to small variations in the quark chemical potential. By surveying the fluctuations over a temperature range spanning the plasma and hadron phases, we will be in a position to assess the range of applicability of the most widely used thermodynamic model, the Hadron Resonance Gas. Through these efforts, large-scale simulations can provide a point of direct comparison between fundamental theory, model and experiment.
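The “response of the system to small variations in the quark chemical potential” mentioned above is conventionally expressed through generalized susceptibilities of conserved charges (standard lattice definitions, added for reference):

```latex
\chi_n^{B} \;=\; \left.\frac{\partial^{\,n}\,(p/T^{4})}{\partial(\mu_{B}/T)^{n}}\right|_{\mu_{B}=0},
\qquad
\kappa\sigma^{2} \;=\; \frac{\chi_4^{B}}{\chi_2^{B}},
```

where p is the pressure, T the temperature and μ_B the baryon chemical potential. The ratio χ4/χ2 quantifies the non-Gaussianity (kurtosis) of baryon-number fluctuations and can be compared directly both with heavy-ion data and with Hadron Resonance Gas predictions.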
Resource awarded: 91 791 360 core-hours on JUQUEEN (GAUSS@FZJ, Germany)
JNFLAC – Jet-noise reduction by fluidic active control
Project leader: Jean-François Boussuge, CERFACS, France
Collaborators: Hughes Deniau, CERFACS, France | Marc Montagnac, CERFACS, France | Michel Gazaix, Onera, France | Christophe Bogey, CNRS, France
Abstract: In the take-off configuration, the jet noise induced by the engine represents the main part of the noise radiated by a civil aircraft. Innovative devices such as fluidic systems are therefore being investigated in order to reduce this noise. Different devices have been studied experimentally, such as solid chevrons or micro-jets, leading to a reduction of about 2-3 dB in jet noise. While configurations with solid chevrons have been widely studied, few numerical simulations deal with micro-jet configurations, since these require far more demanding computations. This kind of technology acts on the propulsive jet by increasing mixing between the primary jet flow and the micro-jet flows. However, the physical mechanisms related to this control method are still not fully understood. Because of the strong impact of turbulence on the phenomena under study, the numerical simulations are based on the Large Eddy Simulation method to predict the near acoustic field. The Ffowcs Williams-Hawkings analogy is then used for far-field predictions. The objective of the present study is to explain the physical phenomena responsible for the noise reduction.
Resource awarded: 15 000 000 core-hours on Curie TN (GENCI@CEA, France)
Simulations of Turbulent, Active and Rotating Stars and of their Environment (STARS-E)
Project leader: Allan Sacha Brun, CEA-Saclay, France
Collaborators: Antoine Strugarek, CEA-Saclay, France | Rui Pinto, CEA-Saclay, France | Sean Matt, CEA-Saclay, France | Nicolas Bessolaz, CEA-Saclay, France | Olivier Do Cao, CEA-Saclay, France | Lucie Alvan, CEA-Saclay, France
Abstract: The multi-year STARS-E project aims at modelling, on massively parallel computers, in a self-consistent and three-dimensional (3-D) way, the complex time-dependent and nonlinear processes operating in the Sun, the stars and their environment.
The project is split into two parts:
A) Simulations of rotating convection and dynamo in solar-like stars
B) Simulations of star-disk-wind interactions
We seek to understand how solar-like stars, with the Sun as an archetype, rotate, generate their magnetic fields and sustain cyclic magnetic activity. We also seek to understand the rotational history of such stars from the early (T-Tauri) phase until the age of the Sun, through both coupling to an accretion disk and braking by a stellar wind. We will use state-of-the-art 3-D MHD high-performance numerical simulations to model solar-like stars and their environment using the MPI ASH and PLUTO codes.
Resource awarded: 3 000 000 core-hours on Curie FN, and 12 000 000 core-hours on Curie TN (GENCI@CEA, France)
MetaHMGB1 – Binding of the HMGB1 protein to platinated DNA: a metadynamics study
Project leader: Paolo Carloni, German Research School for Simulation Sciences GmbH, Germany
Collaborators: Emiliano Ippoliti, German Research School for Simulation Sciences GmbH, Germany | Trung Hai Nguyen, German Research School for Simulation Sciences GmbH, Germany | Giulia Rossetti, Institute for Research in Biomedicine and Barcelona Supercomputing Center Joint Research Program on Computational Biology, Spain
Abstract: The approval by the FDA of the drug cisplatin (cis-diamminedichloridoplatinum(II)) revolutionized anticancer therapy. The drug cures testicular cancers – the most common malignancy in 20- to 34-year-old men – as well as ovarian, cervical and colorectal cancers and relapsed lymphoma, with unprecedented potency. Yet cisplatin-based treatments soon suffer from several drawbacks, including intrinsic or acquired resistance, which drastically reduces the efficacy of the drug. Resistance to cisplatin is due to a number of different complex factors, whose molecular details are known only in part. Dissecting the molecular facets of cisplatin resistance is therefore required to rationally design novel Pt-compounds not afflicted by these undesirable effects. This issue is particularly pressing for the EU, where cancer is the second highest cause of death. At the molecular level, it has been established that cisplatin exerts its beneficial activity by covalently binding to DNA, causing DNA damage. This damage in turn inhibits the replication and transcription of DNA, and the inhibition of these vitally important cellular processes leads to the so-called “apoptotic death” of tumor cells. In some of the key resistance mechanisms, the so-called nucleotide excision repair (NER) enzymes manage to remove the cisplatin-damaged nucleotide bases in the DNA, allowing the cancerous cell to survive. The so-called High Mobility Group Box 1 protein (HMGB1) might be able to prevent NER repair by binding to cisplatin-DNA adducts and shielding them from the NER enzymes. Determining the molecular mechanism of HMGB1 binding to cisplatin-damaged DNA is a fundamental step towards a better understanding of the molecular basis of action, as well as the resistance mechanisms, of cisplatin. Structural determinants and energetics have been reported for the cisplatin-(d-(CCUCTCTG*G*ACCTTC)-d(GGAGAGACCTGGAAGG)) in complex with HMGB1 domain A, which is a major complex between cisplatin-damaged DNA and the HMGB1 protein. Here we plan to gain comprehensive structural and energetic insight into the binding mechanisms of platinated DNA to HMGB1 by using a set of computational tools, including variants of replica exchange molecular dynamics and metadynamics-based free energy methods. These computational techniques allow us to study the conformation of the cisplatin-(d-(CCUCTCTG*G*ACCTTC)-d(GGAGAGACCTGGAAGG)) in complex with HMGB1 domain A.
They can also give us an energetically detailed picture of the binding process through the free energy profile, which will be quantitatively related to the experimental binding affinity. This work may considerably advance our knowledge of cisplatin-drug resistance.
Resource awarded: 2 776 133 core-hours on Curie TN (GENCI@CEA, France)
PROMEMB – Why is proton migration so fast at the lipid membrane interface? An ab initio molecular dynamics study.
Project leader: Paolo Carloni, German Research School for Simulation Sciences GmbH, Germany
Collaborators: Emiliano Ippoliti, German Research School for Simulation Sciences GmbH, Germany | Chao Zhang, German Research School for Simulation Sciences GmbH, Germany | Jens Dreyer, German Research School for Simulation Sciences GmbH, Germany
Abstract: Proton production and consumption are key steps in any bioenergetic process. For instance, the synthesis of adenosine triphosphate (ATP), the free energy carrier in living systems, would cease if specific membrane-bound enzymes (proton pumps and ATP synthase, as the proton source and the proton sink, respectively) stopped creating and consuming a transmembrane proton gradient. Fast proton migration at phospholipid membrane surfaces establishes an efficient link between the sink and the source. Protons can indeed migrate long distances (over tens of micrometers) in a short time (tens to hundreds of milliseconds) on the membrane’s hydration layer. This fundamentally important process has prompted the question of how protons can travel so efficiently from producer to consumer along the lipid membrane interface. Computational studies based on a multistate empirical valence bond (MS-EVB) model reported free energy minima for proton localization near different lipid membranes with depths ranging from 6.7 to 8.3 RT. Protons were found to move essentially with the lipid, which contrasts with the experimental observation that the proton diffuses orders of magnitude faster. In addition, the phosphate group might not only play a role as an electrostatic attraction site, but also participate actively in the proton migration process. Recent ab initio molecular dynamics (MD) simulations point to i) the formation of extended hydrogen-bonded (“Grotthuss”) chains being responsible for correlated proton transfers in phosphoric acid, and ii) water molecules bridging adjacent acid sites in vinyl phosphonic acid polymers facilitating proton transfer. In our own recent simulations of n-decane/water interfaces, we found that protons localize either (a) in direct contact with n-decane (and hence have reduced mobility) or (b) in the second interfacial hydration layer, where they diffuse as fast as in bulk water.
However, it is not clear whether this picture still holds in the presence of phospholipids, or whether a different mechanism of fast proton diffusion at the surface takes place. To address this fundamental issue, we propose here to use full ab initio metadynamics simulations to calculate the free energy of an excess proton at the phosphate/water lipid membrane interface. The calculations will unravel the kinetics of proton localization at the interface and its exchange with bulk water.
Resource awarded: 64 917 518 core-hours on JUQUEEN (GAUSS@FZJ, Germany)
BIKI – Estimating binding kinetics of enzyme inhibitors: the Cyclooxygenases case
Project leader: Andrea Cavalli, Università di Bologna, Italy
Collaborators: Walter Rocchia, Italian Institute of Technology, Italy | Sergio Decherchi, Italian Institute of Technology, Italy | Jagdish Patel, Italian Institute of Technology, Italy | Francesco Di Leva, Italian Institute of Technology, Italy | Vittorio Limongelli, University of Naples “Federico II” Faculty of Pharmacy, Italy
Abstract: The affinity of a drug for its molecular target and its pharmacokinetic profile are the most important prerequisites for determining the therapeutic effectiveness of a drug. For this reason, the estimation of affinity-related properties of new molecules, such as dissociation constants for receptor ligands or inhibition constants for enzyme inhibitors, is extremely useful in the selection of candidates for further development in the early stages of drug discovery. The possibility of saving time and money in this process has prompted the development of several computational techniques aimed at predicting these quantities with a good level of accuracy. Interestingly, it has recently emerged that drug binding kinetics (i.e. the kon and koff of the interaction), which is strictly related to the time a drug spends in contact with its target, is at least as important as binding affinity. However, the drug-target binding kinetics of new molecules is seldom investigated, and only occasionally characterized retrospectively in post-processing analyses. At present, several factors limit in silico technologies in the estimation of such properties, and establishing a highly predictive relationship between binding kinetics and chemical structure remains, so far, a chimera. In the present project, prompted by the solid background we have achieved studying the molecular details of binding between cyclooxygenases (COX-1 and COX-2) and some potent inhibitors thereof, we propose to use some of the most advanced computational techniques to define the ligand/protein interaction free energy profile at a high level of accuracy for some of these systems. This will allow us to estimate many important properties of these inhibitors, including binding affinity, with a specific focus on the kinetic parameters.
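The affinity and kinetic quantities discussed above are related by standard expressions (added here for reference, not quoted from the proposal):

```latex
K_\mathrm{D} = \frac{k_\mathrm{off}}{k_\mathrm{on}}, \qquad
\tau_\mathrm{res} = \frac{1}{k_\mathrm{off}}, \qquad
\Delta G^{\circ}_\mathrm{bind} = RT \ln\!\left(K_\mathrm{D}/c^{\circ}\right),
```

where τ_res is the residence time of the drug on its target and c° the standard concentration. Two inhibitors with the same affinity K_D can thus differ widely in residence time, which is why estimating kon and koff separately, rather than only the equilibrium affinity, matters for predicting in vivo efficacy.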
Resource awarded: 32 000 000 core-hours on FERMI (Cineca, Italy)
Simulation of ship survivability under wave impact
Project leader: Matthieu de Leffe, HydrOcean, France
Collaborators: David Guibert, HydrOcean, France | Julien Candelier, HydrOcean, France | Pierre-Michel Guilcher, HydrOcean, France | Guillaume Oger, Ecole Centrale de Nantes, France | Nicolas Grenier, Ecole Centrale de Nantes, France | David Le Touzé, Ecole Centrale de Nantes, France
Abstract: Today, 95% of freight is carried by sea, raising crucial issues for the safety of people and equipment, as well as for the ecological impact of incidents. To improve the safety of marine structures, design techniques must be continuously improved. Among the tools available in engineering, computer simulation has become almost indispensable. The numerical methods used today are mainly mesh-based methods, which have certain limitations when dealing with large coupled deformations of fluids and structures. Such situations arise mainly under severe loading, for example in storms. The SPH (Smoothed Particle Hydrodynamics) method overcomes these problems inherent to meshes and can handle such situations appropriately.
This project aims to assess the loads experienced by marine structures on three scales:
• A small scale, taking into account the effects of air and the coupling between structure and fluid
• A medium scale, with the effects of slamming
• A large scale, with the effects of green water
Resource awarded: 5 000 000 core-hours on Hermit (GAUSS@HLRS, Germany)
Project leader: Marco De Vivo, Italian Institute of Technology, Italy
Collaborators: Marino Convertino, Italian Institute of Technology, Italy | Giulia Palermo, Italian Institute of Technology, Italy | Anna Berteotti, Italian Institute of Technology, Italy
Abstract: Here, we focus on Type II topoisomerase (topoII) proteins, which control the topology of DNA in all cells and are important targets of clinical antibiotics and anticancer agents. Based on recent crystal structures of the ternary topoII-DNA-drug complex, this proposal will enable progress in understanding the binding mode and key interactions of potent anticancer drugs – namely etoposide and F14512 – that target topoII, providing useful details on their mechanism of action. Accordingly, we propose to employ state-of-the-art computational methodologies involving enhanced sampling simulations, which will return a detailed picture of the free energy landscape of drug binding to topoII. Importantly, clarifying the dynamics of the structural determinants of the topoII catalytic site during drug binding could also offer insights for the rational structure-based design of new anticancer and antibacterial drugs.
Resource awarded: 5 000 000 core-hours on Curie FN (GENCI@CEA, France)
Large scale molecular dynamics simulations of nucleation
Project leader: Jürg Diemand, University of Zürich, Switzerland
Collaborators: Hidekazu Tanaka, Hokkaido University, Japan | Kyoko Tanaka, Hokkaido University, Japan | Raymond Angelil, University of Zürich, Switzerland
Abstract: Nucleation is the local emergence of a minute, distinct thermodynamic phase, for example the creation of nanoscale liquid droplets or solid clusters out of a supersaturated vapor phase. Nucleation plays an important role in many areas of science and technology, for example in weather and climate science (cloud formation), in materials science (nano-particle formation) and in astrophysics (dust formation around dying stars). However, our current understanding of the process at a fundamental molecular level is still rather limited: the macroscopic classical nucleation theory does not match experimental and simulation results very well. The classical theory has recently been superseded by several semi-phenomenological models which allow better fits to some of the data; however, it is currently unclear how well they work beyond their original parameter ranges. Nucleation out of a homogeneous gas phase is suppressed by the Kelvin effect: growing the surface of nano-scale droplets below some critical size takes more energy than is gained from the additional liquid volume, so such small droplets are more likely to evaporate than to grow. This allows homogeneous substances to remain in the gas phase at higher pressures, i.e. they can reach a metastable supersaturated state. Molecular dynamics (MD) simulations of nucleation are very much akin to laboratory nucleation experiments. Additionally, they allow detailed, direct tracking of the cluster formation process. Unfortunately, due to computational limitations, MD simulations have been limited to far higher supersaturations and nucleation rates, and to far smaller critical cluster sizes, than those probed by real experiments. PRACE resources will allow us to finally close the gulf between numerical simulations, laboratory experiments and phenomenological models through very large scale MD nucleation simulations. The current state-of-the-art MD simulations of nucleation have up to 100'000 particles.
Our proposed runs have up to 8 billion particles and up to 100 million time-steps. They can run efficiently on 32’000 CPUs. We use the highly optimised code LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator), together with our parallel analysis code 6DFOF, which was originally developed and used for the analysis of large cosmological simulations.
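The Kelvin effect described above can be made concrete with the standard classical-nucleation-theory estimate of the critical radius, r* = 2γv/(k_B·T·ln S). The sketch below uses illustrative water-like parameters (the surface tension, molecular volume and supersaturations are assumptions, not values from the project):

```python
import math

def critical_radius(gamma, v_mol, T, S, kB=1.380649e-23):
    """Kelvin/CNT critical radius r* = 2*gamma*v_mol / (kB*T*ln S).

    gamma : surface tension (N/m), v_mol : molecular volume (m^3),
    T : temperature (K), S : supersaturation ratio (> 1).
    """
    return 2.0 * gamma * v_mol / (kB * T * math.log(S))

# Illustrative water-like values (assumptions, not from the project):
gamma = 0.072          # N/m
v_mol = 3.0e-29        # m^3 per molecule
r1 = critical_radius(gamma, v_mol, 300.0, 1.1)   # mild supersaturation
r2 = critical_radius(gamma, v_mol, 300.0, 10.0)  # strong supersaturation
# Higher supersaturation -> smaller critical cluster, which is why MD
# simulations (run at high S) probe far smaller critical clusters than
# laboratory experiments can.
```

The nanometre-scale result at strong supersaturation illustrates why bridging to experimental (low-S, large-cluster) conditions demands billion-particle runs.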
Resource awarded: 35 300 000 core-hours on Hermit (GAUSS@HLRS, Germany)
ArtificialLeaf – Shedding light on the catalytic core of artificial leaf technologies
Project leader: Stefano Fabris, CNR-IOM DEMOCRITOS and SISSA, Italy
Collaborators: Alessandro Laio, SISSA, Italy | Karolina Kwapien, CNR-IOM DEMOCRITOS and SISSA, Italy | Xiao Hu, CNR-IOM DEMOCRITOS and SISSA, Italy
AbstractSolar-driven water splitting is a key photochemical reaction in solar fuel production. Precious metal oxides, such as RuO2 and IrO2, efficiently catalyze water oxidation – the main bottleneck for the overall process – but a realistic impact on large-scale energy conversion requires novel catalysts based on earth-abundant elements. A class of such materials – cobalt-phosphate (Co-Pi), cobalt-borate, nickel-borate, and others – has been recently discovered and successfully applied to artificial leaf technologies. Their activity relies on ordered and oxidation-resistant active centers embedded in amorphous grains. The structure of these active cores is debated and experimentally elusive because of the complex amorphous structure and composition of the grains. The lack of well-defined structural models of these materials hinders the understanding of their function, namely the electrochemical oxidation of water. Our recent computational study provides the first realistic and statistically meaningful structural model of the Co-Pi catalyst, and opens the way for understanding the functionality of these catalysts. This project builds on this fundamental knowledge of the structure and aims at providing the first mechanistic study of the electrochemical water oxidation reaction catalyzed by these active cores. If successful, our study will allow us to rationalize spectroscopic and electrokinetic measurements and to reveal the origin of the catalytic efficiency of these novel water-oxidation catalysts. Moreover, our computational study will identify correlations between the mechanism of reaction, thermodynamic efficiency, and local structure of the active sites, thus providing useful guidelines for the rational design of superior catalysts for the direct conversion of solar energy into fuels.
Resource awarded: 32 500 000 core-hours on Fermi (Cineca, Italy)
QMC_MEP – Reaction pathways by Quantum Monte Carlo: from benchmarks to biochemistry
Project leader: Leonardo Guidoni, Sapienza Università di Roma & Università degli Studi de L’Aquila, Italy
Collaborators: Emanuele Coccia, Università degli Studi de L’Aquila, Italy | Daniele Narzi, Università degli Studi de L’Aquila, Italy | Andrea Zen, La Sapienza, Università di Roma, Italy | Daniele Varsano, La Sapienza, Università di Roma, Italy | Matteo Barborini, Università degli Studi de L’Aquila, Italy | Sandro Sorella, SISSA, Italy | Daniele Bovi, La Sapienza, Università di Roma, Italy
AbstractThe calculation of energy barriers in chemical reactions represents a fundamental step for the understanding and rationalization of reaction pathways and catalytic strategies. For transition states, electronic correlation often plays a crucial role, and it is therefore necessary to go beyond Density Functional Theory methods to properly evaluate the energetics and the structural properties. The use of correlated quantum chemistry techniques is therefore mandatory, although they are limited to small systems due to their unfavorable scaling with system size. Quantum Monte Carlo (QMC) methods are a promising technique for the study of the electronic structure of correlated molecular systems. QMC algorithms are highly parallel in nature and, thanks to their relatively small memory requirements even for large systems, they show good performance and scalability on highly parallel computers. Furthermore, the good scaling of the algorithms with system size (N^3 – N^4, with N the number of electrons) makes QMC methods competitive with other correlated computational chemistry tools for large systems. Recent implementations in the TurboRVB code are able to calculate the ionic forces in an efficient and scalable way, providing us with the possibility to study reaction pathways of molecules at the Variational Monte Carlo level. In the present project we will estimate, for the first time, reaction energy barriers and molecular reaction pathways using QMC techniques together with the NEB method for finding Minimum Energy Paths. We have chosen well-studied chemical systems of fundamental interest: the cyclization of 1,3-butadiene, two models of Diels-Alder reactions, an SN2 reaction and the peptide bond formation in the ribosome. Diels-Alder reactions are of essential importance in organic synthesis since they represent a powerful case of concerted formation of two single carbon-carbon bonds.
We will study two textbook prototypes of Diels-Alder reactions: the butadiene+ethylene and butadiene+formaldehyde cyclo-additions. The SN2 substitution reaction involving OH- and CH3F represents a robust benchmark for our computational strategy. The biochemical reaction is characterized, in the model we will use, by 50 atoms and 144 valence electrons, a strong challenge for highly correlated quantum chemistry methods and particularly suited to Tier-0 systems. Although such reactions have been extensively studied by several quantum chemistry methods, different approaches provide different estimates of energies and transition state geometries, in several cases leading to results at variance with experiments. The present project will demonstrate that QMC methods can provide an accurate correlated method for the calculation of reaction barriers, alternative to more traditional correlated quantum chemistry techniques.
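The quoted scaling advantage can be illustrated with a back-of-the-envelope cost estimate. The exponents below (N^4 for QMC, as quoted in the abstract; N^7 for a conventional correlated method such as CCSD(T)) and the electron counts are illustrative assumptions, not figures from the proposal:

```python
def relative_cost(n_ref, n, power):
    """Cost of an n-electron system relative to an n_ref-electron one,
    for a method whose cost scales as O(N^power)."""
    return (n / n_ref) ** power

# Going from a 36-electron benchmark to the 144-valence-electron
# biochemical model (a 4x increase in N):
n_small, n_large = 36, 144
qmc_factor = relative_cost(n_small, n_large, 4)   # 4^4 = 256
ccsdt_factor = relative_cost(n_small, n_large, 7) # 4^7 = 16384
# QMC's milder polynomial scaling is what keeps the large system tractable.
```

The two-orders-of-magnitude gap between the factors is the practical content of the N^3–N^4 scaling claim.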
Resource awarded: 6 000 000 core-hours on Curie FN (GENCI@CEA, France)
Plasmonic ligand-stabilized gold nanoclusters
Project leader: Hannu Häkkinen, University of Jyväskylä, Finland
Collaborators: Sami Malola, University of Jyväskylä, Finland | Ville Mäkinen, University of Jyväskylä, Finland | Jussi Enkovaara, CSC – the Finnish IT Center for Science, Finland
AbstractStabilizing gold nanoparticles by thiolate ligand molecules is a well-known synthetic route to produce air-stable, electrochemically and thermally stable cluster compounds with tunable sizes and properties in the nanometer scale. These cluster compounds constitute a class of very interesting novel nanomaterials that have been employed in a wide range of studies in molecular biology, inorganic chemistry, surface science and materials science, with a wide range of potential applications in site-specific bioconjugate labelling, drug delivery and medical therapy, functionalisation of gold surfaces for sensing, molecular recognition and molecular electronics, and nanoparticle catalysis. A detailed understanding of the emergence of collective excitations in metallic nanostructures has been an open challenge in solid-state chemistry and physics. Through computational studies of ligand-stabilized gold nanoclusters that are defined “to the molecular precision”, i.e., by exact composition and structure, this project aims at breakthroughs in the microscopic understanding of the “birth of a plasmon” in nanoscale noble metal clusters. This is of wide scientific interest, since it will answer fundamental questions pertaining to the transformation of nanoscale matter and nanoparticles from the “molecular” to the “metallic” regime, with the concomitant change of the optical response of the electrons from discrete transitions to collective behavior. Plasmonics is a rapidly developing field of (nano-)optics with wide-ranging applications; consequently, establishing the fundamental limits to miniaturization bears an obvious technical significance as well.
Resource awarded: 18 358 272 core-hours on Hermit (GAUSS@HLRS, Germany)
PAdDLES – p-Adaptive Discretisations for LES in turbomachinery
Project leader: Koen Hillewaert, Cenaero, Belgium
Collaborators: Corentin Carton de Wiart, Cenaero, Belgium
AbstractPAdDLES aims at providing proof-of-concept computations with a discontinuous Galerkin method based CFD code. It is developed in complement to current industrial codes, to allow for highly accurate and reliable large eddy simulations of turbomachinery flows. This capability is currently lacking, as the state-of-the-art discretisations have not been designed with the required accuracy in mind. The high-resolution academic codes used for the fundamental study of turbulence, on the other hand, are not applicable to complex geometry. The discontinuous Galerkin method is a finite element method based on discontinuous interpolation spaces. The method allows for a generic and variable order of convergence on unstructured meshes and, by consequence, for generic geometry. Due to its data locality, the method can be implemented in a very flop-efficient way, and allows for massively parallel scaling. The combination of these features makes the method an ideal candidate for future CFD codes used for LES, combining accuracy, efficiency and geometric flexibility. The code has been assessed and compared to high-resolution finite-difference codes on academic test cases, both for DNS and LES. During these tests, DGM provided similar solution quality, thus confirming the potential of the method. Another interesting result is the good prediction of homogeneous turbulence and channel flow obtained by the implicit LES approach. In this approach the LES model is replaced by the inherent dissipation of the method, hence removing the need for the tuning or dynamic procedures required by other approaches. If confirmed more generally, this would simplify computations considerably.
At the same time, DNS computations have been undertaken for transitional flows on airfoils and low-pressure turbines for Reynolds numbers up to 85,000 during the PRACE industrial pilot project “noFUDGE”. The first objective of PAdDLES is the further assessment of LES models for DGM, in particular the ILES approach, on high-Reynolds-number wall-bounded flows. Different models and mesh resolutions will be tested on the classic channel flow at Re=950 and 2000 (based on skin friction) and compared to reference DNS results. These conditions correspond to the highest Re for which reference data are available. The second aspect is the use of local order-adaptation to capture turbulent flow features more efficiently. As the location of complex turbulent flow features is difficult to predict, and often time-dependent, adaptive discretisations make it possible to avoid generalised high resolution. In the case of smooth yet complex features, order adaptation is much more effective than the adaptation of mesh resolution. Off-line order adaptation will be tested on the direct numerical simulation of an LP turbine cascade at Re=120,000 and the large eddy simulation of the same cascade at Re=250,000. The computations will be compared to measurements performed at the von Karman Institute.
Resource awarded: 14 250 000 core-hours on JUQUEEN (GAUSS@FZJ, Germany)
Modeling gravitational wave signals from black hole binaries
Project leader: Sascha Husa, Universitat de les Illes Balears, Spain
Collaborators: Sara Gil Casanova, Universitat de les Illes Balears, Spain | Alex Vaño Viñuales, Universitat de les Illes Balears, Spain | Milton Ruiz, Universitat de les Illes Balears, Spain | Alicia Sintes, Universitat de les Illes Balears, Spain | Juan Calderón Bustillo, Universitat de les Illes Balears, Spain | Alejandro Bohe, Universitat de les Illes Balears, Spain | Denis Pollney, Rhodes University, South Africa | Mark Hannam, Cardiff University, United Kingdom | Patricia Schmidt, Cardiff University, United Kingdom | John Veitch, Cardiff University, United Kingdom | Ioannis Kamaretsos, Cardiff University, United Kingdom | B.S Sathyaprakash, Cardiff University, United Kingdom | Stephen Fairhurst, Cardiff University, United Kingdom | Michael Puerrer, Cardiff University, United Kingdom | David Hilditch, Friedrich-Schiller-Universität Jena, Germany | Sebastiano Bernuzzi, Friedrich-Schiller-Universität Jena, Germany | Marcus Thierfelder, Friedrich-Schiller-Universität Jena, Germany | Bernd Bruegmann, Friedrich-Schiller-Universität Jena, Germany | Nathan Johnson-McDaniel | Parameswaran Ajith, California Institute of Technology, United States | Christian Reisswig, California Institute of Technology, United States
AbstractOne century after Einstein’s general theory of relativity revealed space and time to be dynamical entities, gravitational research is about to be transformed once again. The first detection of gravitational waves (GW) will push open a new window onto the universe, comparable to the revolution brought about by the development of radio astronomy. Prime candidates for the first detection are catastrophic events involving compact relativistic objects and their strongly nonlinear gravitational fields, in particular the coalescence of compact binaries of black holes (BHs). In this project, we model these events and their GWs by solving the full nonlinear Einstein equations. Our results will help to identify the first such signals to be observed by advanced gravitational-wave detectors, and contribute to answering important open questions in astrophysics and fundamental physics, where BHs have taken center stage. The experimental challenge to meet the tremendous sensitivity requirements of gravitational-wave detectors is paralleled by a computational modeling challenge: the detection, identification, and accurate determination of the physical parameters of sources rely on the availability of reliable waveform template banks, which are used to filter the detector signals. For some sources, such as the slow inspiral of widely separated BHs, good analytical approximations for the gravitational waveforms are provided by perturbative post-Newtonian expansion techniques. For the last orbits and merger, however, where the fields are particularly strong, and where one has the best chances of discovering entirely new physics, the Einstein equations have to be solved numerically. This is what we do. Over the last few years, we have developed techniques that make large-scale parameter studies of black-hole binaries possible, and that allow us to synthesize the results into analytical template banks describing the complete inspiral, merger and ringdown of BH binaries.
This project will make it possible to combine all these techniques with large-scale parameter studies, in time to establish data analysis strategies for the advanced GW detectors that will come online in 2014. With the aim of establishing a model for GW searches and source identification (parameter estimation) that will have maximal effect with the greatest efficiency, we have previously pioneered the construction of analytical template banks that “interpolate” numerical simulations, and which are already used for analysing the data of the LIGO and Virgo detectors. To model precessing binaries, we plan to make use of an elegant and powerful tool that we recently developed to simplify the representation of precessing-binary waveforms. This method essentially “unwraps” the precession effects, removing in particular the amplitude and phase modulations that constitute the most complex features of precessing-binary waveforms, leaving a far simpler waveform that can be mapped to the waveform from a non-precessing binary with the same mass ratio and effective total spin. To produce such a model, we will proceed in two steps: in the first year we will produce a very accurate non-precessing-binary model. In the second year we plan to perform a coarse but systematic mapping of the precessing parameter space to establish a first simple precessing model.
Resource awarded: 37 000 000 core-hours on SuperMUC (GAUSS@LRZ, Germany)
LocalUniverse – Our Neighbourhood in the Universe: From the First Stars to the Present Day
Project leader: Ilian Iliev, University of Sussex, United Kingdom
Collaborators: Stefan Gottloeber, Leibniz-Institut fuer Astrophysik Potsdam (AIP), Germany | Steffen Hess, Leibniz-Institut fuer Astrophysik Potsdam (AIP), Germany | Noam Libeskind, Leibniz-Institut fuer Astrophysik Potsdam (AIP), Germany | Gustavo Yepes, Universidad Autónoma de Madrid, Spain | Alexander Knebe, Universidad Autónoma de Madrid, Spain | Daniel Severino, Universidad Autónoma de Madrid, Spain | Peter Thomas, University of Sussex, United Kingdom | William Watson, University of Sussex, United Kingdom | Aurel Schneider, University of Sussex, United Kingdom | Garrelt Mellema, Stockholm University, Sweden | Kyungjin Ahn, Chosun University, Republic of Korea | Paul Shapiro, The University of Texas at Austin, United States | Yi Mao, The University of Texas at Austin, United States | Mia Bovill, The University of Texas at Austin, United States | Anson D’Aloisio, The University of Texas at Austin, United States | Yehuda Hoffman, Hebrew University of Jerusalem, Israel | Pierre Ocvirk, Strasbourg University, France | Dominique Aubert, Strasbourg University, France
AbstractReionization is believed to be the outcome of the release of ionizing radiation by early galaxies. Due to the complex nature of the reionization process it is best studied through numerical simulations. Such simulations present considerable challenges. The tiny galaxies which are the dominant contributors of ionizing radiation must be resolved in volumes large enough to derive their numbers and clustering properties correctly. The ionization fronts expanding from these galaxies into the surrounding neutral medium must then be tracked with a 3D radiative transfer method. The combination of these requirements makes this problem a formidable computational task. The Epoch of Reionization leaves imprints on the smallest galaxies that can still be observed today in the nearby universe. This ‘Near-Field Cosmology’ therefore holds clues to the history of reionization and is a subject of great current interest. Our local volume is by far the best-studied patch in the Universe, with a wealth of data available. Reionization will have left useful fossil records (e.g. abundance of satellites; numbers and radial distribution of globular clusters and metal-poor stars) in the properties of our neighbourhood which will help us use local observations to understand the young universe. In this project we propose to perform several beyond-current-state-of-the-art constrained simulations of the local cosmic structures and their reionization. Our main goals are: 1) simulate the complete star formation and galaxy formation history of our local volume, from the very First Stars to the present day; 2) derive the complete reionization history of the Local Group with focus on internal (by its own sources) vs.
external (by nearby proto-clusters) scenarios and their observable consequences; 3) model in detail the effects of reionization on the number, distribution and star formation histories of the Local Group satellite galaxies and globular clusters; and 4) cosmic archaeology: find the expected distribution of the surviving low-mass metal-free stars by tracking their parent halos to the present day. Our simulations will be the first ever to achieve these goals, resulting in significant breakthroughs in our understanding of the young Universe, unavailable by any other means. Achieving this will require performing one of the largest cosmological N-body simulations ever attempted, with about 550 billion (5.5×10^11) particles. This will be based on a constrained realisation of the Gaussian field of initial density perturbations. This technique allows imposing available observational data as constraints on the initial conditions, thereby yielding large-scale structures which closely mimic the actual nearby universe. In particular, constrained simulations reproduce the key structures of the local cosmic web, such as the Local Group and the Virgo and Coma clusters, with sizes and relative positions which closely resemble the actual ones. We will then use a radiative transfer simulation to follow the reionization of this volume. This will allow us to study the “memories” of reionization remaining in our Local Group, for comparison with observations. We will then follow this up with semi-analytical modelling of the formation of galaxies in our volume through cosmic time.
Resource awarded: 26 000 000 core-hours on SuperMUC (GAUSS@LRZ, Germany)
Next generation lattice QCD simulations of the first two quark generations at the physical point
Project leader: Karl Jansen, NIC, DESY Zeuthen, Germany
Collaborators: Chris Michael, University of Liverpool, United Kingdom | Giancarlo Rossi, Universita di Roma Tor Vergata and INFN, Italy | Mariane Brinet, Laboratoire de Physique Subatomique et Cosmologie | Elisabetta Pallante, University of Groningen, Netherlands | Luigi Scorzato, University of Bern, Switzerland | Urs Wenger, University of Bern, Switzerland | Marc Wagner, Johann Wolfgang Goethe-Universität Frankfurt am Main, Germany | Carsten Urbach, Universitat Bonn, Germany | Constantia Alexandrou, University of Cyprus, Cyprus
AbstractThe project proposed here consists of two parts. The first is the generation of gauge field configurations with all quarks of the first two generations having their physical masses. These simulations are complemented by computations with four degenerate quark flavors needed for the renormalization program. The second part is the calculation of physical quantities on the gauge field configurations generated in the first part. For the second part, we concentrate on computing quantities for which we have demonstrated in the past that the twisted mass formulation of lattice QCD provides a very suitable and often advantageous setup. These are: the axial charge $g_A$ and the $\sigma$-term; the leading-order hadronic contribution to the muon anomalous magnetic moment; the $\eta$ and $\eta'$ meson masses; fundamental parameters of QCD, i.e. the strong coupling constant and quark masses; pseudoscalar decay constants; and the chiral condensate. With this set of observables, we will hence address and investigate fundamental properties of QCD in a fully realistic setup.
Resource awarded: 7 500 000 core-hours on Fermi (Cineca, Italy), and 17 500 000 core-hours on JUQUEEN (GAUSS@FZJ, Germany), and 5 000 000 core-hours on SuperMUC (GAUSS@LRZ, Germany)
HiResClim – High Resolution Climate Modelling
Project leader: Colin Jones, Swedish Meteorological and Hydrological Institute (SMHI), Sweden
Collaborators: Laurent Terray, CERFACS, France | Sophie Valcke, CERFACS, France | Eric Maisonnave, CERFACS, France | Christophe Cassou, CERFACS, France | Klaus Wyser, Swedish Meteorological and Hydrological Institute (SMHI), Sweden | Uwe Fladrich, Swedish Meteorological and Hydrological Institute (SMHI), Sweden | Muhammad Asif, Catalan Institute of Climate Sciences, Spain | Domingo Manubens, Catalan Institute of Climate Sciences, Spain | Francisco Doblas-Reyes, Catalan Institute of Climate Sciences, Spain | Chandan Basu, Linkoping University, Sweden | Torgny Faxen, Linkoping University, Sweden | Wilco Hazeleger, Royal Netherlands Meteorological Institute (KNMI), The Netherlands | Richard Bintanja, Royal Netherlands Meteorological Institute (KNMI), The Netherlands | Camiel Severijns, Royal Netherlands Meteorological Institute (KNMI), The Netherlands
AbstractHiResClim aims to make major advances in the science of climate change modelling. This will be achieved by addressing two complementary requirements: increased climate model resolution and an increased number of ensemble realizations of future climate conditions for a range of plausible socio-economic development pathways. Increased model resolution aims to deliver a significant improvement in our ability to simulate key modes of climate and weather variability and thereby provide reliable estimates of future changes in this variability. A large ensemble approach acknowledges the inherent uncertainty in estimating long-term changes in climate, particularly for phenomena that are highly variable; changes in the occurrence of rare but intense events are those impacting society and nature most strongly. To provide credible risk assessment statistics on future changes in phenomena such as extra-tropical and tropical cyclones, heatwaves, droughts and flood events, the combination of high climate model resolution and a large ensemble approach is unavoidable. In HiResClim we attack both of these requirements in a balanced approach which, as well as being the most efficient way to utilise the most advanced HPC systems of today, is also the only path to providing more robust and actionable estimates of future climate change.
Resource awarded: 38 000 000 core-hours on MareNostrum (BSC, Spain)
Faraday – Numerical study of droplet wave interactions in the extended Faraday experiment
Project leader: Damir Juric, LIMSI-CNRS, France
Collaborators: Jalel Chergui, LIMSI-CNRS, France | Seungwon Shin, Hongik University, Republic of Korea | Laurette Tuckerman, PMMH-ESPCI, France
AbstractIn 1831 Faraday conducted a simple experiment which consisted of shaking a fluid-filled container vertically, thereby inducing oscillations of the fluid surface. Beyond a certain threshold, the interface can form many kinds of standing-wave patterns, including crystalline patterns and others which are more complex. Faraday waves are an archetypical phenomenon of great fundamental interest for understanding the natural formation of patterns. There has been a great deal of theoretical work concerning the Faraday instability, but this has been necessarily limited: since a nonlinear hydrodynamic code had not been available, even basic questions such as the conditions under which the instability is subcritical or supercritical have not yet received a definitive answer. Numerical simulation provides detailed information about the interface position and velocity field, as well as perfect control of parameters, much like a perfect experiment. A less widely exploited advantage of numerical simulations is that they can go beyond experiment: initial conditions can be precisely specified; symmetries such as two-dimensionality or hexagons can be imposed; perturbations can be rendered formally infinitesimal; unstable states can be computed. Three-dimensional numerical simulations of the full flow field in Faraday waves have only very recently been achieved by our group (D. Juric and J. Chergui of LIMSI-CNRS), in collaboration with L. Tuckerman (PMMH-ESPCI, Paris) and N. Périnet (UOIT, Canada) (Périnet et al., JFM, 2009). The motivation for this research project is recent experiments which produced fascinating phenomena such as quasi-crystals, superlattices and oscillons, and launched a renaissance in the theoretical study of the extended Faraday experiment.
The proposed research project would extend our work by using a newly developed, parallel, high-performance, free-surface code which significantly expands our available simulation capabilities to large spatial domains as well as complex, large-amplitude dynamics (including topology change) of both the liquid surface and the drop. The code, called BLUE, was developed in a collaboration between D. Juric and J. Chergui in our laboratory, LIMSI-CNRS, and S. Shin of Hongik University, South Korea. BLUE is a high-performance, parallel code that amalgamates our latest high-fidelity Front Tracking algorithms for Lagrangian tracking of arbitrarily deformable phase interfaces, including breakup and coalescence, with a precise treatment of surface tension forces, interface advection and mass conservation (Shin et al., JCP, 2011; Shin and Juric, IJNMF, 2009; JMST, 2007; JCP, 2002). BLUE is, to our knowledge, the first implementation of Front Tracking on large-scale parallel architectures and has been successfully run on up to 8192 processors on the IBM BlueGene machine at the CNRS IDRIS computing center in Orsay, France, with excellent scalability. Here our focus will be on recent experiments which extend the classical view of the Faraday experiment to newly observed, more exotic scenarios. Rajchenbach et al (PRL 2011) show five-petalled patterns and localized standing solitary waves (oscillons) of odd and even symmetries. The even pattern resembles oscillons originally recognized at the surface of a vertically vibrated layer of brass beads (Umbanhowar et al, Nature 1996). The odd pattern has not been previously observed in any media.
BLUE will be used to simulate and study these regimes, as well as superlattice patterns that occur when the imposed oscillation contains two frequencies (Arbell and Fineberg, Phys Rev E, 2002). Recently, a set of remarkable Faraday experiments (Couder et al, Nature 2005; Protiere et al, JFM 2006) exhibited bouncing, walking and orbiting behavior of drops on a vibrated liquid surface. The project will explore the feasibility of using the new code to numerically simulate droplet-wave dynamics. If successful, this numerical study would complement recent experimental investigations and phenomenological modeling of the droplet–wave field interactions (Eddi et al, JFM 2011) over long times, in an attempt to understand the localized states, self-organization, and orbit quantization for the rotating Faraday experiment (Fort et al, PNAS 2010) and chaotic trajectories.
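At the linear level, the Faraday problem reduces to a Mathieu equation for each surface-wave mode, and the instability threshold mentioned above appears as parametric growth. The sketch below (entirely separate from the BLUE code; the parameter values are illustrative assumptions) integrates y'' + (a + q cos t) y = 0 with a fixed-step RK4 and shows the subharmonic instability switching on with the forcing amplitude q:

```python
import math

def mathieu_growth(a, q, t_end=60.0, dt=0.01):
    """Integrate y'' + (a + q*cos t) y = 0 with RK4; return max |y| reached.

    Parametric (Faraday-type) instability shows up as exponential growth
    of the mode amplitude y."""
    y, v, t = 1e-3, 0.0, 0.0
    peak = abs(y)

    def acc(t, y):
        return -(a + q * math.cos(t)) * y

    for _ in range(int(t_end / dt)):
        k1y, k1v = v, acc(t, y)
        k2y, k2v = v + 0.5 * dt * k1v, acc(t + 0.5 * dt, y + 0.5 * dt * k1y)
        k3y, k3v = v + 0.5 * dt * k2v, acc(t + 0.5 * dt, y + 0.5 * dt * k2y)
        k4y, k4v = v + dt * k3v, acc(t + dt, y + dt * k3y)
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        t += dt
        peak = max(peak, abs(y))
    return peak

# a = 0.25 puts the natural frequency at half the forcing frequency,
# the classic subharmonic Faraday tongue; q > 0 switches the instability on.
stable = mathieu_growth(a=0.25, q=0.0)    # amplitude stays at ~1e-3
unstable = mathieu_growth(a=0.25, q=0.2)  # grows by orders of magnitude
```

This linear picture is exactly what a full nonlinear code is needed to go beyond: saturation, pattern selection and sub- vs supercriticality all live outside it.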
Resource awarded: 10 000 000 core-hours on JUQUEEN (GAUSS@FZJ, Germany)
MULTIDYN – Non-adiabatic molecular dynamics with explicitly treated electronic degrees of freedom. A case study on rare-gas cluster cations.
Project leader: Rene Kalus, VSB – Technical University of Ostrava, Czech Republic
Collaborators: Martin Stachon, VSB – Technical University of Ostrava, Czech Republic | Ivan Janecek, Institute of Geonics AS CR, v.v.i., Czech Republic | Ales Vitek, VSB – Technical University of Ostrava, Czech Republic
AbstractThe main focus of the present project is on the non-adiabatic fragmentation dynamics of ionized rare-gas clusters, RgN+ (Rg = Ar, Kr and Xe), either photoexcited or produced by electron impact from neutral precursors. For the first time, the full range of cluster sizes for which detailed experimental data are available will be treated numerically, which has not been possible before due to computational cost limitations. In particular, the focus will be on a) how the initial electronic excitation is dissipated into nuclear degrees of freedom (internal conversion), with a special emphasis on the role of relativistic effects (spin-orbit coupling) and quantum decoherence processes, and b) the production of metastable, electronically excited intermediates and their long-time decay. A parallelized version of a code for non-adiabatic molecular dynamics simulations developed in our group (see http://moldyn.vsb.cz/multidyn) will be used in step (a), whereas post-processing tools including long-time radiative and non-radiative decay processes treated in terms of first-order kinetic theory, also developed in our group [EPL 98 (2012) 33001], will be applied in step (b). Further, luminescence spectra of the metastable intermediates will be calculated from the data produced within the present project, both as a possibly direct experimental way of detecting them and as a tool for the characterization of electronically excited potential energy surfaces in ionic rare-gas clusters. The results of the present project should shed light on processes taking place in much more complex systems for which such a detailed analysis of non-adiabatic dynamical processes with many electronic states involved is impractical even if the most powerful computers are used.
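The first-order kinetic treatment mentioned for step (b) amounts to exponential decay of the metastable population through competing radiative and non-radiative channels. A minimal sketch, with purely illustrative rates (not fitted to any cluster data):

```python
import math

def first_order_decay(n0, k_rad, k_nr, t):
    """Population of a metastable state decaying through two first-order
    channels: N(t) = N0 * exp(-(k_rad + k_nr) * t).

    Returns (remaining population, radiative branching ratio)."""
    k_tot = k_rad + k_nr
    return n0 * math.exp(-k_tot * t), k_rad / k_tot

# Illustrative rates (assumptions only): total rate 5e6 1/s, of which
# 20% is radiative, evaluated after one mean lifetime t = 1/k_tot.
n, branch = first_order_decay(n0=1.0, k_rad=1e6, k_nr=4e6, t=2e-7)
# n drops to 1/e of the initial population; branch gives the fraction of
# decays that would show up as luminescence.
```

The radiative branching ratio is the quantity that would connect the computed decay rates to the proposed luminescence spectra.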
Resource awarded: 16 000 000 core-hours on Hermit (GAUSS@HLRS, Germany)
RBTC – Towards ultimate Rayleigh-Benard and Taylor-Couette turbulence
Project leader: Detlef Lohse, University of Twente, The Netherlands
Collaborators: Roberto Verzicco, Univ. of Tor Vergata, Italy | Rodolfo Ostilla, University of Twente, The Netherlands | Erwin van der Poel, University of Twente, Netherlands | Richard Stevens, University of Twente, The Netherlands
Abstract Turbulent flow is abundant in nature and technology. In contrast to a decades-old paradigm, even highly turbulent flow is strongly influenced by the boundaries. There is increasing evidence that there are different turbulent states, with sharp transitions in between them. The strength of turbulence is characterized by the Reynolds number Re, which gives the ratio between inertial and viscous forces. Reynolds numbers in typical process technology applications are of the order of 1e8. In the atmosphere Re=1e10 is achieved, in the open ocean one has Re=1e11, and in stars even much higher values. We will focus our study on two paradigmatic systems in fluid dynamics, namely Rayleigh-Benard (RB) convection [1-2] (the flow in a closed box heated from below and cooled from above) and Taylor-Couette (TC) turbulence (the flow in between two independently rotating coaxial cylinders). The reasons why these systems are so popular are: (i) these systems are mathematically well-defined by the (extended) Navier-Stokes equations with their respective boundary conditions; (ii) for these closed systems exact global balance relations between the respective driving and the dissipation can be derived; and (iii) they are experimentally accessible with high precision, thanks to the simple geometries and high symmetries. An additional benefit is that within our collaboration we have access to state-of-the-art experimental RB [4-7] and TC setups [8-14], which will allow us to compare the simulation results with experimental findings. In laboratory experiments and direct numerical simulations it is impossible to achieve the high Re numbers mentioned above. So one has to somehow extrapolate the experimental and numerical results at much lower Reynolds numbers to these high values.
However, such an extrapolation becomes meaningless once there is a transition from one turbulent state at lower Re to another turbulent state at higher Re, or once different turbulent states coexist at the same Re. Recent high Reynolds number RB and TC experiments strongly suggest that there are indeed different turbulent states in this high Reynolds number regime [8-27]. It remains elusive why one or the other state is realized, and it is speculated that the existence of multiple turbulent states may be the origin of the observed behavior. The difference between the turbulent states can be huge: in RB convection, for example, the heat transport differs by a factor of 3 between the two states in the Ra number range that is relevant for various geophysical, astrophysical, and process-technological situations, for which a better prediction of the heat transfer is necessary. Therefore, the objective of this project is to explore when there are different states of turbulence and how transitions between these different states occur. What determines in which state the turbulent flow is? What are the roles of the boundary layers, and how do boundary layers and bulk flow interact? What are the most appropriate observables to characterize the different turbulent states? And finally: can one trigger such a transition?
Resource awarded: 11 100 000 core-hours on Curie TN (GENCI@CEA, France), and 7 500 000 core-hours on Hermit (GAUSS@HLRS, Germany)
Frontiers of strong interactions
Project leader: Maria Paola Lombardo, Istituto Nazionale di Fisica Nucleare, Italy
Collaborators: Elisabetta Pallante, University of Groningen, The Netherlands | Albert Deuzeman, University of Bern, Switzerland | Kohtaroh Miura, Istituto Nazionale di Fisica Nucleare, Italy
Abstract In a nutshell, this project addresses the question ’What are the phases of strong interactions?’. The analysis of the phase diagram is a central issue in many subfields of physics, as it enriches our understanding of the fundamental processes involved. Moreover, in a particle physics context, phases and phase transitions are intimately related to the history of the Universe and the origin of mass. Strong interactions hold together quarks and gluons inside protons and neutrons, which, in turn, are the building blocks of matter as we understand it now. A large fraction of their mass comes from a process associated with a particular phase transition: the chiral transition. Still mysterious, and at the heart of current investigations in fundamental physics and LHC experiments, is the origin of the residual (electroweak) mass of the particles, the part which is not accounted for by chiral symmetry breaking of QCD. At zero temperature, the fascinating possibility of a so-called quantum phase transition has been put forward: when increasing the number of fermion species we might enter the conformal ’world’, or, more technically, the conformal window. This is a phase of the system where the concept of scale loses its meaning. Preceding this regime, an exotic pre-conformal dynamics might serve as a paradigm for modeling the generation of the electroweak mass, one of the most fascinating challenges of contemporary theoretical physics. In this pre-conformal region the coupling should vary little with the scale, a precursor phenomenon of the scale invariance realized within the conformal window, at variance with QCD, where the coupling changes dramatically (’runs’) with the scale. It is precisely this ’walking’ feature that attracts the interest of model builders. On the lattice, we address questions like: does such a (near) conformal theory exist in Nature? If yes, what are its other characteristics (spectrum, anomalous dimension(s), dynamics) besides the essential feature of a walking coupling? What would a technicolor world look like? In summary, we propose an extensive numerical study aimed at fully clarifying the phase diagram of strong interactions in flavor space. This project builds on our previous experience; it uses lattice QCD simulations and the interdisciplinary language of phase transitions and critical phenomena. We have already contributed to these studies by proposing our own approach, based on a detailed analysis of the phase diagram in the multidimensional coupling space spanned by the bare lattice coupling, the bare mass, and the temperature. We have studied the thermal behaviour of the near-conformal window and obtained a bound on the lower edge of the conformal window. It is now necessary to sharpen this result, making the constraints on the conformal window more precise and reliable, mapping out the phase diagram in the Nf, coupling, temperature, mass space quantitatively, and completing a thorough analysis of the physical observables in each phase. To realize this program, we need to perform high-statistics simulations on lattices large enough to keep the sizable finite-size effects under control, and with masses small enough to describe the chiral limit reliably. While exploratory studies have been published and are still ongoing on smaller-size Tier-1 systems, we need the capability of a Tier-0 system to make substantial, qualitative advances in this program.
Resource awarded: 17 000 000 core-hours on FERMI (Cineca, Italy)
LSAIL – Large Scale Acceleration of Ions by Lasers
Project leader: Andrea Macchi, Istituto Nazionale di Ottica, Consiglio Nazionale delle Ricerche (CNR/INO), Italy
Collaborators: Andrea Sgattoni, Politecnico di Milano, Italy | Matteo Passoni, Politecnico di Milano, Italy | Tatyana Liseykina, University of Rostock, Germany | Stefano Sinigardi, University of Bologna, Italy | Pasquale Londrillo, Istituto Nazionale di Fisica Nucleare (INFN), Italy
Abstract The project aims to simulate advanced schemes of ion acceleration by superintense laser pulses over the large spatial and temporal scales needed to demonstrate energy gain beyond 100 MeV and up to the GeV frontier. The investigated schemes will include shock acceleration, radiation pressure acceleration, and sheath acceleration in low-density targets. The results will give directions to experiments and application developments of laser-plasma ion accelerators, and will also be relevant to the modeling of astrophysical processes of particle acceleration in scaled-down laboratory experiments.
Resource awarded: 10 000 000 core-hours on FERMI (Cineca, Italy)
SPRUCE – Seasonal Prediction with a high ResolUtion Climate modEl
Project leader: Eric Maisonnave, CERFACS, France
Collaborators: Michel Déqué, Météo-France, France | Jean-Philippe Piédelièvre, Météo-France, France | Jean-François Guérémy, Météo-France, France | Sophie Valcke, CERFACS, France | Christophe Cassou, CERFACS, France | Laurent Terray, CERFACS, France | Laure Coquart, CERFACS, France | Nicolas Ferry, MERCATOR OCEAN, France
Abstract The aim of SPRUCE is to improve our capacity to predict climate variations six months ahead, by combining our best present tools and data with a high-performance computing capacity which is not yet available in our production machines but is offered by PRACE Tier0 platforms. The two modelling aspects we aim to improve in SPRUCE are horizontal resolution and ensemble size. Climate models, which simulate the joint evolution of the global atmosphere and ocean beyond the limit of deterministic atmospheric predictability (one to two weeks), have been developed since the 1990s, driven in large part by the increasing capacity of numerical computation hardware. They have been used to predict the statistical properties of the atmosphere a few months ahead. Although substantial progress has been made in the past, the current performance of climate models at seasonal to decadal scale is still not sufficient to meet the expectations and needs of the various stakeholders at European, regional and local levels. Nevertheless, reliable seasonal-to-decadal climate predictions are of strong potential value, since society and key economic sectors (energy, agriculture, …) have to base their short- and medium-term planning and decisions on robust climate information and the associated environmental and socioeconomic impacts. Horizontal resolution has always been one of the major limiting factors in climate modelling. At resolutions coarser than 0.5°, the mountain pattern is unrealistic and lower-atmosphere winds may have a wrong direction on average. In the ocean, high resolution is required to represent the eddies which transport heat from equator to poles. Many climate studies have shown the benefits of increasing horizontal resolution on the mean simulated climate and its variability. The standard CNRM-CMIP5 Météo-France model uses a 1.6° resolution for the atmosphere and 1° for the ocean.
SPRUCE proposes to increase these resolutions to 0.5° and 0.25° respectively, to bring a significant jump in seasonal predictability. The second aspect of the improvement in SPRUCE is the ensemble size. Due to the chaotic nature of the atmosphere at monthly to seasonal scale, a seasonal prediction is necessarily probabilistic. A single realization of the forthcoming months has little chance, even on time average, to resemble the observed behaviour. Very recent results on northern mid-latitude winter predictability suggest that increasing the ensemble size to 60 leads to a significant improvement; a size of 120 is necessary at the SPRUCE model resolution. The predictability evaluation is based on a series of re-forecasts, or hindcasts, covering past years, starting from the 0.25° ocean reanalysis GLORYS, during the 1993-2009 period. The exploitation of the results will concern first the winter mid-latitude regimes (e.g. NAO) and the local predictability of temperature over Europe. We will then examine the predictability of summer heat waves over Europe and North America, with a focus on the summer of 2003. Our results will contribute to defining a strategy for operational seasonal forecasting in Europe.
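Why ensemble size matters can be sketched with elementary sampling statistics: a probability estimated as the fraction of ensemble members realizing an event carries a standard error of roughly sqrt(p(1-p)/N), so going from 60 to 120 members shrinks that error by a factor of sqrt(2). The numbers below are illustrative only, not SPRUCE results.

```python
import math

def prob_from_ensemble(members, threshold):
    """Probability of exceeding a threshold, estimated as the fraction of
    ensemble members that exceed it."""
    hits = sum(1 for m in members if m > threshold)
    return hits / len(members)

def sampling_error(p, n):
    """Standard error of a probability estimated from n independent members."""
    return math.sqrt(p * (1.0 - p) / n)

# Illustrative only: the error of a 50% probability estimate shrinks from
# ~6.5% with 60 members to ~4.6% with 120 members.
print(round(sampling_error(0.5, 60), 3))   # 0.065
print(round(sampling_error(0.5, 120), 3))  # 0.046
```

In practice the members are full coupled-model integrations, so doubling the ensemble doubles the computing cost, which is where the Tier0 capacity comes in.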
Resource awarded: 27 000 000 core-hours on Curie TN (GENCI@CEA, France)
PULSATION: Peta scale mULti-gridS ocean-ATmosphere coupled simulatIONs
Project leader: Sebastien Masson, Pierre and Marie Curie University, France
Collaborators: Eric Maisonnave, CERFACS, France | Rachid Benshila, CNRS, France | Christophe Hourdin, CNRS, France | Sarah Berthet, CNRS, France | Nicolas Vigaud, CNRS, France | Francois Colas, IRD, France | Marie-Alice Foujols, IPSL, France | Arnaud Caubel, CEA, France | Yann Meurdesoif, CEA, France | Cyril Mazauric, Bull, France | James Done, NCAR, United States | Ming Ge, NCAR, United States
Abstract Climate modelling has become one of the major technical and scientific challenges of the century. One of the major caveats of climate simulations, which consist of coupling global ocean and atmospheric models, is the limitation in spatial resolution (~100 km) imposed by the high computing cost. This constraint greatly limits the realism of the physical processes parameterized in the model. Small-scale processes can indeed play a key role in the variability of the climate at the global scale, through the intrinsic nonlinearity of the system and the positive feedbacks associated with the ocean-atmosphere interactions. It is thus essential to identify and quantify these mechanisms, referred to here as "upscaling" processes, by which small-scale localized errors have a knock-on effect on the global climate. PRACE supercomputers represent an inestimable opportunity to reduce recurrent biases and limit uncertainties in climate simulations and long-term climate change projections. We propose to take up this scientific challenge in this project. However, instead of choosing the crude solution of a massive increase of the models' resolution, we plan to explore new pathways toward a better representation of the multiscale physics that drives climate variability, and therefore to limit the use of the highest resolutions to limited areas. Our efforts will concentrate on key coastal areas, which hold the models' strongest biases in the Tropics at local but also at basin scales, and have great societal impacts through their repercussions on the Monsoon or El Niño. Our approach aims at benefiting from the best of the global and regional modelling approaches with the creation of the first multi-scale ocean-atmosphere coupled modeling platform. Our goal is to introduce embedded high-resolution oceanic and atmospheric zooms in key regions of a global climate model.
This strategy, based on a 2-way nesting procedure, allows us to represent major fine-scale oceanic and atmospheric dynamical processes in crucial areas and their feedbacks on the climate at global scale. To attain this goal, we will combine state-of-the-art and popular models: NEMO for the ocean, WRF for the atmosphere and OASIS-MCT as the coupler. WRF and NEMO are among the very few models able to combine high-resolution simulations at global scale with 2-way embedded zoom functionality. Our methodology consists in comparing the climate mean state, the intra-seasonal and seasonal variability, and the biases in a set of coupled experiments with horizontal resolutions ranging from 27 km (which corresponds roughly to the maximum existing today in climate models) to 9 km, first in the zooms and finally at global scale. Each step will allow us to isolate the processes we want to study and to quantify the improvement of the model as its resolution increases. During the second year, we plan to reach a resolution of 3 km in the embedded zooms in order to explicitly resolve cloud convection. The completion of this first multi-scale ocean-atmosphere coupled modeling platform will allow us to explore the impact of the highest spatial resolutions, not yet approachable by current climate models. It offers a unique opportunity to significantly improve the next generation of climate simulations.
Resource awarded: 22 500 000 core-hours on Curie TN (GENCI@CEA, France)
STAF – Simulation of Turbulent and very Anisothermal Flow
Project leader: Benoit Mathieu, CEA, France
Collaborators: Adrien Toutant, PROMES-CNRS, France | Francoise Bataille, PROMES-CNRS, France | Gauthier Fauchet, CEA, France
Abstract Understanding the nature of complex turbulent flows remains one of the outstanding questions in classical physics. Direct Numerical Simulation (DNS) is a very useful approach to investigate turbulence, and wall-bounded flows have been studied by many researchers using DNS. At the PROMES laboratory (www.promes.cnrs.fr), we study the effect of very strong temperature gradients on the turbulence of wall-bounded flows. This study is motivated by the flow characteristics inside the solar receiver of concentrated solar power tower plants. As an example, PEGASE (Production of Electricity from GAs turbine and Solar Energy) is a solar plant technology that uses pressurized air at very high temperature (www.promes.cnrs.fr/pegase). In the presence of high temperature gradients, the interaction between the energy and momentum equations is strong and classical models are no longer valid. The temperature gradient can be considered as a strong external agency that modifies the turbulence properties. In the case of supersonic compressible flow, the coupling between turbulence and high temperature gradients has been studied extensively to improve the understanding of the turbulent boundary layer mechanism. In the case of low-speed flow, only very few studies are dedicated to this coupling. In particular, there are no reference data for subsonic flow without low-Reynolds-number effects and with dilatational effects due to a strong thermal gradient; this case is missing in the literature. Trio_U is a general-purpose CFD platform developed at CEA (French Energy Agency) and designed for HPC computations on structured and unstructured grids (www-trio-u.cea.fr). The efficiency and scalability of the code have been proven on previous computations of this kind on Jade and on Titane with GENCI resource allocations (up to 4000 cores), and it has been tuned for the processor, network and IO architectures of the Curie computer.
This computation will use a dedicated multi-grid algorithm to solve the Poisson equation of the pressure correction, which must first be fine-tuned for the chosen grid and physical parameters. The computational domain will have at least 4 times more grid cells than previous computations. Computations of that size (more than 1 billion points) are impossible to run on the existing Tier-1 machines. Using this efficient tool to run DNS on PRACE computers, we will be able to study the thermal boundary layer in the case of dilatational flows. These new large-scale DNS can help to better understand the coupling between the dynamic and thermal parts. The computed DNS data are also vital to develop and validate models of the subgrid-scale stresses (for the velocity) and of the subgrid-scale heat flux (for the temperature). The DNS data will therefore be made available to the wider scientific community.
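The multi-grid idea behind such a Poisson solver can be sketched on a 1D toy problem: damped-Jacobi smoothing removes high-frequency error on the fine grid, and the remaining smooth error is corrected on a coarser grid. The two-grid sketch below, with arbitrary grid sizes, only illustrates the principle and is in no way the Trio_U implementation.

```python
import numpy as np

# Two-grid cycle for the 1D Poisson problem -u'' = f on (0,1), u(0)=u(1)=0.
# Smoothing damps oscillatory error; the coarse grid corrects smooth error.

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def smooth(u, f, h, sweeps, omega=2/3):
    for _ in range(sweeps):                    # damped Jacobi iterations
        u[1:-1] += omega*0.5*(h**2*f[1:-1] + u[:-2] + u[2:] - 2*u[1:-1])
    return u

def coarse_solve(rc, hc):
    m = len(rc)                                # direct solve of A_c e_c = r_c
    A = (2*np.eye(m-2) - np.eye(m-2, k=1) - np.eye(m-2, k=-1)) / hc**2
    ec = np.zeros(m)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    return ec

def two_grid(u, f, h):
    u = smooth(u, f, h, sweeps=3)              # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros((len(u) + 1) // 2)           # full-weighting restriction
    rc[1:-1] = 0.25*r[1:-3:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    ec = coarse_solve(rc, 2*h)
    u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolongation
    return smooth(u, f, h, sweeps=3)           # post-smoothing

n, h = 129, 1/128
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi*x)                 # exact solution: sin(pi*x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi*x)))
print(err)    # of order 1e-5: only the O(h^2) discretization error remains
```

A production solver recurses this coarse correction over many levels, which is what keeps the cost per cycle linear in the number of grid cells even for billion-point domains.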
Resource awarded: 6 000 000 core-hours on SuperMUC (GAUSS@LRZ, Germany)
Simulating the Epoch of Reionization for LOFAR
Project leader: Garrelt Mellema, Stockholm University, Sweden
Collaborators: Ilian Iliev, University of Sussex, United Kingdom | William Watson, University of Sussex, United Kingdom | Saleem Zaroubi, University of Groningen, Netherlands | Alexandros Papageorgiou, University of Groningen, Netherlands | Hannes Jensen, Stockholm University, Sweden | Kai-Yan Lee, Stockholm University, Sweden
Abstract Reionization is believed to be the outcome of the release of ionizing radiation by early galaxies. Due to the complex nature of the reionization process, it is best studied through numerical simulations. Such simulations present considerable challenges related
to the large dynamic range required and the necessity to perform fast and accurate radiative transfer calculations. The tiny galaxies which are the dominant contributors of ionizing radiation must be resolved in volumes large enough to derive their numbers and clustering properties correctly, as both of these strongly impact the corresponding observational signatures. The ionization fronts expanding from all these millions of galaxies into the surrounding neutral medium must then be tracked with a 3D radiative transfer method which includes the solution of non-equilibrium chemical rate equations. The combination of these requirements makes this problem a formidable computational task. We propose to perform several simulations with the main goal of simulating, for the very first time, the full, very large volume of the Epoch of Reionization (EoR) survey of the European radio interferometer array LOFAR, while at the same time including all essential types of ionizing sources, from normal galaxies to QSOs. The structure formation data will be provided by an N-body simulation of early structure formation with 8192^3 (550 billion) particles and a 500/h Mpc volume. This combination of large volume and high resolution will allow us to study the multi-scale reionization process, including effects which are either spatially very rare (e.g. luminous QSO sources, bright Lyman-alpha line emitters) or for which the characteristic length scales are large (e.g. X-ray sources of photoionization and heating; the soft UV that radiatively pumps the 21-cm line by Lyman-alpha scattering; the H_2-dissociating UV background). This structure formation simulation will be used in the LOFAR Epoch of Reionization Key Science Project to construct a large library of reionization simulations on non-PRACE facilities and will be essential in the interpretation of the LOFAR observations.
On Curie we will use the structure formation results to perform a reionization simulation which will address the likely stochastic nature of the sources of reionization, an aspect that to date has not been explored. We will also study the effects of the early rise of the inhomogeneous X-ray background. The forming early galaxies, and the stars and accreting black holes within them, emit copious amounts of radiation in all spectral bands, which in turn affects future star and galaxy formation. There are multiple channels for such feedback which need to be taken into account, an important one being the subtle but far-reaching effects of X-rays, which strongly modulate the redshifted 21-cm emission and absorption signals at early times.
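The non-equilibrium chemical rate equations at the heart of such radiative transfer codes can be illustrated, for hydrogen alone, by the competition between photoionization and recombination. The explicit integration below uses purely illustrative rates and densities, not parameters of the project's actual code.

```python
# Non-equilibrium rate equation for the hydrogen ionized fraction x(t):
#   dx/dt = Gamma * (1 - x) - alpha_B * n_e * x
# i.e. photoionization of neutrals versus case-B recombination.
# All numerical values are illustrative, not project parameters.
Gamma   = 1.0e-12   # photoionization rate per neutral atom, 1/s (hypothetical)
alpha_B = 2.6e-13   # case-B recombination coefficient, cm^3/s
n_H     = 1.0e-3    # hydrogen number density, cm^-3 (hypothetical IGM value)

def step(x, dt):
    n_e = x * n_H                     # electrons come from ionized hydrogen only
    return x + dt * (Gamma * (1.0 - x) - alpha_B * n_e * x)

x, t, dt = 1.0e-4, 0.0, 1.0e10        # start almost neutral
while t < 1.0e14:                     # integrate ~100 ionization timescales
    x = step(x, dt)
    t += dt
print(x)   # approaches the equilibrium x -> Gamma/(Gamma + alpha_B*n_e), ~1
```

In a real code this balance is solved in every cell behind and across each ionization front, with the local photoionization rate supplied by the 3D radiative transfer.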
Resource awarded: 3 000 000 core-hours on Curie FN, and 19 000 000 core-hours on Curie TN (GENCI@CEA, France)
LSS-BULB – Large Scale Simulations of the olfactory BULB
Project leader: Michele Migliore, National Research Council, Italy
Collaborators: Michael Hines, Yale University, United States | Thomas McTavish, Yale University, United States | Yuguo Yu, Fudan University, China
Abstract Understanding how sensory inputs are elaborated before they reach cortical areas for higher processing is a fundamental step to advance our knowledge of the basic functions and dysfunctions of the nervous system. Most, if not all, of the mechanisms underlying cognitive and pathological brain functions are however poorly understood, since technical limitations pose major problems to adequately explore not only the detailed processes of brain functions but also the possible therapeutic approaches to many brain dysfunctions. Realistic models of neurons and networks, implemented as closely as possible to experimental data, can provide unique and important insights into the relevant mechanisms, suggesting experimentally testable predictions. There is an intensive effort worldwide to attack this kind of problem using ICT methods, including computational modeling. One particularly important example is the Human Brain Project (HBP), one of the six finalist EU FET Flagship projects. The PI of this proposal is a member of the HBP Consortium, where he is responsible for the implementation of brain system models at the cellular level using realistic neurons. The project will be carried out in close collaboration with Dr. Michael Hines, at the Department of Neurobiology of Yale University (New Haven, CT, USA), who is another member of the HBP Consortium and responsible for implementing the cellular simulator. The big challenge that we would like to address within this project is to implement a relatively large realistic model of the olfactory bulb, in such a way as to directly use experimental data from functional MRI to drive the underlying network of mitral and granule cells, to demonstrate and predict the learning mechanisms that are ultimately responsible for the early processing stages of sensory inputs.
Up to now, most experimental and theoretical findings on odor recognition have been interpreted in terms of center-surround connectivity of the principal kinds of neuron populations, i.e. mitral and granule cells, with highly activated cells inhibiting less activated neighbours. However, recent findings have shown that odors may activate spatially distant regions of the bulb with a sparse, columnar connectivity between mitral and granule cells. This organization challenges the classical center-surround organization, and there is thus a need to work out a new general paradigm for signal discrimination that could have far-reaching influence for other brain regions. This problem cannot be easily studied experimentally. With our modelling approach, we aim to demonstrate whether and how odor identity and concentration can be represented in the olfactory bulb by a combination of temporal and spatial patterns, with both feedforward excitation and lateral inhibition via dendrodendritic synapses as underlying mechanisms. To this purpose, we will implement a portion of the olfactory bulb using 500 mitral cells and 10000 granule cells, 1/100th of the entire system. This would be the first implementation of the olfactory bulb at this scale using realistic cell properties and network connectivity. We expect it to have a major impact on the field, promoting new experimental investigations and providing a new framework to investigate the functions of a brain system.
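The classical center-surround picture can be caricatured in a few lines: each mitral cell's output is its input minus a weighted sum of its immediate neighbours' activity, so a strongly driven cell suppresses its surround. This is a conceptual sketch only, with made-up activations and weight, not the realistic conductance-based NEURON network of the project.

```python
# Caricature of center-surround lateral inhibition between mitral cells:
# each cell subtracts a fraction w of its neighbours' raw activity, and
# negative results are clipped to zero (no negative firing rates).
# The weight and the input pattern are invented for illustration.

def lateral_inhibition(inputs, w=0.4):
    n = len(inputs)
    out = []
    for i, x in enumerate(inputs):
        surround = sum(inputs[j] for j in (i - 1, i + 1) if 0 <= j < n)
        out.append(max(0.0, x - w * surround))
    return out

odor = [0.2, 1.0, 0.3, 0.1]       # raw glomerular activation pattern
print(lateral_inhibition(odor))   # only the dominant channel survives
```

The sparse columnar connectivity suggested by the recent findings would replace the nearest-neighbour surround with inhibition from a few distant, specific partners, which is precisely the alternative the full-scale model is built to test.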
Resource awarded: 12 000 000 core-hours on FERMI (Cineca, Italy)
SHAKEIT – Physics-based evaluation of seismic shaking in Northern Italy
Project leader: Andrea Morelli, Istituto Nazionale di Geofisica e Vulcanologia, Italy
Collaborators: Irene Molinari, Istituto Nazionale di Geofisica e Vulcanologia, Italy | Peter Danecek, Istituto Nazionale di Geofisica e Vulcanologia, Italy | Piero Basini, University of Toronto, Canada
Abstract Simulation of seismic wave propagation in realistic crustal structures is a fundamental tool to evaluate the earthquake-generated ground shaking in specific regions, for estimates of seismic hazard. Current-generation numerical codes and HPC infrastructures now allow for truly realistic simulations in complex 3D geologic structures. We plan to apply this methodology to the Po Plain in Northern Italy, a region with relatively rare earthquakes but large property and industrial exposure, as became clear during the very recent events of May 20-29, 2012. Our goal is then to produce estimates of expected ground shaking in Northern Italy through detailed deterministic simulations of ground motion due to expected earthquakes. This approach is known as simulating seismic scenarios. Realistic calculation of seismic ground motions in local geologic structures is now made possible by recent developments in computational seismology. Spectral element solvers are particularly attractive due to their exact representation of the free surface and accurate solutions for surface waves. The SPECFEM3D codes implement this method in a very efficient, highly scalable way and have become an important tool for the seismological community. We plan to improve our working 3D earth model, and then run earthquake simulations to assess shaking scenarios connected to plausible earthquakes. A revised and improved three-dimensional model of the earth's crust will use available geo-statistical tools to merge the excellent information existing in the form of seismic reflection profiles that were shot in the ’70s and ’80s for hydrocarbon exploration. Such information, which has been used by geologists to infer the deep structural setup, has never been employed to build a true 3D model to be used for seismological simulations. The first stage of the massive computational effort will involve a number of simulations of waves produced by recent earthquakes for which seismographic records are available. Through comparison between simulated and recorded seismograms, this model validation and tuning stage will allow us to adjust some summary parameters, such as the extension of sedimentary basins, velocity profiles in sediments, and the attenuation model, to provide a better fit. This model ’tuning’ stage will differ from classical gradient-based linearized inversions, which result in smoothed models even when done fitting the full waveform. We wish instead to honor the sharp discontinuities, known from high-resolution seismic studies, that impact seismic wave properties at a local scale.
Given the impossibility of performing a true Monte Carlo inversion, we will instead explore a reduced parameter space with repeated forward simulations. Once the starting model has been finely tuned, a second computational stage will consist of the generation of shaking scenarios for plausible earthquakes. This includes known historical events, whose source parameter uncertainty we will sample. We plan to build a sample of 500 possible source models for computing deterministic seismic scenarios and calculating hazard parameters. This approach, first explored in the nuclear energy industry, has particular value where actual seismic records are scarce, because of limited seismic activity, but strong earthquakes may be expected.
Resource awarded: 53 400 000 core-hours on SuperMUC (GAUSS@LRZ, Germany)
X-VAMPA: Cross validation and assessment of numerical methodologies for the modeling of primary atomization
Project leader: Vincent Moureau, CORIA – CNRS UMR6614, France
Collaborators: Alain Berlemont, CORIA – CNRS UMR6614, France | Thibaut Ménard, CORIA – CNRS UMR6614, France | Guillaume Balarac, LEGI CNRS UMR 5519, France | Bénédicte Cuénot, CERFACS, France | Gabriel Staffelbach, CERFACS, France
Abstract The X-VAMPA project is dedicated to the cross-validation and assessment of numerical methodologies for the modeling of primary atomization. In atomizers, the bulk liquid undergoes very high shear rates that ultimately lead to break-up and the formation of a spray of small droplets. The early stages of liquid break-up are the so-called primary atomization processes, which consist of the destabilization of the liquid stream and the formation of the first ligaments and droplets. Atomizers are widely used in automotive and aircraft engines because the resulting fuel droplets have a very large exchange area with the surrounding gas compared to the initial liquid stream, so that evaporation and combustion of the liquid fuel are dramatically enhanced. The fluid mechanics of primary atomization is highly non-linear and its deep understanding requires both experimental and numerical studies. In order to make noticeable advances in this domain, several national and European projects have been funded recently, such as the FIRST FP7 project, which is partly dedicated to the improvement and validation of numerical methodologies for primary atomization. This PRACE proposal, which is led by three labs of the FIRST consortium, namely CORIA, LEGI and CERFACS, follows the same goal. The common objective is to provide cross-validations of two different numerical strategies for the modeling of two types of liquid atomizers, namely pressure and air-blast atomizers. The simulation of primary atomization is highly challenging because the material properties of the liquid and the gas are very different. As a result, the interface between the liquid and the gas may be considered numerically as a discontinuity. Moreover, most liquids exhibit surface tension forces, which play a crucial role in ligament and droplet formation and which have to be modeled accurately.
Beyond these modeling aspects, primary atomization involves a wide range of scales, from a few millimeters, the typical size of the injector, down to a few microns for the droplets. As a consequence, atomization simulations require large meshes with very high resolutions, and these calculations can only be carried out on massively parallel computers. Very few numerical tools have the capability to model primary atomization because of the numerical challenges mentioned above. However, two Computational Fluid Dynamics tools targeting the modeling of atomization have been developed at CORIA: the finite-volume solver YALES2 and the finite-difference ARCHER code. Both codes feature advanced front-tracking techniques for the simulation of atomization and have shown their capability to run efficiently on large supercomputers. In order to provide a complete cross validation of the numerical tools, the proposal has been organized in three work packages. WP1 is led by CORIA and is dedicated to the validation of the tools for pressure atomizers. WP2 is led by CORIA and LEGI and is focused on academic air-blast atomizers. WP3 is led by CORIA and CERFACS, and targets realistic air-blast swirl atomizers.
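As a rough illustration of why the range of scales drives mesh sizes, consider a uniform Cartesian mesh that resolves micron-scale droplets inside a millimeter-scale injector. The numbers below are illustrative assumptions, not figures from the proposal:

```python
# Back-of-the-envelope mesh-size estimate for primary atomization.
# All values are illustrative; the actual simulations use adapted,
# non-uniform resolutions rather than a single uniform mesh.
injector_size = 5e-3      # m, "a few millimetres"
droplet_size = 2e-6       # m, "a few microns"
cells_per_droplet = 4     # minimum cells across a droplet to track the interface

dx = droplet_size / cells_per_droplet   # required cell size (0.5 micron)
cells_per_dim = injector_size / dx      # cells along one direction
total_cells = cells_per_dim ** 3        # uniform 3D mesh

print(f"cell size: {dx * 1e6:.1f} um, total cells: {total_cells:.2e}")
```

A uniform mesh at this resolution would need on the order of 10^12 cells, which is why such calculations demand massively parallel execution and, in practice, spatially adapted resolution.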
Resource awarded: 11 000 000 core-hours on Curie TN (GENCI@CEA, France)
SN2SNR – Filling the gap between supernova explosions and their remnants: the Cassiopeia A laboratory
Project leader: Salvatore Orlando, Istituto Nazionale di Astrofisica, Italy
Collaborators: Fabio Reale, University of Palermo, Italy | Giovanni Peres, University of Palermo, Italy | Marco Miceli, University of Palermo, Italy | Fabrizio Bocchino, Istituto Nazionale di Astrofisica, Italy | Maria Letizia Pumo, University of Padua, Italy
Abstract: Supernova remnants (SNRs) show a complex morphology characterized by a complex spatial distribution of ejecta, believed to reflect pristine structures and features of the progenitor supernova (SN) explosion. Filling the gap between SN explosions and their remnants is very important in astrophysics for understanding the origin of the present-day structure of ejecta in SNRs and for probing and constraining current models of SN explosions. A detailed model connecting the SN explosion with the SNR evolution is presently missing. The aim of this project is to study the ejecta
dynamics from the immediate aftermath of the SN explosion to their expansion in the SNR, with unprecedented model resolution and completeness, to answer for the first time important questions such as: how does the final remnant morphology reflect the characteristics of the ejecta formed in the aftermath of the SN explosion? The results of the proposed simulation will produce high-impact scientific publications, contributing to filling the gap between SN explosions and their remnants. Since the SNR Cassiopeia A (Cas A) is an attractive laboratory for studying the SNe-SNRs connection (being one of the best-studied SNRs, for which the 3D structure is known), the model describes its evolution with complete and realistic conditions of the initial ejecta structure. The plasma and magnetic field evolution is described by solving the full 3D MHD plasma equations. The finer ejecta features are described with unprecedented spatial resolution (down to 1.2e15 cm), at a level of detail not feasible before. Our project is based on a single large-scale 3D MHD simulation. The initial ejecta structure will be derived from a model of the “early” post-explosion evolution (from the breakout of the shock wave at the stellar surface up to the so-called nebular stage) of core-collapse SNe, taking into account the constraints on the spatial distribution of ejecta derived from observations. The geometric domain is a box with a non-uniform mesh of 1536x768x768 cells. The initial remnant is modelled as a sphere centered on the origin of the Cartesian coordinate system with radius R = 0.1 pc (corresponding to an initial age of 2 yr). We follow the SNR evolution for 300 yr (namely, the age of Cas A). We use the PLUTO 3D MHD code.
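To give a sense of the scale of this single simulation, a rough per-snapshot memory estimate for the stated mesh can be sketched as follows. Assuming 8 double-precision variables per cell is a generic MHD bookkeeping choice, not PLUTO's actual storage layout:

```python
# Rough per-snapshot memory estimate for the 1536x768x768 MHD mesh.
# n_vars = 8 assumes density, energy, 3 momentum and 3 magnetic-field
# components; PLUTO's real in-memory footprint will be larger.
nx, ny, nz = 1536, 768, 768
n_vars = 8
bytes_per_value = 8   # double precision

cells = nx * ny * nz
snapshot_bytes = cells * n_vars * bytes_per_value
print(f"{cells:,} cells, {snapshot_bytes / 1024**3:.0f} GiB per snapshot")
```

Roughly 9x10^8 cells and some 54 GiB per snapshot of raw field data, before ghost zones, work arrays and I/O buffers, which makes clear why a distributed-memory machine is required.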
Resource awarded: 7 700 000 core-hours on MareNostrum (BSC, Spain)
The Folding and Functional Binding Landscape of a Protein Molten-Globule
Project leader: Modesto Orozco, IRB Barcelona, Spain
Collaborators: Athi N. Naganathan, IIT Madras, India | Michela Candotti, IRB Barcelona, Spain
Abstract: Flexible proteins such as intrinsically disordered proteins (IDPs) are known to play critical roles in many cellular processes, challenging the long-standing dogma of a well-defined structure contributing to a specific function. However, these systems have been difficult to characterize due to their innately dynamic nature. A practical strategy for modeling IDPs is to study molten-globule (MG) proteins. Indeed, they can be thought of as an extreme example of IDPs that possess significant secondary structure and few long-range interactions. Therefore, we propose to explore in silico the folding and binding behavior of the MG protein NCBD (nuclear co-activator binding domain) using large-scale replica exchange molecular-dynamics simulations, a powerful technique used to enhance conformational sampling.
Our goal is to answer, and provide atomic-level evidence for, a variety of questions that include: (i) the mechanism of folding, which is still a matter of debate (downhill, two-state or multi-state); (ii) the mechanism of binding (conformational selection or induced fit), exploiting NCBD’s remarkably promiscuous binding to different partners, which is particularly relevant in questioning the structure-function paradigm and protein moonlighting; (iii) the folding-speed limit and the degree of structure and dynamics as a function of temperature; the temperature dependence can shed light on the behaviour of MGs at high temperatures, which is expected to resemble that of IDPs, providing a window of opportunity to characterize their conformational behaviour; (iv) the folding/unfolding equilibrium in the presence of a chemical denaturant such as urea, in order to shed more light on its effect in driving protein unfolding. This is possible due to the low-barrier thermodynamics of unfolding for NCBD.
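For readers unfamiliar with the method, the heart of temperature replica exchange is a Metropolis criterion for swapping configurations between two temperatures. A minimal sketch in the standard textbook form, not the production code used in the project:

```python
import math

K_B = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def exchange_probability(e_i, e_j, t_i, t_j):
    """Metropolis acceptance probability for swapping the configurations
    of replica i (potential energy e_i at temperature t_i) and replica j."""
    beta_i = 1.0 / (K_B * t_i)
    beta_j = 1.0 / (K_B * t_j)
    delta = (beta_i - beta_j) * (e_i - e_j)
    if delta >= 0.0:      # favorable swap: always accepted
        return 1.0
    return math.exp(delta)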
Resource awarded: 33 800 000 core-hours on MareNostrum (BSC, Spain)
HOTSUN – High perfOrmance compuTing in Silicon nanostructUres for third generatioN photovoltaics
Project leader: Stefano Ossicini, Università degli Studi di Modena e Reggio Emilia, Italy
Collaborators: Ivan Marri, Università degli Studi di Modena e Reggio Emilia, Italy | Marco Govoni, University of California, Davis, United States
Abstract: An important challenge for scientific research is the establishment of clean, cheap and renewable energy sources. The most appealing and promising technology is solar based, i.e. photovoltaics. For optimal energy conversion, one important requirement is that the full energy of the photons is used. In solar cells, a single electron-hole (e-h) pair of specific energy is generated only when the incoming photon energy is above the energy gap of the system, with the excess energy being lost to heat. Efficiency bottlenecks induced by thermalization processes can, in the case of high-energy excited carriers, be overcome by promoting Carrier Multiplication (CM). This effect, which consists in the generation of multiple electron-hole pairs by a single photon, can improve photovoltaic efficiency by producing additional photocurrent, thus limiting the heat generation resulting from phonon scattering. Effects induced by CM on the excited carrier dynamics were observed in different nanostructured materials (PbSe and PbS [1-2], CdSe, PbTe, InAs and Si). Moreover, thanks to the pioneering work of Semonin et al., a relevant photocurrent enhancement arising from CM was proven in a PbSe-based quantum dot solar cell. Recently, new CM dynamics were observed by Timmerman and Trinh [8-10] in a dense array of NCs. This effect, called space separated quantum cutting (SSQC), differs from standard CM (one-site CM) because the generation of two e-h pairs after absorption of a single photon occurs in two different (space separated) NCs. CM via SSQC stems from the NC-NC interaction and represents one of the most suitable routes for solar cell loss-factor minimization. As for one-site CM, a theoretical interpretation of SSQC is, to date, entirely missing.
In this context, numerical ab-initio simulations represent a powerful tool and, with an accuracy that complements the experimental observations, offer the possibility to isolate single decay paths and quantify their relevance. In this project we aim to study, within a full ab-initio approach, CM effects induced in both isolated and interacting Si-NCs. One-site CM dynamics in single Si-NCs of different sizes will be investigated in order to quantify the role played by quantum confinement, a very important point not yet clarified either experimentally or theoretically. In addition, we will study for the first time the effects induced on CM by the interplay between NCs. We will consider systems formed by two or three NCs placed in the same unit cell, analyzing if and how the NC-NC interaction can be used to improve solar cell performance. In this context, a detailed analysis of SSQC events will be carried out. It is very important to note that, for the first time, a theoretical framework is built to address this kind of mechanism, which, due to the complexity of the problem, has never been studied numerically. Finally, the effects induced on CM by the presence of defects will be analyzed.
Resource awarded: 10 457 078 core-hours on FERMI (Cineca, Italy)
Global hybrid-Vlasov simulation for space weather
Project leader: Minna Palmroth, Finnish Meteorological Institute, Finland
Collaborators: Sebastian von Alfthan, Finnish Meteorological Institute, Finland | Ilja Honkonen, Finnish Meteorological Institute, Finland | Yann Kempf, Finnish Meteorological Institute, Finland | Sanni Hoilijoki, Finnish Meteorological Institute, Finland | Dimitry Pokhotelov, Finnish Meteorological Institute, Finland | Arto Sandroos, Finnish Meteorological Institute, Finland | Hannu Koskinen, University of Helsinki, Finland
Abstract: The constant flow of the solar wind creates the richest plasma laboratory within reach, with spatial and temporal scales not attainable in terrestrial laboratories. Plasma phenomena within near-Earth space (the magnetosphere) create space weather, referring to harmful effects that can endanger technological systems or human life in space. Space weather predictions are mostly at an empirical stage, while future forecasts will be based on numerical simulations of the coupled solar wind-magnetosphere-ionosphere system. Current large-scale space weather simulations are based on simple magnetohydrodynamic (MHD) theory, which assumes that the plasma is a fluid. An accurate characterization of the physics of space weather needs to be based on plasma kinetic theory, including multi-component plasmas. The Finnish Meteorological Institute is developing a 6-dimensional Vlasov-theory-based simulation called Vlasiator, in a Starting Grant project of the European Research Council. In Vlasiator, ions are represented by distribution functions, while electrons are an MHD fluid, enabling a self-consistent global plasma simulation that can describe multi-component and multi-temperature plasmas and resolve non-MHD processes that cannot be self-consistently described by existing global plasma simulations. The novelty is that by modeling ions as distribution functions the outcome is numerically noiseless, although six-dimensional, as each point of the 3-dimensional ordinary space carries a 3-dimensional velocity space for the ions. Vlasiator includes advanced high-performance computing techniques, from load balancing to highly scalable grids, to allow massively parallel computations. Local and global tests show that the simulation is physically and technically mature enough to be tested in a massively parallel setup.
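In a hybrid-Vlasov model of this kind, each ion species s is evolved as a six-dimensional distribution function f_s(x, v, t) obeying the Vlasov equation, shown here in its standard textbook form (this is the generic equation, not a statement about Vlasiator's exact discretization):

```latex
\frac{\partial f_s}{\partial t}
  + \mathbf{v} \cdot \nabla_{\mathbf{x}} f_s
  + \frac{q_s}{m_s} \left( \mathbf{E} + \mathbf{v} \times \mathbf{B} \right)
    \cdot \nabla_{\mathbf{v}} f_s = 0
```

Because f_s is advanced directly on a phase-space grid rather than sampled with macroparticles, its moments (density, bulk velocity, pressure) are free of statistical sampling noise, which is the "noiseless" property referred to above.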
In this project, we focus on one of the main questions in space physics, concerning energy circulation from the solar wind to the magnetosphere. We investigate 1) the shocked plasmas surrounding the magnetosphere (the magnetosheath), aiming to provide a better description of the bow shock and foreshock region than earlier simulation efforts, and 2) the processes enabling energy and mass transfer to the magnetosphere (reconnection).
1) We reproduce the detailed global structures and the time dependence of the magnetosheath in 3-dimensional phase space. Information derived from the full distribution function and its moments has never been obtained from a noiseless self-consistent simulation, and we will therefore be able to provide invaluable new evidence for interpreting magnetosheath waves and particle acceleration, shock structure, and feedback to the magnetosphere. 2) We investigate the magnetosheath and magnetopause as influenced by reconnection between the solar wind and terrestrial magnetic fields, and quantitatively assess the influence of ion kinetics on the dynamics of reconnection. With the first self-consistent approach including ion kinetics, we are able to investigate the consequences of reconnection in a global setup with realistic solar wind boundary conditions. The recent major interest in space weather is manifested as a race towards the world’s first accurate space weather model. Vlasiator is a unique code with recently enabled capacity to investigate the proposed issues, and the physics will address the most critical topics in space weather. The proposing team is one of the leading space simulation groups in the world and, given the chance, will be able to answer the challenge.
Resource awarded: 30 000 000 core-hours on Hermit (GAUSS@HLRS, Germany)
MHILQCD – Multi-hadron interactions in Lattice QCD
Project leader: Assumpta Parreño, University of Barcelona, Spain
Collaborators: Emmanuel Chang, University of Barcelona, Spain | Martin John Savage, University of Washington, United States | Huey-Wen Lin, University of Washington, United States | Saul D. Cohen, University of Washington, United States | Silas Beane, University of New Hampshire, United States | Parikshit Junnarkar, University of New Hampshire, United States | William Detmold, College of William and Mary / Jefferson Laboratory, United States | Kostas Orginos, College of William and Mary / Jefferson Laboratory, United States | Thomas Luu, Lawrence Livermore National Laboratory, United States | André Walker-Loud, Lawrence Berkeley National Lab, United States
Abstract: A central goal of Nuclear Physics is to obtain a first-principles description of the properties and interactions of nuclei from the underlying theory of the strong interaction, Quantum Chromodynamics (QCD). Being the theory which governs the interactions between the basic building blocks of matter, quarks and gluons, it is also responsible for all the states of matter in the Universe, confining those primary pieces into hadronic states (nucleons, pions) at low energies, binding neutrons and protons through the nuclear force to give the different elements in the periodic table, etc. Nevertheless, due to the large complexity of the quark-gluon dynamics, one cannot obtain analytical solutions of QCD in the energy regime relevant for nuclear physics. At short distances (high energies), the coupling between the quarks and gluons, and between the gluons themselves, is small, and allows for processes to be computed as an asymptotic series in the strong coupling constant. However, as the typical length scale of the process grows, the QCD coupling becomes large, and perturbative calculations fail to converge. The only known way to solve QCD in this regime is numerically, using Lattice QCD (LQCD). LQCD is a non-perturbative formulation of QCD in which a specific volume of Euclidean space-time is discretized to allow for numerical evaluation of the path integral. Quarks reside on the sites of the space-time lattice while gluon fields, carriers of the strong force, act as links between sites. Monte-Carlo techniques are used to perform the integrals over the fields that define the theory.
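Schematically, an LQCD calculation evaluates vacuum expectation values of the Euclidean path integral. In standard notation, not specific to this proposal:

```latex
\langle \mathcal{O} \rangle
  = \frac{1}{Z} \int \mathcal{D}U \, \mathcal{D}\bar{\psi} \, \mathcal{D}\psi \;
    \mathcal{O}[U, \bar{\psi}, \psi] \, e^{-S_E[U, \bar{\psi}, \psi]},
\qquad
Z = \int \mathcal{D}U \, \mathcal{D}\bar{\psi} \, \mathcal{D}\psi \;
    e^{-S_E[U, \bar{\psi}, \psi]}
```

Here U are the gauge links, ψ and ψ̄ the quark fields, and S_E the Euclidean action. After the fermion fields are integrated out analytically, the remaining integral over U is estimated by Monte Carlo sampling over ensembles of gauge-field configurations such as those requested here.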
The various approximations that are made in LQCD out of computational necessity are systematically improvable, and effective field theories are constructed to obtain physical results through volume, continuum and quark-mass extrapolations. While there is a rich phenomenology describing hadronic interactions involving nucleons and pions, favored by the wealth of experimental data in this sector, a complete knowledge of the interaction on more fundamental grounds is still missing. Moreover, extending this wisdom to the strange sector is difficult, due to the short lifetime of strange hadrons (unstable against the weak interaction), which precludes a quantitative extraction of their interactions from experiments. This fact is a serious drawback in a set of theoretical studies, for example the determination of the equation of state of hadronic matter in high-density environments – e.g. neutron stars – where the main uncertainty comes from the (poorly constrained) interaction among baryons and mesons in the strange sector. Our main objective is to calculate the properties of interacting hadrons – collections of nucleons, mesons and hyperons defining the core of nuclear physics – using LQCD. In order to significantly contribute to a field that is characterized by energy scales in the MeV range, high-precision calculations have to be performed. Within the present project, we request 60 million core-hours on the CURIE thin nodes, and 210 TB of storage (work, scratch and archive), to generate and compute propagators on two ensembles of Nf=2+1 isotropic clover gauge-field configurations, with a volume of 48^3 × 96, a lattice spacing of 0.11 fm and pion masses of 430 and 300 MeV, with 10,000 trajectories each. Contractions will be performed to construct a range of light (exotic) nuclei from QCD.
Combining the proposed measurements with calculations obtained with other resources within the collaboration, and using different volumes and light-quark masses, will allow us to perform the infinite-volume and physical quark-mass extrapolations. With this study, we intend to understand the quark-mass dependence and volume effects in light (hyper-)nuclei, while with a second-year access we will pursue calculations at smaller lattice spacings in order to carry out continuum extrapolations.
Resource awarded: 27 100 000 core-hours on MareNostrum (BSC, Spain)
Linear-scaling ab initio study of surface defects in metal oxide and carbon nanostructures
Project leader: Rubén Pérez, Universidad Autónoma de Madrid, Spain
Collaborators: Milica Todorovic, Universidad Autónoma de Madrid, Spain | Pablo Pou, Universidad Autónoma de Madrid, Spain
Abstract: In spite of their crucial role in many key surface processes like film growth and catalysis, there is still a very limited understanding of the structure and electronic properties of both point defects and large-scale non-periodic features like kinks, steps, surface domains and domain boundaries. Scanning probe microscopes (SPM), including the scanning tunneling microscope (STM) and the atomic force microscope (AFM), provide real-space, local information with atomic-scale resolution, but this experimental evidence has to be combined with ab initio simulations in order to identify the defect and extract quantitative information about its properties. This is a challenging task because such calculations require sizeable unit cells with extensive surface areas, and the standard computational methods based on density functional theory (DFT) scale poorly with increasing model system size. In this project, we propose to apply an efficient linear-scaling DFT method, implemented in the OpenMX code, to simulate large-scale surface features of technologically relevant materials such as ultrathin metal oxide layers and epitaxial graphene supported on metal substrates, with direct comparison to readily available experimental SPM data. The surface oxide monolayer formed when Cu(100) surfaces are exposed to oxygen (Cu(100)-O) is not only a very good model catalyst, but also features strikingly different structural arrangements for Cu and O atoms, which makes it ideally suited for investigations into the structure and chemical response of surface defects. Our calculations combine DFT total-energy calculations with Non-equilibrium Green’s Function (NEGF) methods for electronic transport to determine the tip-surface interaction and tunneling current for a large set of tip models. These results can be directly compared with the forces and currents measured simultaneously in order to characterize the defect and determine the nature of the imaging tip.
Our ultimate goal is to understand the role of defects in nucleating surface domains on Cu(100)-O. Defects are known to locally modify surface reactivity and to be important reactive sites, so we aim to explore the chemical response of domain boundaries and determine catalytically active sites. Graphene is among the most promising materials for nanotechnology due to its unique mechanical and electronic properties. In order to turn these expectations into real devices it is necessary to locally tune the graphene electronic structure through the introduction of defects, coupling to metals, or the controlled growth of nanoribbons. We plan to study graphene vacancies, predicted to induce very localized states prone to magnetic instabilities, in both pristine and metal-supported graphene. Large simulation unit cells are needed in order to minimize the elastic and electronic vacancy-vacancy interaction and to obtain a correct description of the metal surface states that are key to the graphene-metal coupling. Our study of the growth of graphene on metallic substrates is based on recent STM experiments mapping the structure of graphene close to steps where the flakes start to nucleate. Our simulations would help to understand the competition between the interaction of graphene with the step and with the Pt surface, which controls the structure and chirality of the flake edge and the observed Moiré structures.
Resource awarded: 16 250 000 core-hours on Curie TN (GENCI@CEA, France)
Evolution of the corona above an emerging active region
Project leader: Hardi Peter, Max Planck Institute for Solar System Research, Germany
Collaborators: Sven Bingert, Max Planck Institute for Solar System Research, Germany | Feng Chen, Max Planck Institute for Solar System Research, Germany
Abstract: The heating of the corona of the Sun and of cool stars is a highly interesting problem in the framework of astrophysics and plasma physics. While numerous theoretical suggestions exist to explain the hot temperatures of coronae, only a few forward models account for the complex three-dimensional and time-dependent nature of the structures seen in the corona. These models are three-dimensional magnetohydrodynamics (3D MHD) models that solve an induction equation for the magnetic field together with the mass, momentum and energy balance. For comparison with actual observations we synthesize the coronal emission in a forward-model fashion. In our models, photospheric motions braid magnetic field lines, inducing currents in the upper atmosphere which are dissipated and subsequently heat the plasma. The aim of this project is twofold. In the first part we want to combine such a 3D MHD corona model, for the first time, with a numerical model of the emerging flux of a solar active region, in order to simulate how the corona above an active region builds up and evolves while a pair of sunspots is forming on the solar surface. Here we will use an existing model of the formation of an active region (based on Cheung et al., 2010, ApJ 720, 233) as the input for our coronal model. We will use the surface layers of that simulation as input for the lower boundary in our coronal model. Two-dimensional test runs showed promising results for this process. We will perform the 3D MHD coronal simulation of the evolving active region from the appearance of the first strong magnetic patches at the surface until the sunspots have fully formed. This numerical model would provide a first unifying approach to describe the emergence of an active region from below the solar surface up into the corona. In the second part of this project we will address the pivotal question of the role of the parameterization of the magnetic resistivity in the dynamics of
the evolving corona. In a series of computations accompanying the models of the first part, we will investigate the influence of different parameterizations of the magnetic resistivity on the dynamics and structure of the corona. The magnetic resistivity will be varied over magnetic Reynolds numbers from 1 to 100. Additionally, we will use a temperature-dependent (Spitzer) parameterization of the magnetic resistivity as well as hyper-resistivity. These simulations are of high interest for a proper interpretation of the numerical experiments done so far on the heating of the solar corona. For our numerical studies we will employ the Pencil code, a high-order modular 3D MHD code that we have adapted for coronal problems. Used by a large astrophysical community, this code is well suited for massively parallelized applications and has been run by our group on various Tier-0 systems (e.g. Curie, Hermit).
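The magnetic Reynolds number quoted above is the standard dimensionless ratio of magnetic-field advection to magnetic diffusion:

```latex
R_m = \frac{U L}{\eta}
```

with U a characteristic velocity, L a characteristic length scale and η the magnetic diffusivity; varying R_m from 1 to 100 therefore spans two orders of magnitude in the effective resistivity of the simulated corona.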
Resource awarded: 8 600 000 core-hours on SuperMUC (GAUSS@LRZ, Germany)
ENCORASOL – Engineering transition-metal multi-core centers in molecular catalysts for solar fuel production
Project leader: Simone Piccinin, CNR-IOM Democritos and SISSA, Italy
Collaborators: Sara Furlan, CNR-IOM Democritos and SISSA, Italy | Changru Ma, SISSA, Italy
Abstract: Replacing fossil fuels with renewable energy sources like solar and wind requires a strategy to cope with the intrinsic variability of these sources. To this end, the strategy selected by Nature is photosynthesis, where sunlight promotes a series of electrochemical reactions starting from H2O and CO2, producing sugars and releasing O2 as a byproduct. Mimicking this process with artificial devices would allow storing solar energy in the form of chemical fuels. The proposed project addresses one of the main bottlenecks for artificial solar fuel production, namely the lack of an efficient, stable and cheap catalytic material for the anodic reaction [1,2]. This is the oxidation of water, which leads to the evolution of O2 and to the creation of electrons and protons. A common feature shared by all water-oxidation catalysts is the presence of an elementary octahedral structural unit formed by a transition metal ion coordinated by six O atoms (TM-O6). This common motif is found in heterogeneous catalysts (RuO2, IrO2, CoOx surfaces and nanoparticles), in multi-core molecular catalysts (Ru and Co polyoxometalates, amorphous Co-phosphate, the “blue dimer”) and even in single-center molecular catalysts. These materials differ, however, in the number of active octahedra and/or in the actual arrangement of the TM-O6 units. The goal of this project is to understand the role played by the nature, the number and the arrangement of the TM-O6 octahedra in the performance of the catalyst; in particular, we will try to establish correlations between the electronic structure of the TM center and its catalytic activity. To this end we will perform DFT simulations of the catalytic cycle promoted by polyoxometalates [3,4], a class of inorganic molecular catalysts that can be synthesized with various types and numbers of TM ions [3,4].
We will focus on 3d TM elements (Mn, Fe, Co, Ni), given their abundance and their catalytic activity towards water oxidation in both the heterogeneous and homogeneous phase. Hybrid functionals for exchange and correlation are necessary to properly describe the change of oxidation state of these TM elements, making these large-scale simulations extremely demanding and thus requiring the large computational infrastructure provided by PRACE.
Resource awarded: 12 000 000 core-hours on Hermit (GAUSS@HLRS, Germany)
CHIRSIM – From single molecule to macromolecular chirality: probing the mechanism of the photoinduced chiral-to-achiral transition of a fluorene-based polymer film
Project leader: Adriana Pietropaolo, Universita’ di Catanzaro, Italy
Collaborators: Tamaki Nakano, Hokkaido University, Japan
Abstract: Molecular switches based on helical polymers are functional organic frameworks with a subtle control of local spatial arrangement. In this project we aim at elucidating the mechanism of chirality induction in a fluorene-based polymer (PDOF) whose chirality is selectively induced by circularly polarized light. PDOF is a light-emitting and hole-transport material for organic light-emitting diodes, without any chirality center. The elucidation of the chirality-induction mechanism is attractive since it can allow us to understand how to modify the intrinsic aggregate structure of the macromolecule, thus controlling the electronic and optical properties of nanoscale materials. We plan to reconstruct the free energy surface corresponding to the chirality transition of PDOF self-assembly by using Parallel Tempering Metadynamics in the well-tempered ensemble (PTMetaD-WTE), followed by PTMetaD using the chirality metrics. PTMetaD is an extremely efficient parallel method for the calculation of the free energy as a function of one or more collective variables (CVs). We will use as CV a bespoke chirality descriptor, recently introduced in metadynamics. The elucidation of the chirality inversion mechanism can suggest a new strategy to develop artificial helical polymers and supramolecules with controlled handedness, which is crucial for sensing specific molecules, the separation of enantiomers, and asymmetric catalysis.
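For orientation, in well-tempered metadynamics the history-dependent bias acting on a collective variable s is built from Gaussians whose height decays with the bias already deposited. This is the standard form of the method, independent of the specific chirality CV used here:

```latex
V(s, t) = \sum_{t' < t} W \,
  e^{-V\left(s(t'),\, t'\right) / (k_B \Delta T)} \,
  \exp\!\left( -\frac{\left(s - s(t')\right)^2}{2\sigma^2} \right)
```

W is the initial Gaussian height, σ the Gaussian width and ΔT the bias temperature; in PTMetaD such a bias is deposited independently in each replica of a parallel-tempering ladder, combining enhanced sampling along the CV with exchanges across temperatures.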
Resource awarded: 24 000 000 core-hours on FERMI (Cineca, Italy)
ENSING – Euler and Navier-Stokes SINGularities
Project leader: Sergio Pirozzoli, Sapienza, University of Rome, Italy
Collaborators: Paolo Orlandi, Sapienza, University of Rome, Italy | Matteo Bernardini, University of Rome, Italy
Abstract: This research aims at consolidating our notions on the possible existence of a finite-time singularity (FTS) for the Euler and the Navier-Stokes equations, through numerical simulations. In the absence of a solid mathematical proof, numerical simulations can provide invaluable insight into the subject. In this research we aim at performing extremely large-scale simulations of initial-value problems whereby two Lamb vortex dipoles are made to collide, trying to get as close as possible to the hypothesized FTS, and certainly much closer than previous, lower-resolution studies. Closely related to the FTS issue is the question of the mechanisms underlying the formation of Kolmogorov’s k^(-5/3) energy spectral range. We expect that initial-value Navier-Stokes simulations carried out past the FTS time will be able to shed some light on this issue, and in particular to show which turbulence structures are responsible for the onset of Kolmogorov’s inertial scaling.
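The inertial-range scaling referred to above is Kolmogorov's 1941 prediction for the turbulent kinetic-energy spectrum:

```latex
E(k) = C_K \, \varepsilon^{2/3} \, k^{-5/3}
```

where ε is the mean energy dissipation rate, k the wavenumber and C_K the Kolmogorov constant; the simulations aim to identify which vortical structures first establish this range following the near-singular collision of the dipoles.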
Resource awarded: 50 000 000 core-hours on FERMI (Cineca, Italy)
MDFluMem: MD Simulations of Large Membrane Systems from Membrane Protein Arrays to the Influenza Virus
Project leader: Mark Sansom, University of Oxford, United Kingdom
Collaborators: Joseph Goose, University of Oxford, United Kingdom | Tyler Reddy, University of Oxford, United Kingdom
Abstract: Membrane proteins account for 25% of all genes, and they are involved in various diseases ranging from diabetes to cancer. Membrane proteins play a key role in the biology of infection by pathogens, including both bacteria and enveloped viruses such as influenza. They also play an important role in many cellular processes such as signal transduction, transport and cell-cell interactions. It is therefore not surprising that membrane proteins are major targets for a wide range of drugs and other therapeutic agents. Recently, the number of known structures of membrane proteins has started to increase, due to advances in their expression and structural biology. However, the conformational dynamics of the static structures needs to be studied further to better understand their biological function. Molecular dynamics simulations allow the study of the dynamics of proteins in their native membrane environments. However, biologically realistic simulations of membrane proteins require calculations on millions of atoms, and therefore the use of supercomputers like those provided through PRACE is essential for studying their complex dynamic behaviour.
Resource awarded: 22 200 000 core-hours on Hermit (GAUSS@HLRS, Germany)
Flavor Singlets in Lattice QCD: Sea Quark and Gluon Content of Hadrons
Project leader: Gerrit Schierholz, Deutsches Elektronen-Synchrotron DESY, Germany
Collaborators: Paul Rakow, University of Liverpool, United Kingdom | Raffaele Millo, University of Liverpool, United Kingdom | Roger Horsley, University of Edinburgh, United Kingdom | Frank Winter, University of Edinburgh, United Kingdom | James Zanotti, University of Adelaide, Australia | Dirk Pleiter, JSC, Germany | Yoshifumi Nakamura, RIKEN, Japan | Holger Perlt, University of Leipzig, Germany | Arwed Schiller, University of Leipzig, Germany | Hinnerk Stuben, University of Hamburg, Germany
AbstractVery little is known theoretically about the sea quark and gluon content of hadrons, which plays a key role in our understanding of the structure and interactions of hadrons. This includes polarized and unpolarized gluon and quark distribution functions. To determine the production rate of the Higgs boson at the LHC, for example, one needs to know the gluon distribution function in the proton precisely. Another question of long-standing interest is the composition of the hadrons’ spin in terms of their quark and gluon constituents. The sea quark and gluon content of hadrons can be traced back to flavor singlet hadron matrix elements, following the OPE. The calculation of these matrix elements, also referred to as quark-line disconnected diagrams, is one of the greatest technical challenges left in lattice QCD. This is due to the fact that the lattice calculation of disconnected diagrams is extremely noisy and gives a poor signal. In this proposal we will employ a new method, based on the Feynman-Hellmann theorem, to compute flavor singlet matrix elements of selected quark and gluon operators, which eliminates the issue of disconnected contributions at the expense of requiring the generation of additional ensembles of gauge field configurations. This essentially involves computing two-point correlators in the presence of generalized background fields, which we show arise from introducing an additional operator into the action.
Resource awarded: 16 000 000 core-hours on FERMI (Cineca, Italy), and 4 200 000 core-hours on JUQUEEN (GAUSS@FZJ, Germany)
INCOME4WINDFARMS – Innovative Computational Methods for Wind Farms
Project leader: Paolo Schito, Politecnico di Milano, Italy
Collaborators: Raffaele Ponzini, CILEA, Italy | Alice Invernizzi, CILEA, Italy | Alberto Zasso, Politecnico di Milano, Italy | Catherine Gorlé, Stanford University, United States
AbstractThis project concerns a verification and validation process of a CAE methodology applied to High-Performance Computing for the design and optimization of wind turbines and wind farms. The overall computational requests rely on the full workflow that is necessary to fulfil the scientific/technical methodology, which merges experimental measurement techniques with more advanced numerical ones and must be considered as a whole. Analyzing the requests of the numerical computation workflow explained in the Additional Mandatory Material file enclosed with the submission, it is evident that each job consists of two main kinds of task: the computation of the flow field (parallel and distributed memory) and the post-processing (memory intensive). In summary, as the numerical simulation gets to a converged computed flow time instant, before proceeding to the next time-step, a PBS script is generated and submitted to the queuing system in order to manage, in a separate job, the post-processing. In this manner, at each considered time-step, the post-processing can achieve an efficient disk space saving using the VTK file format, an elaboration using ParaView, visualizing and finally storing only the data of interest (usually iso-surfaces and sliced planes) and deleting the OpenFOAM temporary files (obtaining about 99% disk space reduction). The average numerical calculation uses 1024 cores, and every 4 time-steps (estimated 80 seconds of wall-clock time) a new post-processing job is launched. This process uses a single core and runs for approximately 0.5 hours using between 50GB and 100GB of memory, depending on the size of the problem. The memory-intensive post-processing job is the reason our request concerns the ’Fat Node’ configuration of the CURIE machine. The expectation is therefore to have both the computation of the flow field in OpenFOAM (from 512 to 2048 parallel cores) and several post-processing jobs running at the same time.
The strategy used in previous runs (performing calculations on smaller problem dimensions) is to dedicate part of a multi-core node to the parallel calculation, and the remaining part to the post-processing. This strategy allows using the computational resources as an exclusive user, minimizing the total node occupancy and the interference with other users. This strategy has then been applied to larger cases by submitting the post-processing jobs to a Hewlett Packard DL980 machine equipped with 512 GB of RAM at the CILEA computer centre. We expect to obtain a similar working configuration using the ’Fat Node’ partition of CURIE. We are at your disposal to discuss the best strategy to exploit the CURIE characteristics by adopting a hybrid working procedure that launches the computational jobs on the Thin Node partition and the post-processing jobs on the Fat Node partition.
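The per-time-step hand-off described above can be sketched as a small shell helper that, once a flow time instant has converged, writes a single-core PBS post-processing job and (in a real run) submits it; the script names, resource strings and the `pvbatch` driver script are illustrative assumptions, not taken from the project:

```shell
#!/bin/bash
# Hedged sketch of the solver-side hand-off: after each converged time
# step, emit a single-core PBS job that runs the ParaView/VTK
# post-processing and deletes the OpenFOAM temporary files for that step.
# "extract_slices.py" and the resource strings are hypothetical.

make_post_job() {                 # $1 = converged time step, e.g. "0.125"
    local step="$1"
    local job="post_${step}.pbs"
    cat > "$job" <<EOF
#PBS -N post_${step}
#PBS -l select=1:ncpus=1:mem=100gb
#PBS -l walltime=00:30:00
cd \$PBS_O_WORKDIR
# keep only iso-surfaces / slice planes as VTK, then drop raw fields
pvbatch extract_slices.py --time ${step} --out vtk/${step}
rm -rf processor*/ "${step}"      # ~99% disk-space reduction
EOF
    echo "$job"
}

job=$(make_post_job 0.125)
# qsub "$job"    # submission step left commented in this sketch
```

In this arrangement the 1024-core flow computation never blocks on I/O-heavy post-processing: each generated job runs independently on its own (fat) node while the solver advances to the next time step.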
Resource awarded: 10 500 000 core-hours on Curie FN (GENCI@CEA, France)
Fluctuations of Conserved Charges in the Quark Gluon Plasma
Project leader: Christian Schmidt, Universitaet Bielefeld, Germany
Collaborators: Olaf Kaczmarek, Universitaet Bielefeld, Germany | Edwin Laermann, Universitaet Bielefeld, Germany | Marcel Mueller, Universitaet Bielefeld, Germany | Frithjof Karsch, Universitaet Bielefeld, Germany | Wolfgang Soeldner, Universitaet Regensburg, Germany | Peter Petreczky, Brookhaven National Laboratory, United States
AbstractFluctuations of conserved charges in the hot Quark Gluon Plasma are well suited to study the universal critical behavior of Quantum Chromodynamics (QCD) and, even more importantly, can be directly compared to measurements from heavy ion collisions, which are currently performed at the Large Hadron Collider (LHC) at CERN, Switzerland, and the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory, USA. Such a comparison will directly relate results of lattice QCD simulations to the freeze-out parameters of the fireball that is produced in heavy ion collisions. Experimentally very clean observables are provided by the electric charge fluctuations. Unfortunately, electric charge fluctuations are not easy to compute in lattice QCD, since they are very sensitive to the light pion sector. Due to discretization effects, the pion spectrum is distorted on the lattice, unless one goes to very fine lattices. Improved lattice actions help to reduce this problem. It has been shown that the highly improved staggered quark action (HISQ) is the most advanced staggered-type action in this respect. We propose to generate fine-grained gauge field lattices of size 64^3×16, using the HISQ action and physical quark/pion masses, close to the QCD transition temperature, which are subsequently used to calculate various cumulants of electric charge fluctuations and of other conserved charges. We intend to show that, together with results obtained earlier on 48^3×12, 32^3×8 and 24^3×6 lattices, a well controlled continuum extrapolation of electric charge fluctuations is possible. The continuum extrapolated results can then be used to determine freeze-out parameters in heavy ion collisions.
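For context (this convention is standard in the lattice literature, though not spelled out in the abstract), the cumulants in question are the generalized susceptibilities obtained by differentiating the QCD pressure with respect to the chemical potentials of the conserved charges, evaluated at vanishing chemical potential:

```latex
\chi_n^{X} \;=\; \left.\frac{\partial^{\,n}\bigl(p/T^{4}\bigr)}
{\partial\,(\mu_{X}/T)^{n}}\right|_{\vec{\mu}=0},
\qquad X \in \{B,\,Q,\,S\}
```

Ratios of such susceptibilities, e.g. $\chi_4^{Q}/\chi_2^{Q}$, are the quantities that can be compared directly with event-by-event fluctuation measurements at RHIC and the LHC.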
Resource awarded: 26 000 000 core-hours on FERMI (Cineca, Italy), and 41 000 000 core-hours on JUQUEEN (GAUSS@FZJ, Germany)
TRANCE – Correlation-driven phase transitions in Cerium and Cerium-bearing metallic glasses: an ab-initio perspective by quantum Monte Carlo
Project leader: Sandro Sorella, SISSA, Italy
Collaborators: Michele Casula, Université Pierre et Marie Curie, France | Matteo Calandra, Université Pierre et Marie Curie, France | Guglielmo Mazzola, SISSA, Italy
AbstractIn this project we aim to investigate the volume collapse transition in Cerium and its glassy alloys, by employing the latest developments in density functional theory (DFT) and variational quantum Monte Carlo (VMC) calculations. This combination of ab-initio methodologies will allow us to analyze the problem within a unified picture, albeit from different perspectives, with the final goal of shedding light on the polyamorphism of Cerium-alloyed metallic glasses. The spectral properties, the energetics, and the entropy will be studied in elemental Cerium with the aim of deriving a free energy functional description applicable also to the alloys. This will provide new insights into the interpretation of the latest experimental thermodynamics and phonon data for the alpha-gamma transition. From the methodological point of view, elemental Cerium is a unique benchmark system, with extensive experimental data available, as it represents the paradigm for correlated f-electron materials. Thus it offers the possibility to test a recently proposed variational Monte Carlo algorithm which exploits the variational freedom of the wave function to explore the entropy contribution of both ions and electrons, accessing finite-temperature properties via an innovative extended molecular dynamics. This new approach can then be used not only to study the Ce-bearing glassy alloys, but also many other systems where temperature effects of both ionic and electronic entropies are crucial to understanding their physical properties and phases. If the volume collapse polyamorphism were confirmed in the glassy alloys, it would open up a new area for future research, as the Ce-bearing glasses could be designed and functionalized on the basis of f-electron correlation, with many potential applications for their appealing transport properties and mechanical features.
Resource awarded: 17 000 000 core-hours on Curie TN (GENCI@CEA, France)
MEMOIR – Multiferroic and magnetoElectric Metal OrganIc framewoRks
Project leader: Alessandro Stroppa, CNR-SPIN, Italy
Collaborators: Silvia Picozzi, CNR-SPIN, Italy | Paolo Barone, CNR-SPIN, Italy | Domenico Di Sante, University of L’Aquila, Italy | Evgeny Plekhanov, CNR, Italy
AbstractMetal Organic Frameworks (MOFs) are hybrid crystalline materials made up of both inorganic and organic structural elements. They are driving enormous interest not only for their many interesting properties, but also owing to the large variety of molecular topologies and modifications of the organic units, useful for functionalizing specific material properties. Research in MOFs is currently gaining great interest from both the Chemistry and Condensed Matter physics communities. A class of dense MOFs with ABX3 perovskite topology is particularly appealing, since it shows eye-catching properties in areas that have traditionally been dominated by inorganic materials, like magnetism and ferroelectricity. The coexistence of magnetic and ferroelectric order in the same material, i.e. multiferroicity (MF), is of great technological and fundamental importance, in particular when both orders are coupled, i.e. magneto-electric coupling. Multiferroic materials represent one of the main current hot topics in materials science. Despite the large activity devoted to multiferroics, most past and current studies have focussed on inorganic compounds, mainly in the family of perovskite-like oxides. On the other hand, there is a growing expectation that MF in MOFs should show unprecedented properties, not fully realized in standard inorganic compounds, opening a cornucopia of new horizons. A few experimental studies suggesting multiferroicity in MOFs have started to emerge, but, to our knowledge, theoretical simulation of MF MOFs is an almost totally unexplored field. Ab-initio calculations represent a powerful tool for investigating the microscopic origins of ferroelectricity and magnetism, and for attempting to unveil their coupling, i.e. magnetoelectric (ME) coupling.
In this project, we aim to explore “emerging routes” to MFs, to set up guiding lines among the plethora of different degrees of freedom active in magnetic MOFs for a rational design of MOFs with enhanced ferroelectric and magnetic response. Contrary to the inorganic counterparts of multiferroics, such studies are completely lacking in the current literature. We have recently shown by state-of-the-art density functional theory [A. Stroppa, et al., Angew. Chem. Int. Ed. 50, 5847 (2011)] that a particular member of this new family of compounds shows very interesting multiferroic properties strictly related to the organic-inorganic duality. Certainly, there is plenty of room for further theoretical investigations in MF and ME MOFs. The large number of atoms in the unit cell (several hundred), the complexity of the systems (organic and inorganic structural units), the complex magnetic structure involved (non-collinear magnetism and spin-orbit coupling) and the advanced computational techniques we are going to use (like hybrid functionals and GW calculations) require the use of massively parallel architectures.
Resource awarded: 15 000 000 core-hours on FERMI (Cineca, Italy)
Electrophysiology – Atomistic modeling
Project leader: Mounir Tarek, CNRS-Universite de Lorraine, France
Collaborators: Marina Kasimova, CNRS-Universite de Lorraine, France | Lucie Delemotte, Temple University, United States
AbstractExcitable cells such as neurons respond to electrical stimulation by creating and propagating small electrical currents that are mediated by the transport of ions across the membrane by proteins called ion channels. Among them, voltage-gated ion channels (VGCs) close and open to transport ions in response to changes in the polarization state of the membrane and are involved in the transport of the nervous impulse along the axon, the extension of neurons used to communicate with other cells. On the other hand, gap junction channels (GJCs), or connexins, are the channels that transport the electrochemical message that has traveled the length of the axon to the adjacent neuron in electrical synapses. They too close and open in response to changes in the transjunctional voltage. Both categories of channels are sensitive to external factors, such as alteration of the membrane embedding them or the presence of drug-like molecules, and are affected by mutations in their genes that cause so-called channelopathies, or channel dysfunction. As such, specific members of these families are primary pharmacological targets. In this multiyear project, we propose to investigate at a molecular level, using atomistic molecular dynamics (MD) simulations, various aspects of the function of these two categories of channels involved in the propagation of the nervous impulse, with the aim of contributing to the design of drugs that will modulate their activity. Previous work has allowed us to uncover the deactivation pathway of a representative of the potassium VGC family, Kv1.2, the sole channel for which a high resolution crystal structure is available, and to characterize the voltage-dependent parameters associated with its function.
Here, we propose to build on this to conceive for the first time an in silico kinetic model of the function of selected channels, in order to retrieve the current/voltage relationships that are characteristic of VGC function and can be compared directly to electrophysiology recordings. To do so, we propose a sophisticated methodology involving free energy calculations and estimation of the diffusive properties of the deactivation process, which will enable us to retrieve the kinetic constants. We will then study the effect of different modulators, such as the influence of the lipids embedding the channels or of mutations in their sequence, on the kinetic properties. In a second step, carried out in collaboration with electrophysiologists, we propose to investigate the effect of short peptides that were found to be therapeutic agents for the Kv7.1 heart channel. Using free energy protocols, we propose to suggest mutations that will increase the binding affinity of these peptides. Finally, we will study the molecular-level function of the GJC Cx26, the only member of the connexin family for which a crystal structure is available. Using a protocol designed previously by our group to apply a transjunctional voltage, we propose to uncover the response mechanism, to observe the opening and closing (gating) of the channel and to identify meaningful residues involved in voltage sensing. This work has the long-term objective of contributing to the design of new ways to modulate the function of GJCs, with potential applications in pharmacology.
Resource awarded: 28 000 000 core-hours on Curie TN (GENCI@CEA, France), and 42 000 000 core-hours on SuperMUC (GAUSS@LRZ, Germany)
DySMoMAu – Dynamics of Single Molecular Magnets grafted on Gold Surface
Project leader: Federico Totti, University of Florence, Italy
Collaborators: Marcella Iannuzzi, Universität Zürich, Germany | Alessandro Lunghi, University of Florence, Italy | Silviya Ninova, University of Florence, Italy
AbstractRecently, a proof of concept that the magnetic memory effect is retained when SMMs are chemically anchored to a metallic surface was provided. However, control of the nanoscale organization of these complex systems is required for SMMs to be integrated into molecular spintronic devices. A preferential orientation of Fe4 complexes on a gold surface is needed, and it can be achieved by chemical tailoring. The PI’s group has successfully shown that an Fe4C5 complex grafted onto Au(111) through an aliphatic chain of five carbon atoms functionalized with a -SR group conserved the most striking quantum feature of SMMs: their stepped hysteresis loop, which results from resonant quantum tunneling of the magnetization. The preferential orientation of the Fe4 magnetization easy axis inside a cone of 35° with respect to the normal to the surface turned out to be critical. DFT static calculations confirmed the experimental findings, supporting the XMCD data fit, which indicates the cone within which the SMM is supposed to move. However, since the Fe4 core is grafted to the Au(111) surface through the highly flexible aliphatic chain, the dynamical evolution of the structure at finite temperature might induce changes in the direction of the magnetic axis. The description of these processes by ab initio molecular dynamics (AIMD) simulations in a multi-force-environment framework could provide the missing information needed to reduce the number of unknowns in the fit of the magnetic data. Once the sampling of the configurational space is available, we plan to monitor possible variations using the approach adopted in the previous investigation of the static properties of the system, i.e. the broken symmetry approach. By this analysis, we are going to verify whether structural changes in the core region can alter the established exchange pathways and consequently the whole magnetic behavior.
The study through AIMD of organic radicals grafted on Au(111) would represent a kind of benchmark and workout for the more complex Fe4 system. Compounds like nitronyl nitroxides with thiol groups, NitRs where R is CH2-SCH3, provide exciting opportunities for investigating the properties of magnetic SAMs on gold surfaces. From previous studies done by one of us with a DFT stationary approach, the NitRs seem to form straight chains acting as superexchange highways. The grafting on the Au(111) surface imposes no steric or bond restraint; therefore, Nits could rotate freely in a dynamic fashion, leading to mobile superexchange pathways and arrangements. Since these systems are simpler and less computationally demanding, we are going to use them also to validate the sampling strategy described above, i.e. selecting a limited number of snapshots along the trajectory to associate structural changes with variations in the magnetic behavior. The snapshots will be chosen where the multi-force-environment approach shows interesting magnetic structures due to relevant geometrical arrangements. To the best of our knowledge, the work proposed here is going to be the first investigation based on AIMD of SMMs grafted on surfaces.
Resource awarded: 30 000 000 core-hours on FERMI (Cineca, Italy)
FERMI – FERMI shedding new light on Fermi processes in filaments of the Cosmic Web
Project leader: Franco Vazza, Radio Astronomy Institute, Italy
Collaborators: Claudio Gheller, CSCS Lugano, Switzerland | Jean Favre, CSCS Lugano, Switzerland | Marcus Bruggen, Jacobs University Bremen, Germany | Gianfranco Brunetti, Radio Astronomy Institute, Italy
AbstractWe will study with unprecedentedly large dynamical range in grid simulations (2500^3 cells for a volume of side 180 Mpc) the evolution of large scale structures, with a version of ENZO 2.1 (e.g. Bryan & Norman 1995; Norman et al. 2007; Collins et al. 2010) with implementations by our group (Vazza et al. 2012). For the first time we will model in the same simulation the run-time effects of cosmic rays (injected at shocks via “Fermi I” processes), radiative cooling and energy feedback from active galactic nuclei on the thermodynamical structure of large scale filaments of the cosmic web, and compute observables related to the above processes. Our run is designed to achieve the best scientific impact in resolution (50 kpc), statistics (>1000 massive filaments) and physical complexity in the description of large scale filaments in the Cosmic Web from the early to the present cosmic epoch. This study aims at having a great impact on the theoretical knowledge of the dynamics of cosmic baryons outside of the high-density regions routinely probed by X-ray/radio/optical observations (and under the focus of most cosmological simulations), particularly in the case of large scale cosmic filaments. Owing to the thermal and non-thermal physical processes investigated in the run, the observational properties of filaments will be characterized at different wavelengths, allowing a range of forecasts useful for future large area surveys by international collaborations (e.g. the gamma-ray FERMI satellite, the low-frequency radio arrays LOFAR and SKA, the optical satellite EUCLID).
Given the existing uncertainty on the relevant scales for plasma physics around and within filaments (which in turn is related to the poor knowledge of the level of magnetization of filaments, and of its effect in reducing the effective mean free path of the baryon gas), a statistical comparison between our data and available/incoming observations will likely provide robust constraints on the plasma parameters at these scales. For this task, we will use our original physical implementations for cosmic-ray physics, built on top of the public 2.1 version of ENZO (e.g. Norman et al. 2007; Collins et al. 2010), with customized optimization for best performance on the Blue Gene/Q architecture. Our project requires in total 7 000 000 CPU hours, distributed over about 1000 wall clock hours using from 1000 to 8000 cores.
Resource awarded: 1 800 000 core-hours on Curie TN (GENCI@CEA, France)
DIMAIM – DIslocations in Metals using Ab Initio Methods
Project leader: Francois Willaime, CEA, France
Collaborators: Lisa Ventelon, CEA, France | Lucile Dezerald, CEA, France | Emmanuel Clouet, CEA, France | Mihai-Cosmin Marinica, CEA, France | Nermine Chaari, CEA, France | David Rodney, Grenoble Institute of Technology, France
AbstractThis work is in keeping with the general pattern of multiscale modelling of the mechanical properties of materials for fission and fusion energy systems starting from quantum atomistic calculations. The plastic deformation of crystalline materials is governed by the behaviour of dislocations, line defects in the crystalline lattice. Quantitative modelling of dislocation properties requires describing interatomic bonding at the electronic structure level. The Service de Recherches de Métallurgie Physique (SRMP) of CEA/Saclay in France has developed an expertise on the application of ab initio electronic structure calculations of DFT type (Density Functional Theory) to large systems, in order to describe defects in materials, be it irradiation defects or dislocations. The present proposal is focused on the DFT study of dislocations in body centered cubic (bcc) transition metals (V, Nb, Ta, Cr, Mo, W and Fe). These metals form the basis of an important class of structural materials, going from ferritic steels to refractory alloys. These materials are either used or considered for pressurized water reactor vessels, fuel claddings of sodium cooled fast neutron reactors, or first wall/blanket structures and divertor of fusion devices. In these metals, the electronic structure plays a predominant role, owing to the presence of a marked pseudogap in the electronic density of states, and this explains why classical simulations are not sufficient. The objective of this work is to provide in bcc transition metals a quantitative description from first principles of the energy landscape seen by dislocations, including in the material under stress. Properties such as the Peierls potential, glide planes, Peierls stress, deviation from the Schmid law and effect of solutes will be determined. These calculations require both a high accuracy and large supercells (100-300 atoms); they therefore need to be performed on massively parallel computers.
Having previously studied dislocations in iron, the idea is now to perform a systematic study of all the bcc metals in order to investigate the metal dependence of dislocation properties, in relation to experimental observations at the macroscopic scale. These properties are indeed key input data for larger scale simulations, such as dislocation dynamics. This approach may serve as a guide to propose new alloy compositions, in order to improve materials performance.
Resource awarded: 20 000 000 core-hours on MareNostrum (BSC, Spain)
Data analyses of the CURIE High-redshift simulations
Project leader: Gustavo Yepes, Universidad Autonoma de Madrid, Spain
Collaborators: Alexander Knebe, Universidad Autonoma de Madrid, Spain | Tobias Goerdt, Universidad Autonoma de Madrid, Spain | Daniel Ceverino, Universidad Autonoma de Madrid, Spain | Stefan Gottloeber, Leibniz-Institute for Astrophysics Potsdam (AIP), Germany | Jaime Forero-Romero, University of California at Berkeley, United States | Francisco Prada, University of California at Berkeley, United States
AbstractIn a previous PRACE project, with reference number 2010030317, we proposed to make simulations to follow the formation and evolution of a volume-limited sample of dark matter halos which can host Lyman-alpha emitting and Lyman-break galaxies up to redshift 3. Due to the extreme resolution needed to resolve the proper physics of the gas cooling and star formation processes (of the order of tens of parsecs), it is still not feasible to simulate a whole cosmological volume of hundreds of megaparsecs down to this resolution. On the other hand, thanks to recent developments in the generation of initial conditions, we can select certain objects formed in a low-resolution cosmological simulation and resimulate them with very high resolution, by using a zooming technique that resamples the fluids with particles of variable masses within the area from which the object is formed. But, in order to identify the objects and to study their properties and evolution, we had to simulate a full box, both with dark matter and with gas dynamics. We have completed several of these simulations intended for identification of objects at different redshifts. We have simulated a 200 Mpc box with variable numbers of particles. 512^3 and 1024^3 dark-matter-only runs were first made to identify the population of objects. Later, we have been able to simulate a full hydrodynamical run with 2x 1024^3 gas+dark matter particles all the way from z=100 to z=0. Radiative cooling, star formation and supernova feedback have been taken into account. A total of 280 different snapshots were stored in all these simulations for further analysis. Each snapshot takes roughly 50-112 GBytes of raw data that has to be processed. The main task is to run the halo finder to identify all collapsed structures in each of the snapshots. We use the AMIGA HALO FINDER (AHF) MPI/OpenMP code for this purpose. Due to CPU limitations, we have only analysed half of the 280 snapshots stored. Once we have the full dataset analysed, we will be able to accurately produce merger trees for each object identified in the simulation. This will allow us to apply the results of our simulations to many other interesting projects apart from those for which these runs were specifically designed. Last, but not least, we also performed a larger run with 3840^3 dark-matter-only particles.
For this run we stored 80 snapshots, each of them with 1.7 TBytes of raw data. As in the previous cases, we decided to analyse just 10 of the snapshots using the AHF code with 4096 cores and a typical wall clock time of 10-12 hours. Thus, to complete the data analysis of all the simulation outputs, we would need to run AHF on the remaining snapshots.
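The remaining batch analysis can be sketched as a loop that visits the stored snapshots and runs the halo finder only on those not yet marked as analysed. The directory layout, file names and the commented MPI/AHF invocation are illustrative assumptions, not taken from the project; a small demo directory is created here so the loop has something to act on:

```shell
#!/bin/bash
# Demo setup (illustrative): two fake snapshots, one already analysed.
SNAPDIR=demo_snaps
mkdir -p "$SNAPDIR"
touch "$SNAPDIR"/snap_000 "$SNAPDIR"/snap_001
touch "$SNAPDIR"/snap_000.done          # pretend this one is done

# Analyse every snapshot that lacks a .done marker file.
for snap in "$SNAPDIR"/snap_???; do
    [ -e "$snap" ] || continue          # glob matched nothing
    if [ -f "${snap}.done" ]; then
        continue                        # already analysed, skip
    fi
    echo "analysing $snap"
    # mpirun -np 4096 AHF "${snap}.input"  # real run: MPI/OpenMP AHF job
    touch "${snap}.done"                # record completion for restarts
done
```

The `.done` markers make the sweep restartable: if the allocation runs out mid-way, rerunning the same loop resumes at the first unprocessed snapshot instead of redoing completed 10-12 hour analyses.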
Resource awarded: 2 500 000 core-hours on Curie TN (GENCI@CEA, France)
Mocking the Universe: Large Volume Cosmological Simulations for Galaxy Surveys
Project leader: Gustavo Yepes, Universidad Autonoma de Madrid, Spain
Collaborators: Anatoly Klypin, New Mexico State University, United States | Francisco Prada, Consejo Superior de Investigaciones Cientificas, Spain | Stefan Gottloeber, Leibniz-Institute for Astrophysics Potsdam (AIP), Germany | Steffen Hess, Leibniz-Institute for Astrophysics Potsdam (AIP), Germany
AbstractIn the next years there will be a substantial number of observational projects gathering photometric and redshift data from galaxies at different epochs in the universe, covering a large area of the sky (e.g. BOSS, BigBOSS, DES, KIDS, Euclid, eROSITA, etc.). The cartography of the galaxy and dark matter distribution in the Universe will be much more accurate and, consequently, will put strong limits on the viable cosmological model of the Universe. But, in order to derive cosmological constraints from these data, they must be compared with the theoretical predictions of the different cosmological scenarios. The only possible way to do that is by simulating large computational volumes with enough resolution to accurately resolve the same objects as in the galaxy surveys. In this project, we propose to make a set of N-body simulations in gigaparsec-scale volumes with hundreds of billions of particles to resolve halos with circular velocities > 200 km/s. Using the Abundance Matching technique, we have recently shown (see Nuza et al., arXiv:1202.6057) that we can accurately reproduce the clustering properties of LRGs in the BOSS survey using the MULTIDARK dark-matter-only simulation. Thus, for the new runs we propose to make in PRACE, we will be able to extract full-volume mock galaxy catalogues for the current observational experiments BOSS and KIDS and make predictions for future ones like Euclid, BigBOSS, etc. The mock catalogues extracted from these simulations will be publicly released through web-based databases and offered to the scientific astronomical community.
Resource awarded: 22 000 000 core-hours on SuperMUC (GAUSS@LRZ, Germany)