Projects from the following research areas:
Biochemistry, Bioinformatics and Life sciences (8)
MemAlloReg: Understanding the role of the membrane in the allosteric regulation of a cancer-related receptor kinase.
Project Title: MemAlloReg: Understanding the role of the membrane in the allosteric regulation of a cancer-related receptor kinase
Project Leader: Francesco Luigi Gervasio, UK
Resource Awarded: 17,876,000 core hours on MareNostrum
Receptor tyrosine kinases (RTKs) are among the most important targets for anti-cancer therapy. A large family of cell-surface receptors, RTKs regulate critical cellular processes such as proliferation, differentiation, cell survival and migration. Mutations in RTKs and aberrant activation of their intracellular signalling pathways are among the most common causes of cancer. These connections have recently driven vigorous R&D activity to develop anti-cancer drugs that block or modulate RTK activity. Unfortunately, most approved drugs are ATP-competitive inhibitors and suffer from the emergence of drug-resistance mutations or off-target toxicity. For this reason, novel drugs with a different mode of action, such as extracellular allosteric inhibitors, have been actively sought. We have recently used advanced (enhanced-sampling) molecular simulations on a supercomputer to support the development of the first allosteric inhibitor of an RTK (SSR). The results were published in two back-to-back papers in Cancer Cell (the leading journal in cancer research), and we are now studying this drug further to understand the interesting biophysics behind its surprising mode of action. Indeed, the mode of action of this effective inhibitor is novel: we have convincing experimental evidence that it modulates the receptor's interactions with the membrane. In this project we will explore these surprising effects in unprecedented detail. This will not only help develop better derivatives of SSR, but may also pave the way to a new class of allosteric anti-cancer inhibitors of RTKs.
CellPhy-Diffusion and Stability of Proteins in Cell-like environments.
Project Title: CellPhy-Diffusion and Stability of Proteins in Cell-like environments
Project Leader: Fabio Sterpone, FR
Resource Awarded: 20,828,160 core hours on Marconi-KNL
Team Members:
CNRS – FR
CNR – IT
University of North Carolina – USA
Proteins work in an extremely heterogeneous and crowded environment: 5 to 40% of the total intracellular volume is occupied by large biomolecules. It is now well accepted that macromolecular crowding is essential in determining the behavior of proteins in cells. Although many experimental and computational investigations have been performed to evaluate the role of macromolecular crowding, the extent and atomic details of this effect are not fully understood. Experimental manipulation of the intracellular milieu is very challenging, as is the in silico characterization of systems of such large size and spread of length- and time-scales. The aim of this project is to study protein properties in cell-like environments using an innovative multi-scale simulation methodology that combines the quasi-atomistic coarse-grained protein model OPEP (Optimized Potential for Efficient protein structure Prediction) with an efficient treatment of hydrodynamic interactions. This approach offers the unique opportunity to explicitly treat proteins' internal flexibility and couple it to hydrodynamic interactions, and its high computational scalability allows the study of systems of unprecedented size. It will be complemented by an innovative variant of the Replica Exchange methodology that can be deployed to investigate the thermal stability of large systems. Considering selected case studies inspired by our experimental collaborators, we will evaluate the role of local solution environments on protein mobility and stability. The success of this project will demonstrate the potential of our multi-physics, multi-scale approach for investigating the cellular environment at microscopic resolution.
This will open the door to new and original investigations of the molecular aggregation at the origin of neurodegenerative diseases, the diffusion of drugs in cells, and the control of protein chain interactions, clustering, and functional cascades in cell-like conditions. The project was favourably reviewed during the 12th call, with scores of 6/6/5. The referees' comments and our reply to that submission (proposal 2015133129) can be accessed in the webspace, as requested by the reviewing committee.
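The coupling of internal protein flexibility to hydrodynamic interactions mentioned above can be illustrated with the pairwise Rotne-Prager-Yamakawa (RPY) mobility tensor, a standard far-field description of bead-bead hydrodynamic coupling. This is a generic textbook sketch, not the project's OPEP-based implementation; units and parameters are illustrative.

```python
import numpy as np

def rpy_tensor(r_vec, a, kT=1.0, eta=1.0):
    """Rotne-Prager-Yamakawa pair diffusion tensor for two beads of
    radius a separated by r_vec (far-field branch, r >= 2a)."""
    r = np.linalg.norm(r_vec)
    assert r >= 2 * a, "far-field branch only"
    rhat = np.outer(r_vec, r_vec) / r**2          # projector along separation
    pref = kT / (8 * np.pi * eta * r)
    return pref * ((1 + 2 * a**2 / (3 * r**2)) * np.eye(3)
                   + (1 - 2 * a**2 / r**2) * rhat)
```

In Brownian or Stokesian dynamics, such 3×3 blocks are assembled into a many-bead diffusion matrix whose off-diagonal blocks propagate the hydrodynamic coupling between beads.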
Identification of virus in 2700 PanCancer genomes using an improved version of SMUFIN.
Project Title: Identification of virus in 2700 PanCancer genomes using an improved version of SMUFIN
Project Leader: David Torrents, ES
Resource Awarded: 5,529,600 core hours on MareNostrum
Team Members:
Barcelona Supercomputing Center – ES
As a natural evolution of our group's activity over the past years in the analysis of cancer genomes, this proposal aims at contributing to the understanding of the genomic basis of tumor formation and progression. Specifically, as we have done recently (Puente et al., Nature 2015), our goal is to apply new improvements to our software SMUFIN (Moncunill et al., Nature Biotechnology 2014) to identify viral integration events among 2,700 genomes of different tumor types included in the ICGC-PanCancer project (http://pancancer.info and www.icgc.org). In collaboration with in-house computing groups, we are modifying SMUFIN in different ways with the aim of improving the detection of somatic events and, specifically, extending it to more types of events. In this case, we have found that a few modifications of the software allow us to find viral integration events with better accuracy and recall than alternative methods, which are based on pre-alignment of the reads onto a reference genome. We already have preliminary results that show the potential of this new version of SMUFIN applied to tumor genomes. To analyse the 2,700 PanCancer genomes, already in-house, with the new SMUFIN, we request PRACE computing resources.
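SMUFIN's actual method compares tumour and normal reads directly, without the pre-alignment step the abstract contrasts it with. Purely as an illustration of the alignment-free idea, the toy sketch below flags a read as possible evidence of a viral integration breakpoint when it shares k-mers with both a host and a viral sequence set (all sequences and names are hypothetical; this is not SMUFIN's algorithm):

```python
def kmers(seq, k=8):
    """All k-length substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def flag_chimeric(read, host_kmers, viral_kmers, k=8):
    """Toy illustration: a read supporting a viral integration event
    shares k-mers with both the host and the viral sequence sets."""
    rk = kmers(read, k)
    return bool(rk & host_kmers) and bool(rk & viral_kmers)
```

A real detector must additionally localize the breakpoint, handle sequencing errors, and contrast tumour against matched normal reads, which is where the computational cost of analysing 2,700 genomes arises.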
Effects of point mutations on the activation of p38α MAPK.
Project Title: Effects of point mutations on the activation of p38α MAPK
Project Leader: Modesto Orozco, ES
Resource Awarded: 31,000,000 core hours on Marconi-KNL
Team Members:
Institute for Research in Biomedicine (IRB-Barcelona) – ES
p38α is a serine/threonine kinase that interacts with a large number of substrates through which it participates in various cellular processes, such as inflammatory responses, differentiation, cell death, senescence, and tumor suppression. It thus comes as no surprise that the protein has been implicated in several pathological conditions including chronic inflammatory diseases, psoriasis, cancer, and heart and neurodegenerative diseases. Considering also that p38α is essential for normal development, it is important to study its mechanisms of activation; at present, four activation pathways are known. In the most common one, MAPK kinases dually phosphorylate the activation loop, after which the protein undergoes large conformational rearrangements. The second mechanism is specific to T-lymphocytes and involves the ZAP-70 protein kinase, which phosphorylates Tyr323, a site distal from the activation loop. This event leads to autophosphorylation and subsequent activation of p38α, which then exhibits different substrate selectivity. The final two activation mechanisms, which also occur through autophosphorylation, are mediated either by direct interaction with phosphatidylinositol ether lipid analogues or with the TAK1 binding protein 1. In addition to the described mechanisms of activation, two sets of intrinsically active mutants of p38α have also been reported: 1) the D176A, D176A+F327L, and D176A+F327S mutants, which retain 10-25% of the activity of the active wild-type protein, and 2) the Y323T, Y323Q, and Y323R mutants, which show a lower intrinsic activity (1-3%) that is nevertheless comparable to monophosphorylated p38α. In both cases, X-ray structures have revealed that the largest conformational changes occur in the L16 loop, where the mutations disrupt the hydrophobic core formed by several structural elements. 
However, the exact mechanism of activation is still unknown, in part due to the inability of X-ray crystallography to resolve the highly flexible activation loop. Therefore, we would like to use a computational approach, namely extensive all-atom parallel-tempering metadynamics simulations, to study the changes that occur in the protein upon the described mutations. This will allow us to identify the key elements of the activation mechanisms and to clarify the role that the L16 loop, together with specific residues, plays in the autophosphorylation of p38α.
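The replica-exchange machinery underlying parallel-tempering metadynamics can be sketched by its swap-acceptance rule: configurations at neighbouring temperatures are exchanged with a Metropolis probability that accounts both for the temperature difference and for each replica's history-dependent bias. A minimal, generic sketch of the textbook form, not the group's production setup:

```python
import numpy as np

def ptmetad_swap_prob(beta_i, beta_j, U_i, U_j, Vi_xi, Vi_xj, Vj_xi, Vj_xj):
    """Acceptance probability for swapping configurations between replicas
    i and j in parallel-tempering metadynamics.
    U_i, U_j: potential energies of the configurations currently in i and j.
    Vk_xm:    bias of replica k evaluated on the configuration from replica m."""
    delta = ((beta_i - beta_j) * (U_i - U_j)
             + beta_i * (Vi_xi - Vi_xj)
             + beta_j * (Vj_xj - Vj_xi))
    return min(1.0, np.exp(delta))
```

With all biases set to zero the rule reduces to ordinary parallel tempering; the bias terms matter because each replica accumulates its own metadynamics potential along the collective variables.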
CAMEL – CArdiac MechanoELectrics.
Project Title: CAMEL – CArdiac MechanoELectrics
Project Leader: Edward Vigmond, FR
Resource Awarded: 13,000,000 core hours on Marconi-KNL
Team Members:
Medical University of Graz – AT
University of Bordeaux / LIRYC Institute – FR
Advances in science continue to provide ever-increasing amounts of data and detail. With respect to cardiac physiology, MRI and CT can provide sub-100-micron resolution for large hearts, offering unprecedented anatomical detail. Models of single-cell function now incorporate many subsystems, including ion movement, tension development, energetics, and stochastic calcium regulation. These descriptions continue to grow in size as we unravel the workings of the single cell. It is these molecular-level events which give rise to the organ-level behaviour we observe as the large mechanical deformations which pump blood. The electrical and mechanical systems of the heart are intertwined, and we must understand their interaction if we are to progress in the treatment of heart disease. Cardiac modelling has progressed to the point where it can have direct clinical impact. The most complex human heart simulations use meshes with millions of degrees of freedom, have behaviour at each node described by up to 100 differential equations, and must solve linear and nonlinear systems at each time step. Computational requirements are formidable and have hindered full implementation of all the modelling detail possible. The human heart is the most relevant and useful model, yet its size limits the level of detail that can be included. This project will greatly enhance our existing code to decrease the runtime of high-resolution cardiac electromechanical simulations while increasing the level of complexity and detail used in the model. We will achieve this through several avenues. Strong scalability of our code will be improved through the use of better preconditioners and optimal partitioning, and algorithms which show better strong scalability will be used. We will use our simulator for three intertwined goals. The first is to study how electrical activation of the heart through the Purkinje system leads to an optimal mechanical response. 
Working with electrocardiogram data, we will infer cardiac activation. Next, we will consider pathological cases where activation of the heart has been compromised and a pacing device needs to be implanted; we shall determine the ideal placement of leads for cardiac resynchronization therapy. Finally, we shall look at patients with aortic valve disease and aortic coarctation, conditions which both lead to pressure overload in the left ventricle. Using clinical datasets, we will compute pressures which can be further exploited for determining blood flow and therapeutic targets.
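The electromechanical models described above solve reaction-diffusion equations for the transmembrane potential coupled to ionic ODEs at every mesh node. A heavily simplified 1D illustration of one explicit time step, using two-variable FitzHugh-Nagumo kinetics in place of the ~100-variable ionic models (all parameters are toy values):

```python
import numpy as np

def step_fhn_1d(v, w, dt=0.01, dx=0.5, D=1.0, a=0.1, eps=0.01, b=0.5):
    """One explicit Euler step of a 1D monodomain-type model with
    FitzHugh-Nagumo kinetics. v: transmembrane potential, w: recovery."""
    lap = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2  # periodic Laplacian
    v_new = v + dt * (D * lap + v * (1 - v) * (v - a) - w)  # diffusion + excitation
    w_new = w + dt * eps * (v - b * w)                      # slow recovery
    return v_new, w_new
```

Production codes replace the two-variable kinetics with detailed ionic models, use implicit or operator-split solvers on unstructured 3D meshes, and couple the result to nonlinear mechanics, which is where the formidable computational cost arises.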
Genotype imputation of 1 Million patients and controls suffering 44 genetic diseases using large-scale whole-genome sequencing-based reference panels.
Project Title: Genotype imputation of 1 Million patients and controls suffering 44 genetic diseases using large-scale whole-genome sequencing-based reference panels
Project Leader: Josep Mercader, ES
Resource Awarded: 5,632,000 core hours on MareNostrum
Team Members: Barcelona Supercomputing Center – ES
Despite tremendous investments to identify causal genes for complex genetic diseases, such as diabetes and asthma, through genome-wide association studies (GWAS), for the majority of diseases less than 10% of the variance attributable to genetic factors can be explained by the currently identified genetic variants. The goal of this project is to make use of whole-genome sequencing data from the 1000 Genomes Project and the UK10K Exome sequencing project to gain more information and statistical power in 69 GWAS datasets comprising at least 44 different diseases and more than 1 million subjects. This project will also analyze upcoming data from biobanks for which genome-wide genetic and phenotypic data are becoming available, such as the UK Biobank. This will represent the analysis of around 15,000 billion genotypes. To perform this analysis, supercomputing resources and advanced statistical and systems-biology techniques are required. To this end, we have developed a genome-wide imputation workflow, termed GWIMP-COMPSs, that makes use of the COMPSs parallel programming framework. This tool will allow the identification of new genetic regions that increase the risk for several diseases, as well as the fine-mapping of already known susceptibility regions. We will then use manually curated, disease-specific biological pathways and networks reconstructed by our group. Also, using novel in-house-developed pathway and network analysis methods, we will identify new key biological processes and genes perturbed in subjects suffering from a variety of complex diseases, ranging from metabolic to psychiatric and autoimmune diseases. We will experimentally validate the findings by replicating the associations in other cohorts and by analysing the functional effect of the discovered variants in independent cohorts for which DNA and tissue banks are available. 
We expect this project will allow the discovery of novel molecular mechanisms involved in a variety of complex diseases, opening new lines of research, and will provide a novel framework to better exploit existing genetic data to characterize the molecular bases of complex diseases.
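The quoted data volume of roughly 15,000 billion genotypes is consistent with simple scaling: about one million subjects times the number of imputed variants per genome. A quick back-of-the-envelope check (the per-genome variant count is an assumed, illustrative figure, not one stated in the proposal):

```python
subjects = 1_000_000                 # > 1 million patients and controls
variants_per_genome = 15_000_000     # assumed ~15 M imputed variants (illustrative)
genotypes = subjects * variants_per_genome
print(f"{genotypes:.1e} genotypes")  # 1.5e+13, i.e. 15,000 billion
```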
All-atom simulation of the Amyloid-beta-42 peptide interacting with gold nanoparticles (AmyGoNP).
Project Title: All-atom simulation of the Amyloid-beta-42 peptide interacting with gold nanoparticles (AmyGoNP)
Project Leader: Rosa Di Felice, IT
Resource Awarded: 25,000,000 core hours on Marconi-KNL
Team Members:
CNR – IT
University of Modena and Reggio Emilia – IT
There is evidence that Alzheimer’s disease (AD) is triggered by cleavage of the amyloid precursor protein, with consequent expulsion of the cleavage products from the transmembrane region. These cleavage products, specifically amyloid-β (Aβ) peptides, are prone to fibrillation outside their native environment and originate Alzheimer’s plaques. Nanostructured vessels inserted into the cellular environment are being investigated as possible agents to inhibit or control this process with therapeutic precision. A viable initiation channel for protein fibrillation is the sampling of a modified conformational space that favors elongated peptide structures and aggregation between them. In our previous PRACE project, by means of enhanced-sampling molecular dynamics (MD) simulations with atomic precision, we revealed that the interaction with a solid surface stimulates the formation of elongated structures of the Aβ42 peptide. Together with accumulated evidence that the propensity of the single monomer to adopt fiber-like conformations is directly related to the lag time for fiber formation, we have collected key evidence that contact with a flat substrate, which may be the external surface of the cell membrane, lipoproteins, or foreign agents, is pivotal for fibrillation-prone structural deformations. Our results (manuscript attached) also suggest that nanoparticles (NPs) can be used, in a size-dependent manner, to tune fibril formation. We therefore now propose to explicitly test this hypothesis by carrying out enhanced-sampling MD simulations of Aβ42 on Au NPs of different sizes, decorated by different ligands. Note that the complexity of this work is strictly related to the necessity of using a tailored force field for the specific interface, which our group has been developing over the last ten years based on ab initio calculations of the interface electronic structure. 
In the long term we plan to address the following substrates:
• small NPs, 1.5 nm, with a Au25 core and various functionalizations (amine, carboxyl, PEG and their combinations);
• flat surfaces with the same functionalizations, mimicking large AuNPs;
• citrate-capped AuNPs of arbitrary size with a tested coat concentration; we focus on the size Au155.
For this specific project, we request core hours to carry out two simulations for the first group and one simulation for each of the other two groups, for a total of four simulations. Some time in excess of this estimate is requested to include tests on another system with a different NP. Starting from about ten representative structures of Aβ42 in water from our previous PRACE project, obtained by clustering analysis, we will carry out docking simulations on the NPs listed above to identify preferential adsorption sites and orientations. This task will be performed before the start of this project and will elucidate how the peptide/NP interactions are influenced by the size and functionalization of the NP. Guided by the identified docking conformations, during the project we will refine the structures by MD and a few T-REMD simulations over a limited temperature range. We anticipate that the results of this work will ultimately test the hypothesis that NPs of small size can be used to prevent amyloid fibrillation, which would have an unprecedented impact on AD treatment.
MEMDYN – Finite-size effects on the dynamics of lipid membranes in simulations.
Project Title: MEMDYN – Finite-size effects on the dynamics of lipid membranes in simulations
Project Leader: Jürgen Köfinger, DE
Resource Awarded: 12,000,000 core hours on MareNostrum
Team Members: Max Planck Institute of Biophysics – DE
The function of living cells depends crucially on their membranes, which are highly dynamic, quasi-two-dimensional assemblies of a huge variety of lipids and membrane proteins. Recent advances in experimental methods have opened up new possibilities for directly observing the diffusion of membrane proteins, and even single lipids, in crowded membranes. On the computational side, comparable time and length scales can be reached by coarse-grained molecular dynamics (MD) simulations. Usually, MD simulations are performed in cubic boxes with periodic boundary conditions in order to minimize surface effects in exchange for the much less critical finite-size effects induced by the periodicity. In MD simulations of a lipid membrane, the periodic box is usually chosen to be flat in order to reduce the number of water particles and thus the computational cost of the simulation. This box geometry, as well as the confined geometry of the membrane, leads to a special type of geometry dependence of the dynamical properties of the membrane. According to recent findings (Camley et al., 2015), the diffusion of lipids in the membrane is subject to strong finite-size effects caused by hydrodynamic interactions with the periodic images, mediated by the surrounding water layer. Essentially all lipid membrane simulations apply periodic boundary conditions and are thus subject to these finite-size effects. Consequently, we have to develop methods to account for these effects when comparing dynamical properties obtained from simulations with experiments. Our proposed research is designed to confirm these theoretical predictions in large-scale simulations of lipid membranes using a coarse-grained model. Moreover, we have developed an approximation that allows us to easily correct for these finite-size effects, which we will validate in simulations of heterogeneous lipid membranes containing proteins. 
The proposed project will lead to a better understanding of the transition between three-dimensional and two-dimensional diffusion, shed light on general questions about the dynamical properties of lipid bilayers, facilitate the comparison of simulation results with experimental data, and pave the way for large-scale simulations of biological membranes.
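In three-dimensional solvents the hydrodynamic finite-size correction to diffusion coefficients has a well-known leading 1/L behaviour (the Yeh-Hummer correction); for quasi-2D membranes the dependence on the box dimensions is more intricate, as Camley et al. showed. The basic correction workflow, fitting D measured in several box sizes and extrapolating to the infinite-box limit, can nonetheless be sketched on synthetic data with an assumed 1/L form:

```python
import numpy as np

# Diffusion coefficients "measured" in boxes of increasing edge length L
# (synthetic numbers with an assumed D(L) = D_inf - c / L dependence).
L = np.array([5.0, 7.5, 10.0, 15.0, 20.0])  # box edges (nm, illustrative)
D = 1.0 - 2.0 / L                           # apparent D(L)

# Linear fit of D against 1/L; the intercept estimates the infinite-box value.
slope, D_inf = np.polyfit(1.0 / L, D, 1)
```

For membranes, the project's point is precisely that such a one-parameter extrapolation is insufficient and a theory-based correction is needed.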
Chemical Sciences and Materials (6)
Emerging solar cell materials.
Project Title: Emerging solar cell materials
Project Leader: Clas Persson, NO
Resource Awarded: 15,994,400 core hours on MareNostrum
Team Members:
Humboldt-Universität zu Berlin – DE
University of Oslo – NO
Royal Institute of Technology – SE
University College London (UCL) – UK
Dwindling oil resources and increasing global energy consumption make the development of sustainable energy systems one of the greatest challenges of the 21st century. Given the global interest in reducing CO2 levels, the development of solar cells is one of the main research priorities. Today, photovoltaics (PV) generate roughly 0.14 TW, which is only about 0.8% of the total energy consumption (~17 TW). To increase PV capacity substantially, large-area solar cell coverage is needed. This in turn requires the development of emerging solar cell materials with high efficiency, low cost, and good long-term stability. In this project, we focus on the further understanding and development of thin-film solar cell materials. Since the research groups together involve a relatively large number of researchers, we define a project with two connected work packages (WPs): WP1 will explore various high-absorbing Cu-based chalcogenides in order to search for alternative, environmentally friendly compounds with potentially advantageous materials properties. WP2 will explore the properties of organic/inorganic hybrid systems and assess the potential of hybrid materials for solar cell applications. First-principles studies of such materials require calculations for systems containing more than 100 atoms as well as the modelling of complex structures (defect complexes, amorphous solids, etc.). Moreover, since traditional density functional theory (DFT) calculations cannot describe the band structures of these materials accurately, we will use hybrid-functional and GW calculations. The combination of different first-principles methods, together with our coding and method-development experience, will allow us to perform a detailed study of emerging solar cell materials.
AIMODIM – Ab Initio Modelling Of Dislocations In Metals.
Project Title: AIMODIM – Ab Initio Modelling Of Dislocations In Metals
Project Leader: Lisa Ventelon, FR
Resource Awarded: 12,012,444 core hours on MareNostrum
Team Members:
CEA – FR
Université Claude Bernard Lyon 1 – FR
Université de Lorraine – FR
Linköping University – SE
The expertise of the Service de Recherches de Métallurgie Physique (SRMP) of CEA/Saclay in France is mainly focused on the study of the fundamental processes underpinning the science of nuclear materials. The basic research carried out in this framework involves the multiscale modelling of material properties for fusion applications, starting from ab initio electronic structure calculations of the DFT (Density Functional Theory) type and, more specifically, the application of ab initio codes to large systems in order to describe defects in materials, be they irradiation defects or dislocations. The present proposal concerns the modelling of dislocation-glide-controlled effects in alloys based on body-centered cubic (BCC) metals, which form the basis of an important class of structural materials ranging from ferritic steels to refractory alloys. Plasticity is a multiscale phenomenon in which the macroscopic mechanical properties are strongly influenced by the atomic-scale details of the glide of dislocations, responsible for plastic deformation, as well as of their interactions with the crystalline lattice and impurities. In BCC metals, a fundamental understanding of plasticity requires an accurate description of atomic bonding, down to the electronic-structure level, notably owing to the presence of a marked pseudogap in their electronic density of states. The purpose of this two-year proposal is to provide a quantitative description of dislocation motion in BCC metals and alloys using DFT, including the effects of stress and temperature. We have demonstrated that it is possible to reproduce and explain experimental variations of shear stress with crystal orientation in pure BCC metals using a simple law that describes atomic-scale screw dislocation glide, fully parametrized and validated on DFT calculations. Experimental data, on the other hand, are based on uniaxial tensile tests that necessarily involve non-glide stresses, i.e. 
stress components that produce no Peach-Koehler force on the dislocation but are known to strongly influence dislocation glide in BCC crystals. Thus, during the first year, we plan to incorporate non-glide effects by studying the impact of the non-shear components of the stress tensor on screw dislocations. Then, during the second year, in order to improve our physical understanding of the microstructure evolution of materials at high temperatures, we will investigate the temperature dependence of the dislocation properties of BCC metals and their alloys, focusing on the magnetic contribution. Such DFT calculations are extremely computationally demanding, but they would allow for a complete physical picture of plasticity in BCC metals and alloys, particularly in relation to understanding their brittleness and other mechanical properties critical for nuclear applications. They can in turn serve as input for larger-scale plasticity models such as dislocation dynamics, and as a guide for the development of research on the mechanical properties of refractory metals and alloys.
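The distinction between glide and non-glide stresses rests on the Peach-Koehler force, F = (σ·b) × ξ per unit length of dislocation line, with stress tensor σ, Burgers vector b and unit line direction ξ. A minimal numerical sketch in arbitrary units:

```python
import numpy as np

def peach_koehler(sigma, b, xi):
    """Force per unit length on a dislocation line: F = (sigma . b) x xi,
    with stress tensor sigma, Burgers vector b, unit line direction xi."""
    return np.cross(sigma @ b, xi)

# Screw dislocation along z under a shear stress sigma_xz = tau:
tau = 2.0
sigma = np.array([[0.0, 0.0, tau],
                  [0.0, 0.0, 0.0],
                  [tau, 0.0, 0.0]])
b = np.array([0.0, 0.0, 1.0])   # Burgers vector parallel to the line (screw)
xi = np.array([0.0, 0.0, 1.0])
F_glide = peach_koehler(sigma, b, xi)             # glide force of magnitude tau*|b|
F_hydro = peach_koehler(-3.0 * np.eye(3), b, xi)  # hydrostatic stress: zero force
```

A hydrostatic stress is an extreme non-glide example: it exerts no Peach-Koehler force at all, yet non-glide components can still alter the dislocation core and hence the glide behaviour, which is the effect the proposal quantifies with DFT.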
SMoG – Simulation driven Morphing of supported Graphene.
Project Title: SMoG – Simulation driven Morphing of supported Graphene
Project Leader: Valentina Tozzini, IT
Resource Awarded: 15,000,000 core hours on Marconi-KNL
Team Members:
Istituto Nanoscienze del CNR – IT
Scuola Normale Superiore – IT
Exploiting graphene in applications requires its manipulation at the nanometric level. For instance, its use in nano-electronics requires creating and controlling the electronic band gap and/or electronic doping, while its use for gas storage requires the creation of 3D networks. Both applications need structural (corrugation or defect creation) or chemical (substitutional doping, chemical adhesion) manipulation of the sheet. The focus of this proposal is on the study of the relationships between the morphology of graphene and its application-oriented properties, and on the investigation of possible strategies to achieve control of the morphology. By “morphology” we mean the local curvature and defects (structural or substitutional). By “application-oriented properties” we specifically mean physical-chemical reactivity, electronic and electro-mechanical properties. We use a multi-disciplinary approach in which Density Functional Theory calculations and simulations receive feedback from experimental measurements. The efficiency of this multidisciplinary strategy is proven by our previous research, performed with HPC resources (from previous PRACE and national calls). The fidelity of the model system to the real sample is a characterizing element of this proposal. Models must mimic the real system with high accuracy and include the correct symmetries, generally implying large supercells. Here we focus on SiC-supported graphene, grown by evaporation of Si from the Si-exposed (0001) surface. Due to the mismatch between the graphene and SiC lattices, both the buffer carbon layer covalently bonded to the substrate and the overlying graphene sheet are rippled, and the periodicity of the rippling pattern is 13×13 with respect to graphene. Overall, therefore, a realistic supercell including a sufficient part of the substrate sums up to ~1700 atoms, requiring HPC resources. 
It was recently shown that this system displays multi-stability between different corrugation patterns. Our first task in this proposal will be to simulate switching between different corrugation patterns by means of external conditions, such as electric fields, possibly enhanced by graphene electronic doping. Quasi-free-standing graphene (QFSG) on SiC can be obtained by detaching the buffer layer through intercalation of H underneath it. In this case, different model systems of 1200-1300 atoms or 300-400 atoms can be considered, depending on the relative orientation between the graphene and SiC lattices. Here, more fundamental studies are needed because the electronic and transport properties of QFSG depend on defects of the H coverage in a way which is not completely understood. The study of the structural and electronic properties of this system is the second task of this proposal, and the outcomes are likely to give indications on how to tailor the electronic properties of QFSG by acting on the H intercalation. The last part of this proposal addresses the dependence of reactivity on morphology. Reactivity will be operationally measured by the barrier height for H2 adsorption/desorption, evaluated on sites of the model systems with different curvature or defectivity, under different environmental conditions. We expect this study will give indications on how to treat graphene in order to locally manipulate its reactivity and therefore drive its chemical functionalization.
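The ~1700-atom estimate for the supported-graphene supercell can be rationalized with simple cell counting in the commensurate 13×13 geometry; the number of SiC bilayers and the H termination below are illustrative assumptions, not figures from the proposal:

```python
# Atom counting for a 13x13 graphene supercell on SiC(0001) (illustrative).
graphene = 2 * 13 * 13              # 2 atoms per graphene cell, 13x13 supercell
buffer = graphene                   # covalently bonded buffer layer, same mesh
sic_cells_per_bilayer = 108         # commensurate (6*sqrt(3) x 6*sqrt(3))R30 SiC cell
bilayers = 4                        # assumed substrate thickness (illustrative)
substrate = bilayers * 2 * sic_cells_per_bilayer  # one Si + one C per SiC cell
h_passivation = sic_cells_per_bilayer             # assumed H termination of bottom face
total = graphene + buffer + substrate + h_passivation
print(total)  # 1648 atoms, of the order of the ~1700 quoted
```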
HTESO-High Temperature Superconductivity in the iron chalcogenides family: a quantum Monte Carlo study.
Project Title: HTESO-High Temperature Superconductivity in the iron chalcogenides family: a quantum Monte Carlo study
Project Leader: Sandro Sorella, IT
Resource Awarded: 9,900,000 core hours on Marconi-Broadwell, 44,500,000 core hours on Marconi-KNL
Team Members:
Université Pierre et Marie Curie – Paris VI – FR
The discovery of the iron-based high-temperature superconductors (FeSCs) in 2008, about 20 years after the first discovery of high-temperature superconductivity in the cuprate family, has provided further evidence that a new theoretical framework is unavoidable to understand this effect. It is therefore extremely important to develop new simulation techniques able to reproduce the present experimental results, with the final and very ambitious goal of predicting, simply by computer simulation, novel superconducting materials for potential commercial applications. In this project we plan to systematically study a few materials in the iron chalcogenide (Se, Te) family. They show a simpler geometrical structure than the iron pnictides, but at the same time they retain all the interesting properties of other FeSCs; this peculiarity makes the chalcogenides a perfect laboratory for studying unconventional superconductivity both theoretically and experimentally. Particular attention will be focused on iron selenide (FeSe). On the one hand, we will focus on elucidating the connection between magnetism and superconductivity in bulk FeSe by performing accurate first-principles calculations of the electron pairing function in the presence of collinear antiferromagnetism. On the other hand, we will consider the FeSe single layer, as it remarkably shows a record critical temperature in the 40-80 K range on SrTiO3 and other polar substrates, whose effect is simulated in our framework by adding an external electric potential. In the last part of the project we plan to study iron telluride (FeTe), the parent compound of FeSe. Despite their structural similarities, the two show completely distinct behaviors, since FeTe is characterized by long-range antiferromagnetic order at ambient conditions and no superconductivity appears. 
We aim at unraveling the physical reasons behind this difference, and at following the evolution of the magnetism and electronic pairing in FeSe(x)Te(1-x) for x = 1/4, 1/2, 3/4, spanning the whole phase diagram. The interplay between structural, magnetic, and electronic properties in FeSe and FeTe, fully accessible by QMC, can give important insights into the physics of the iron chalcogenides, and in particular into their exotic behavior under pressure, doping, chemical substitution, and mono- or few-layer growth. We have recently implemented in our code a widely used technique for reducing finite-size effects, based on k-point sampling, which allows affordable and more reliable calculations with a much smaller number of atoms. The generalization of this technique to superconducting wave functions is not straightforward but doable, and if successful it will constitute a milestone on the way to simulating high-temperature superconductors entirely from first principles.
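The benefit of k-point (twist) sampling for taming finite-size shell effects can be illustrated on the simplest possible case, non-interacting spinless fermions in one dimension, where twist averaging recovers the exact thermodynamic-limit kinetic energy per particle (pi^2 rho^2 / 6 at density rho), which a single periodic-boundary calculation misses. This toy model is our illustration only, not the project's QMC code:

```python
import numpy as np

def energy_per_particle(n_f, rho, theta):
    """Kinetic energy per particle of n_f free spinless fermions on a ring
    of length L = n_f / rho, with twisted boundary condition angle theta."""
    L = n_f / rho
    n = np.arange(-n_f, n_f + 1)             # candidate momentum indices
    k = 2 * np.pi * (n + theta) / L
    k_occ = np.sort(np.abs(k))[:n_f]         # fill the n_f lowest-|k| states
    return np.sum(k_occ**2 / 2) / n_f

rho, n_f = 1.0, 7
exact = np.pi**2 * rho**2 / 6                # thermodynamic-limit value

e_pbc = energy_per_particle(n_f, rho, 0.0)   # single twist: periodic BC
thetas = (np.arange(400) + 0.5) / 400 - 0.5  # midpoint grid on [-1/2, 1/2)
e_tabc = np.mean([energy_per_particle(n_f, rho, t) for t in thetas])
```

With only 7 particles the twist-averaged energy is essentially exact, while the periodic-boundary result carries a shell-effect error of about 2%; extending this idea to correlated superconducting wave functions is the nontrivial step mentioned above.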
QuMOS – Quantum bits based on silicon Metal-Oxide-Semiconductor transistors.
Project Title: QuMOS – Quantum bits based on silicon Metal-Oxide-Semiconductor transistors
Project Leader: Yann-Michel Niquet, FR
Resource Awarded: 10680000 core hours on Marconi-KNL
Team Members:
CEA – FR
Quantum computing has attracted a lot of interest in the last decade as a way to overcome the limitations of classical computers in a number of problems. It makes use of the wave-like nature of electrons, allowing for new operations based, e.g., on wave superposition and interference. The basic ingredient of a quantum computer is the qubit, a device where the wave functions of electrons – and in particular their spins – can be manipulated coherently in order to represent and process information. The operation of qubits is limited by the lifetime of these waves, whose amplitude and phase can be altered by interactions with the environment. Achieving long coherence times is therefore a key issue for the development of reliable qubits. In this respect, the availability of isotopically pure 28Si has opened new perspectives for the design of solid state qubits. In this material, which is free of nuclear spin disorder, the lifetime of electron spins can reach a few ms, allowing for complex manipulations. Also, a silicon-based quantum information technology can benefit from the state-of-the-art devices and processes developed for standard CMOS technologies. For example, electrons can be confined and manipulated in the so-called “corner channels” appearing in advanced Trigate/FinFET silicon transistors. The complete understanding, design and optimization of silicon qubits calls for advanced simulations at the quantum level. The Non-Equilibrium Green’s Functions (NEGF) method is well suited to that purpose. It can describe, in a seamless way, the effects of quantum confinement, disorder and interactions with other particles. The aims of this project are to support the interpretation of the experiments carried out on silicon qubits, to provide guidelines for the design of these devices, and to make progress in the understanding of their physics.
We will, in particular, analyze the operation of silicon qubits based on FinFET transistors as a function of the geometry of the devices, and assess the effects of disorder on the reliability of qubits.
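As a minimal illustration of the NEGF workflow (not the project's actual code, which treats realistic 3D devices), one can compute the coherent transmission through a one-dimensional tight-binding chain: the semi-infinite leads enter through analytic surface self-energies, and the Landauer transmission follows from T = Γ_L Γ_R |G_1N|². A pristine chain transmits perfectly inside the band, while an on-site defect (a crude stand-in for disorder) suppresses T:

```python
import numpy as np

def transmission(n_sites, energy, t=1.0, impurity=0.0):
    """Coherent NEGF transmission through a 1D tight-binding chain
    coupled to two semi-infinite leads with the same hopping t."""
    # Retarded surface Green's function of a semi-infinite chain, |E| < 2t
    g_surf = (energy - 1j * np.sqrt(4 * t**2 - energy**2)) / (2 * t**2)
    sigma = t**2 * g_surf                     # lead self-energy (both leads)

    h = np.zeros((n_sites, n_sites), dtype=complex)
    for i in range(n_sites - 1):
        h[i, i + 1] = h[i + 1, i] = t         # nearest-neighbour hopping
    h[n_sites // 2, n_sites // 2] = impurity  # optional on-site defect

    a = energy * np.eye(n_sites) - h
    a[0, 0] -= sigma                          # left lead attaches to site 0
    a[-1, -1] -= sigma                        # right lead to the last site
    g = np.linalg.inv(a)                      # retarded device Green's function

    gamma = 1j * (sigma - np.conj(sigma))     # broadening i(Sigma - Sigma†)
    return float((gamma**2 * abs(g[0, -1])**2).real)

t_clean = transmission(8, 0.5)                # ideal chain: T = 1 in the band
t_defect = transmission(8, 0.5, impurity=2.0)
```

The real NEGF machinery used for FinFET qubits adds 3D geometry, electrostatics and scattering self-energies on top of exactly this Green's-function structure.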
How Coordination Pocket and Ligand Type Affect the Water Reduction Mechanism of Cobalt-based Catalysts.
Project Title: How Coordination Pocket and Ligand Type Affect the Water Reduction Mechanism of Cobalt-based Catalysts
Project Leader: Marcella Iannuzzi, CH
Resource Awarded: 10000000 core hours on Marconi-Broadwell
Team Members:
University of Zurich – CH
In the quest for renewable sources of energy, one of the most promising solutions seems to be the production of molecular hydrogen from water activated by solar energy. Recent studies revealed that molecules with Co active centres might achieve a reasonable efficiency in hydrogen evolution, thus being candidates for the replacement of Pt. Several Co-based complexes, such as porphyrin-derived molecules, cobaloximes, and molecules with polypyridyl ligands, have been tested for hydrogen evolution in a homogeneous environment. It has been shown that polypyridyl ligands are superior to cobaloximes in terms of their higher photo-catalytic activity and stability, but display a higher over-potential than the latter. Our goal is to model and investigate, by simulations based on density functional theory, a few of these molecules in aqueous solution. We believe that the first-principles study of the reduction mechanism is going to help in the design of improved molecular catalysts. Our aim is to understand the effects of the number of pyridyl subunits, of the polypyridyl and bipyridyl ligands, of the distortion of the ligands, and of the coordination sphere around Co by means of ab initio molecular dynamics in the condensed phase, within an explicit solvent environment.
Earth System Sciences (2)
Climate SPHINX reloaded.
Project Title: Climate SPHINX reloaded
Project Leader: Jost von Hardenberg, IT
Resource Awarded: 10152000 core hours on Marconi-Broadwell
Team Members:
Ecole Normale Supérieure – FR
National Research Council (CNR) – IT
University of Oxford – UK
Both increased resolution and the use of stochastic parameterization schemes have recently been shown to improve, through a better representation of small-scale variability in climate models, many important climate features, such as the Euro-Atlantic weather regimes. The recent Climate SPHINX (Climate Stochastic Physics HIgh resolutioN eXperiments) project was granted 20 million core hours at SuperMUC in the 10th PRACE call. Storage of the data produced by the project is secured by the EUDAT Pilot Project DATA SPHINX (DATA Storage and Preservation of High resolution climate eXperiments), which provides a widely accessible archive for medium-term storage to facilitate data access and discovery. In Climate SPHINX the EC-Earth climate model was used to investigate the sensitivity of climate simulations to model resolution and stochastic parameterisations. The experimental framework consisted of one historical and one scenario projection following CMIP5 specifications in AMIP configuration (i.e. atmosphere-only integrations forced with observed – for the past – and simulated – for the future – sea surface temperatures). The AMIP integrations were carried out keeping the vertical resolution constant and exploring five different horizontal resolutions, from low (~120 km) to very high (~16 km). Each integration was repeated with stochastic physics activated and with several ensemble members. A logical follow-up of the experiments carried out under the Climate SPHINX umbrella is the inclusion of other components of the climate system in the experimental framework, namely the ocean and the land components. As highlighted in Scaife et al. 2011, model biases become much more evident in the coupled context than in forced atmosphere simulations.
On the other hand, there is also increasing evidence that stochastic parameterization schemes are beneficial for climate models, where they have been observed to improve modes of internal variability such as the El Niño-Southern Oscillation (Christensen et al. 2016). Recent studies have shown significant impacts of stochastic parameterization schemes on climate mean and variability in ocean models (Brankart 2013), on ocean model forcing in a coupled atmosphere-ocean system (Williams 2012) and in sea ice models (Juricke and Jung, 2014). At the University of Oxford, a stochastic parameterization package has been developed for the ocean and sea ice models in EC-Earth. The schemes represent unresolved sub-grid scale variability through stochastic perturbation of key parameters in the models (Juricke and Jung, 2014). Additionally, Weisheimer et al. (2011) demonstrated the importance of the land surface in the simulation of the summer 2003 European heat wave: the simulation of this extreme event was sensitive to uncertainties in land surface processes. For this reason, at the University of Oxford, a stochastic land surface scheme has been developed to represent these uncertainties (MacLeod et al., 2016). In this project we will build on the modelling framework of Climate SPHINX to explore the role and the impact of stochastic oceanic parameterizations and of a new stochastic land-surface scheme in coupled model integrations, both at standard (70 km/1° ocean) and at high resolution (40 km/0.25° ocean) with the EC-Earth Earth-System Model.
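The core of a stochastic-perturbation scheme of this kind can be sketched in a few lines: a first-order autoregressive (AR(1)) process with a prescribed decorrelation and standard deviation multiplies a model tendency or key parameter, as in SPPT-type schemes. The numbers below (phi = 0.95, sigma = 0.3, clipping at 0.9) are illustrative choices, not the values used in EC-Earth:

```python
import numpy as np

def ar1_pattern(n_steps, phi=0.95, sigma=0.3, seed=0):
    """AR(1) perturbation series with stationary standard deviation sigma:
    r[n+1] = phi * r[n] + sigma * sqrt(1 - phi**2) * eta,  eta ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    r = np.empty(n_steps)
    r[0] = sigma * rng.standard_normal()          # start from stationarity
    for n in range(1, n_steps):
        r[n] = phi * r[n - 1] + sigma * np.sqrt(1 - phi**2) * rng.standard_normal()
    return r

r = ar1_pattern(200_000)
# Perturb a model tendency multiplicatively, clipped to keep it physical
tendency = 1.0
perturbed = tendency * (1.0 + np.clip(r, -0.9, 0.9))
```

The clipping and the multiplicative form preserve the sign of the tendency while injecting temporally correlated variability; in a real scheme the pattern would also be correlated in space.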
PRACE-MesoWake – Unified mesoscale to wind turbine wake downscaling based on an open-source model chain.
Project Title: PRACE-MesoWake – Unified mesoscale to wind turbine wake downscaling based on an open-source model chain
Project Leader: Javier Sanz Rodrigo, ES
Resource Awarded: 17700000 core hours on MareNostrum
Team Members:
Barcelona Supercomputing Centre (BSC-CNS) – ES
National Center for Atmospheric Research (NCAR) – USA
National Renewable Energy Laboratory (NREL) – USA
The overall objective of the PRACE-MesoWake project is to support the development of a virtual research laboratory for high-fidelity modelling of atmospheric physics applied to wind energy. A multi-scale approach is proposed to cover all the scales relevant for wind energy applications, from mesoscale processes to microscale terrain and wind turbine wake interaction effects. Better insight into the flow physics has the potential of reducing wind farm energy losses by up to 20% according to the U.S. Department of Energy’s Atmosphere to Electrons (A2e) research initiative. Its European counterpart, the New European Wind Atlas (NEWA) project, aims at reducing uncertainties in wind resource assessment below 10%. These two programs will run in parallel over the next four years to improve our simulation-based design capabilities by improving our understanding of the mesoscale-to-microscale downscaling process and its impact on wind turbine and wind farm design conditions. The unified model interconnects three domain scales, each associated with an open-source code, namely: 1) WRF (Weather Research and Forecasting), from the U.S. National Center for Atmospheric Research (NCAR), to solve by nesting the meteorological downscaling process from global to mesoscale to microscale using Kosović’s LES model; 2) SOWFA (Simulator fOr Wind Farm Applications), from the U.S. National Renewable Energy Laboratory (NREL), an LES model based on the OpenFOAM CFD framework, to solve the interaction of the atmospheric boundary layer (ABL) and the wind farm at microscale using an actuator-line rotor model; and 3) FAST (Fatigue, Aerodynamics, Stress, and Turbulence), an aero-elastic design code developed by NREL and embedded in SOWFA, to couple the wind turbine response with the wind flow aerodynamics.
The model-chain allows two approaches for meso-micro coupling: 1) online dynamic coupling using WRF-LES one-way nested into WRF-mesoscale all the way down to microscale, and 2) offline coupling using WRF outputs to drive SOWFA simulations asynchronously. The first approach maintains numerical consistency across scales; on the other hand, its terrain-following uniform grid is not suitable for steep terrain or detailed wind turbine simulations. The second approach has all the flexibility of the CFD framework to deal with complex geometries, but relies on idealized meteorological conditions (dry atmosphere, no radiation, etc.). A precursor simulation is used to generate microscale turbulence in equilibrium with the background mesoscale forcing, something that is also necessary in the WRF-LES subdomains. To speed up this transition, temperature perturbations at the inflow planes of the microscale domains are used. These methods have been successfully tested in idealized microscale conditions, but their usage has to be generalized to more realistic coupling with mesoscale processes. This PRACE project supports, with HPC resources, the production of virtual experiments for meso-micro model development and validation activities, carried out within the framework of the A2e and NEWA programs. A validation test suite for meso-micro models will be developed following the model evaluation framework of the IEA Task 31 Wakebench. This international dissemination forum will be used to leverage data and organize benchmarking activities with other modelling communities.
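The inflow temperature-perturbation idea can be sketched as follows (a schematic of cell-perturbation-style seeding with made-up cell size and amplitude, not the actual WRF implementation): the inflow plane is tiled into cells of several grid points, and each cell receives a single uniform random potential-temperature perturbation, seeding resolved turbulence without altering the mean state:

```python
import numpy as np

def cell_perturbations(ny, nz, cell=8, theta_max=0.5, seed=1):
    """Piecewise-constant random theta perturbations [K] on an inflow
    plane of ny x nz grid points, constant within cell x cell blocks."""
    rng = np.random.default_rng(seed)
    amp = rng.uniform(-theta_max, theta_max,
                      size=(ny // cell, nz // cell))   # one draw per cell
    return np.kron(amp, np.ones((cell, cell)))         # expand cells to grid

dtheta = cell_perturbations(64, 32)
```

Because each block carries a coherent perturbation of finite size, the disturbances project onto resolvable scales and break down into turbulence within a short fetch, instead of being damped as grid-scale noise would be.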
BurgersHMC – Instantons and Intermittency in Hydrodynamic Turbulence: A Lattice Monte Carlo Approach.
Project Title: BurgersHMC – Instantons and Intermittency in Hydrodynamic Turbulence: A Lattice Monte Carlo Approach
Project Leader: Luca Biferale, IT
Resource Awarded: 18000000 core hours on Marconi-KNL
Team Members:
University of Bern – CH
DESY, Zeuthen – DE
University of Ferrara – IT
University of Rome – IT
The universal scaling behavior of small-scale fluctuations is one of the most intriguing aspects of fully developed turbulence. It is a fundamental problem of out-of-equilibrium statistical physics with key connections to applied problems in fluid mechanics, e.g. small-scale turbulence modeling. The scaling exponents of high-order moments of spatial velocity increments strongly deviate from the mean-field Kolmogorov prediction: it remains a fundamental challenge to derive the empirically observed properties from first principles. As a result, numerical simulations are often considered the ultimate “tool” to address this problem and to investigate challenging new questions. It has been argued that intense velocity fluctuations or energy dissipation events might be connected to the existence of instanton solutions dominating the far tails of the multi-point velocity probability distribution functions. Here we propose a new numerical approach based on the path integral formalism to generate complete space-time histories as single field configurations, which allows us to ask – and eventually to answer – new questions connected to extreme fluctuations in hydrodynamic turbulence. As a first important application, we propose to investigate the case of Burgers turbulence. As for 3D turbulence, it is not known for the Burgers equation how to connect anomalous scaling with the asymptotic instanton-dominated regime. We have recently shown that a highly optimized lattice Monte Carlo (MC) technique can efficiently sample configurations of the Janssen–de Dominicis path integral. This constitutes an important innovation in the computational and algorithmic approaches to classical stochastic PDEs (applicable to a wide class of different problems) and provides us with a new and exciting setting to investigate the statistical properties of instanton configurations with importance sampling methods.
For this project we have optimized a fully parallel Hybrid Monte Carlo (HMC) code to perform a set of large-scale simulations to highlight rare and strong events in single realizations of fluid flow by means of MC sampling and conditioning. The important novelty of this approach lies in the fact that we may impose single- or multi-point constraints on the sampled field configurations. This allows us to improve the importance sampling for instanton configurations, which are distinguished by large field gradients. Our goal is to answer the questions that relate to the relevance of instanton configurations for intermittency and anomalous scaling – an idea that is currently being explored with various analytical approaches. While our method is fully complementary to analytical efforts that attempt to solve the classical instanton equations directly, the important difference is that one can go beyond what is currently possible analytically and evaluate the fluctuation contributions around the classical instanton. It is the effect of these fluctuations that is important to understand in order to assess the relevance of instantons for real-world turbulence.
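The core of an HMC sampler can be illustrated on a toy target (a one-dimensional quadratic "action" S(q) = q²/2, rather than the Janssen–de Dominicis action of the actual code): auxiliary momenta are refreshed, leapfrog dynamics propose a new configuration, and a Metropolis step on the total "energy" corrects the discretization error exactly:

```python
import numpy as np

def hmc(n_samples, grad_s, s, eps=0.2, n_leap=10, seed=0):
    """Hybrid/Hamiltonian Monte Carlo for a target exp(-S(q)), scalar q."""
    rng = np.random.default_rng(seed)
    q, out = 0.0, np.empty(n_samples)
    for i in range(n_samples):
        p = rng.standard_normal()                  # refresh momentum
        q_new, p_new = q, p
        p_new -= 0.5 * eps * grad_s(q_new)         # leapfrog: half-step in p
        for _ in range(n_leap - 1):
            q_new += eps * p_new                   # full step in q
            p_new -= eps * grad_s(q_new)           # full step in p
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_s(q_new)         # closing half-step in p
        # Metropolis accept/reject on H = S(q) + p^2 / 2
        dh = (s(q_new) + 0.5 * p_new**2) - (s(q) + 0.5 * p**2)
        if rng.random() < np.exp(-dh):
            q = q_new
        out[i] = q
    return out

samples = hmc(20_000, grad_s=lambda q: q, s=lambda q: 0.5 * q**2)
```

In the lattice code the scalar q is replaced by a whole space-time field configuration and grad_s by the functional derivative of the discretized action; the leapfrog/Metropolis structure is unchanged, which is what makes HMC attractive for high-dimensional field sampling.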
TMMB: Turbulence modulation by micro-bubbles in shear flows.
Project Title: TMMB: Turbulence modulation by micro-bubbles in shear flows
Project Leader: Carlo Massimo Casciola, IT
Resource Awarded: 41848830 core hours on Marconi-KNL
Team Members:
Sapienza University of Rome – IT
This proposal addresses the numerical simulation of turbulent multiphase flows in the so-called two-way coupling regime. In this regime the carrier flow advects a disperse phase, either light bubbles or inertial particles, which in turn back-reacts on the fluid, leading to a substantial alteration of the carrier phase. Even though this regime is relevant for many industrial and technological applications, its physical understanding is still poor due to the lack of physically effective models and numerically efficient algorithms that capture the inter-phase momentum exchange. We present a new exact methodology, namely the Exact Regularized Point Particle (ERPP) approach, which is able to capture the momentum exchange between the carrier flow and hundreds of thousands of micro-bubbles. In the ERPP approach no “ad-hoc” modelling assumptions are made. The disturbance flow produced by the bubbles is amenable to an analytical solution in terms of the unsteady Stokes equations, which provides a solid physical ground for the methodology. The new approach is physically effective and numerically efficient. In the limit of very small bubbles (point-bubbles) the methodology overcomes the numerical drawbacks of the classical Particle-In-Cell (PIC) method. Direct Numerical Simulations of a bubble-laden homogeneous shear flow are performed to explore the two-dimensional parameter space spanned by the Stokes number [St=0.01; 0.05; 0.1; 0.2; 0.4] and the volume fraction [Phi=0.01; 0.05; 0.1], which maps the turbulence modulation in the limit of small bubbles. Due to the small-scale clustering of the disperse phase, the back-reaction of the bubbles on the fluid is concentrated in those spatial regions where the disperse phase accumulates.
The forcing on the fluid is found to be active across the entire range of turbulent scales; the correct evaluation of the momentum exchange between the two phases via the ERPP approach thus becomes crucial to predict both the dynamics of the bubbles and the alteration of the carrier phase. The analysis is expected to highlight the basic physical mechanisms of turbulence modulation operated by the bubbles, and the results are expected to impact the turbulence models usually adopted in the Reynolds-Averaged Navier-Stokes and Large Eddy Simulation techniques. Turbulence modelling is mainly based on the Kolmogorov view of turbulence, where fluctuations are pumped at the largest scales and dissipated at the smallest ones. In the context of bubble-laden flows this picture becomes highly questionable due to the back-reaction of the disperse phase. Turbulence in the presence of a substantial back-reaction will be characterized by exploiting the Kármán-Howarth equation, which describes the dynamical effects occurring at each scale, e.g. the mechanisms of turbulent kinetic energy production, the energy transfer, and the extra energy source/sink associated with the back-reaction of the bubbles on the carrier phase.
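In its generalized (Kármán–Howarth–Monin) form, the scale-by-scale budget referred to above reads schematically, for homogeneous flows, with δu = u(x+r) − u(x) the velocity increment and δf the increment of the bubble back-reaction force (notation ours, not necessarily that of the project):

```latex
\frac{\partial}{\partial t}\langle |\delta \mathbf{u}|^2 \rangle
+ \nabla_{\mathbf{r}} \cdot \langle \delta \mathbf{u}\, |\delta \mathbf{u}|^2 \rangle
= 2\nu \nabla_{\mathbf{r}}^2 \langle |\delta \mathbf{u}|^2 \rangle
- 4\langle \epsilon \rangle
+ 2\langle \delta \mathbf{u} \cdot \delta \mathbf{f} \rangle
```

The last term is the scale-resolved source/sink due to the disperse phase: when it is comparable to the transfer term at a given separation r, the classical cascade picture no longer applies at that scale.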
SREDIT – Simulations of Radiated Emissions in Densely Integrated Technologies.
Project Title: SREDIT – Simulations of Radiated Emissions in Densely Integrated Technologies
Project Leader: Franco Moglie, IT
Resource Awarded: 12353536 core hours on Marconi-KNL
Team Members:
Universidad de Granada – ES
Universita` Politecnica delle Marche – IT
University of Nottingham – UK
University of Nottingham – UK
University of York – UK
This project is strictly connected to Task 3 (Computationally efficient tools for the simulation of the propagation of the stochastic noise emissions) and Task 4 (Contribution to the evolution of new and existing EMC standards) of the COST Action IC1407, which began in April 2015, and in particular to its WG1 (Numerical methods for addressing the propagation of stochastic fields) and WG4 (Guidelines for the formulation of standards). The project brings together researchers in electromagnetic and stochastic computational techniques. Suitable propagation and data reduction tools will be evaluated and compared. The challenge of this part is in quantifying not only incoherent radiators but also semi-coherent emitters or devices where large areas may have highly correlated fields. Three-dimensional simulations of complex sources in complex environments require Tier-0 machines. The participants of the COST Action with a background in parallel computation are submitting this PRACE project. The access to a Tier-0 machine provided by PRACE and the mobility provided by COST will support each other and will give this research a high probability of success. We will use the same computer code [2-9] as in the previous projects “CSSRC – Complete statistical simulation of reverberation chamber”, approved in the PRACE 7th Regular Call for the year 2013-2014, and “ASOLRC – Advanced simulation of loaded reverberation chambers”, approved in the PRACE 9th Regular Call for the year 2014-2015. The simulation of different geometries, obtained with a set of stirrer angles in the reverberation chamber computation, is replaced by different complex sources for the propagation of the stochastic noise emissions. This modification involves only a small part of the global code; it has been carried out before and required less than two weeks.
The code is mainly divided into three modules: 1) an electromagnetic time-domain solver based on the finite-difference time-domain (FDTD) method; 2) a fast Fourier transform module to obtain the frequency-domain behaviour; 3) a statistical module to obtain the reverberation chamber properties, such as uncorrelated stirrer positions, field uniformity and statistics. All the modules were previously optimized for high-performance parallel computers using a hybrid method (MPI and OpenMP) and were used successfully in the previous PRACE projects. The availability of a code that in a single job solves the electromagnetic analysis in the time domain, performs the fast Fourier transform and evaluates the statistical behaviour of the radiation makes the simulations very appealing. Moreover, the availability of an optimized simulation code will give the results in a short time, avoiding long measurement campaigns. The connection with the other WGs of the COST project will help the other researchers in the development of new near-field sensors, will provide measurements to be compared with the simulations, and will contribute to the evolution of new and existing EMC standards. Not all the participants work mainly on EMC topics: some are mathematicians and physicists working on the more general topic of chaotic systems, which makes this project multidisciplinary.
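Module 1, the FDTD kernel, is conceptually a leapfrog update of interleaved electric and magnetic fields. The one-dimensional sketch below (normalized units, Courant number 1, hard source; a structural illustration only, not the project's 3D chamber code) shows that structure:

```python
import numpy as np

size, steps, imp0 = 300, 260, 377.0   # cells, time steps, free-space impedance
ez = np.zeros(size)                   # electric field
hy = np.zeros(size)                   # magnetic field, staggered half a cell

for q in range(steps):
    # leapfrog: update H from the curl of E, then E from the curl of H
    hy[:-1] += (ez[1:] - ez[:-1]) / imp0
    ez[1:] += (hy[1:] - hy[:-1]) * imp0
    ez[0] = np.exp(-((q - 30.0) / 10.0) ** 2)   # hard Gaussian source at left end
```

At the "magic" Courant number used here the pulse travels exactly one cell per step, so after 260 steps the Gaussian launched around step 30 sits near cell 230; the 3D production code adds the chamber geometry, stirrer/source models and the MPI/OpenMP domain decomposition around the same update pattern.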
DGXTRA – Discontinuous Galerkin method for the X-LES of TRAnsonic flows.
Project Title: DGXTRA – Discontinuous Galerkin method for the X-LES of TRAnsonic flows
Project Leader: Francesco Bassi, IT
Resource Awarded: 20460000 core hours on Marconi-KNL
Team Members:
Università degli Studi di Bergamo – IT
Università degli Studi di Brescia – IT
Università della Calabria – IT
Università Politecnica delle Marche – IT
A wide range of length and time scales characterizes turbulent flows, where the size of the smallest eddies dramatically decreases as the Reynolds number increases. A simulation of all scales, i.e. Direct Numerical Simulation (DNS), is still not feasible at large Reynolds numbers, and to make industrial problems affordable some modelling has to be introduced. The Navier-Stokes equations can be averaged in time, yielding the Reynolds-averaged Navier-Stokes (RANS) equations, where all scales are modelled by means of a turbulence model. The RANS equations can be applied to high-Reynolds-number flows but can be inaccurate in the prediction of some flow features, such as massive separations or laminar recirculation bubbles. Large Eddy Simulation (LES) bridges the gap between no modelling (DNS) and full modelling (RANS) by resolving the large scales of turbulence and modelling the effect of the smaller scales by adding a subgrid viscosity. However, for high Reynolds numbers, where RANS suffers from modelling limitations and LES is still too expensive, hybrid approaches can be considered. In hybrid RANS-LES models, RANS is used close to walls, where LES would be prohibitively costly, while LES is performed in regions of separated flow, where larger eddies can be resolved. Recently we proposed to extend the use of high-order implicit time integration schemes coupled with a high-order DG space discretization to the eXtra-Large Eddy Simulation (X-LES) hybrid model of Kok et al. This combination of innovative numerical technology and turbulence modelling is implemented in our solver, named MIGALE, and has been successfully applied to complex geometries. The goal of the DGXTRA project is to demonstrate the capability of a very high-order DG solver applied to the hybrid RANS-LES solution of compressible flows with strong discontinuities, by efficiently exploiting X-LES and state-of-the-art high-order space and time discretizations.
The main objectives of the project are: i) the assessment of the X-LES approach for internal aerodynamic applications at high Reynolds number, characterized by transonic and moderately detached flows; ii) the provision of reference data for developing and validating models. The interest of the scientific community in high-order codes has recently been confirmed by the funding of the EU Horizon 2020 project TILDA (Towards Industrial LES/DNS in Aeronautics – Paving the Way for Future Accurate CFD) (http://cordis.europa.eu/project/rcn/193362_en.html), grant agreement No. 635962. Our research team is part of this ambitious project, which also aims to exploit High Performance Computing (HPC) resources to get close to a LES/DNS solution with a turn-around time not exceeding a few days. To strengthen the connection between our PRACE proposal and the TILDA project, we will perform our solver assessment on two flow problems belonging to the TILDA test case repository, i.e., the shock/boundary-layer interaction on a swept bump and the transonic flow through an axial compressor rotor (NASA Rotor 37).
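The DG discretization underlying such a solver can be summarized by its element-wise weak form: for a conservation law ∂u/∂t + ∇·F(u) = 0, one seeks a piecewise-polynomial u_h such that, on every element K and for every test function v_h in the same polynomial space,

```latex
\int_K \frac{\partial u_h}{\partial t}\, v_h \,\mathrm{d}x
- \int_K \mathbf{F}(u_h) \cdot \nabla v_h \,\mathrm{d}x
+ \oint_{\partial K} \hat{\mathbf{F}}\!\left(u_h^-, u_h^+\right) \cdot \mathbf{n}\, v_h \,\mathrm{d}s = 0 ,
```

where the numerical flux F̂ couples neighbouring elements through the interface states u_h∓ and provides the stabilization. Raising the polynomial degree raises the formal order of accuracy without widening the stencil, which is what makes DG attractive for implicit time integration and for parallel scalability.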
INTERFACE – Turbulent/non-turbulent interfaces in Newtonian and viscoelastic fluids.
Project Title: INTERFACE – Turbulent/non-turbulent interfaces in Newtonian and viscoelastic fluids
Project Leader: Carlos Silva, PT
Resource Awarded: 11452000 core hours on MareNostrum
Team Members:
Faculdade de Engenharia da Universidade do Porto (FEUP) – PT
Instituto Superior Tecnico – PT
The vast majority of turbulent flows encountered in nature and engineering are characterized by strong inhomogeneities and strong dynamical imbalances. Strong inhomogeneities are ubiquitous near walls or near interfaces separating two flow regions, e.g. a turbulent region and an irrotational region, as in mixing layers, wakes, jets and boundary layers, i.e. the turbulent/non-turbulent interface (TNTI). Indeed, at these interfaces flow quantities such as the kinetic energy and the scalar variance evolve very rapidly, and this determines many of the governing flow features of the turbulent region, such as the turbulent entrainment and the mixing of scalars. On the other hand, strong dynamical imbalances exist whenever turbulent flows are rapidly evolving due to changing conditions, e.g. when coherent vortices originated in the transition to turbulence in free shear flows ’modulate’ the small-scale dynamics in the turbulent region of the flow, or when the upper and lower layers of planar jets interact. Another example of strong (small-scale) imbalance occurs near the TNTI, where the dissipative small scales of motion are very far from equilibrium. In these situations, which are ubiquitous in engineering applications, the viscous dissipation exhibits a very distinct behavior compared to that considered in the classical theory. This behavior, which has recently been termed non-equilibrium turbulence, is characterized by an imbalance between the non-linear energy transfer to the small scales (the energy cascade) and the turbulent dissipation. This departure from Kolmogorov’s local equilibrium invalidates any state-of-the-art turbulence model, either for RANS or LES, since they hinge on this concept. Although strongly inhomogeneous and non-equilibrium flows are the rule rather than the exception in engineering and geophysical flows, the vast majority of turbulence research and modelling efforts concern weakly inhomogeneous and quasi-equilibrium turbulent flows.
This is due to the fact that strongly inhomogeneous and non-equilibrium flows pose formidable conceptual problems to the classical theory of turbulence. However, recent developments in the understanding of the underlying mechanisms at turbulent/non-turbulent interfaces [1,2,3,8,9], as well as of the origin of the non-equilibrium turbulence behavior [4,5,10], highlight the importance of tackling the underlying mechanisms in these flow regions directly, rather than relying on the (incorrect) belief that they can be treated as extensions of, or small departures from, the classical theory of turbulence. The same issues occur in the prediction of the mixing and dispersion of a passive scalar by a turbulent flow [2,11]. Another situation where the energy cascade mechanism is very different from the classical picture arises in complex fluids, such as viscoelastic fluids based on polymeric and surfactant additives, which have industrial applications in flows of smart thermal fluids in district heating and cooling systems, in pipeline oil transport, and in oil and gas well-drilling, because duct flows of such fluids may show significant reductions in friction and heat transfer, which has motivated extensive research. However, very few details are known about the behaviour of these fluids in shear-free flows, in the vicinity of sharp interfaces, or in strongly imbalanced turbulence in particular. In addition, for viscoelastic fluids the modern predictive techniques are almost exclusively based on Reynolds-averaged Navier-Stokes (RANS) equations calibrated for wall flows. However, it is well known that these models have inherent limitations and that large-eddy simulations (LES), which have been increasingly employed in engineering applications, should be used. Yet the possibility of using LES for non-Newtonian viscoelastic fluids remains elusive, as attested by the extremely small number of LES studies of such flows.
Arguably, the most important feature that any subgrid-scale model must represent is the kinetic energy flux from the resolved (or grid) scales into the unresolved (or subgrid) scales. In Newtonian fluids the development of subgrid-scale models is largely based on the Richardson-Kolmogorov energy cascade concept, according to which the smallest scales of motion are essentially isotropic, statistically universal and dynamically passive, simply dissipating whatever kinetic energy is injected into them from the large scales. This simple concept, which describes well turbulent Newtonian flows away from solid walls, explains much of the success of classical SGS models. Again, a key ingredient of this picture is the so-called equilibrium assumption, which states that all the kinetic energy transferred from the large into the smallest scales of motion is balanced (on average) by the viscous dissipation acting on the smallest scales. However, when dealing with non-Newtonian fluids one is immediately confronted with a much more challenging situation, because of the extremely complex SGS interactions arising there. Fluid elasticity is associated with the deformability of some of the constituents of a fluid, and its interaction with the turbulent structures significantly changes the energy distribution across scales. The present understanding of the physical mechanisms of these interactions is very limited, and even the most solid facts about the dynamics of the small scales no longer hold in non-Newtonian inertio-elastic turbulence. For example, in a dilute solution of long polymer molecules, an important fraction of the kinetic energy transferred into the small scales is dissipated by the polymer [17,18], but in some cases one may observe the formation of a polymer-induced energy cascade that competes with the classical energy cascade [17,19].
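The purely dissipative character of classical eddy-viscosity closures can be made concrete with a Smagorinsky-type sketch (2D synthetic field, illustrative constant; an illustration of the general idea, not one of the models developed in the project): the modelled SGS energy flux Π = 2 ν_t S_ij S_ij is non-negative by construction, i.e. the model can only drain energy from the resolved scales, precisely the assumption that breaks down for viscoelastic fluids:

```python
import numpy as np

n = 64
dx = 1.0 / n
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

# A smooth synthetic periodic 2D velocity field (illustrative, not DNS data)
u = np.sin(2 * np.pi * X) * np.cos(4 * np.pi * Y)
v = np.cos(6 * np.pi * X) * np.sin(2 * np.pi * Y)

# Resolved strain-rate tensor via periodic central differences
dudx = (np.roll(u, -1, 0) - np.roll(u, 1, 0)) / (2 * dx)
dudy = (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / (2 * dx)
dvdx = (np.roll(v, -1, 0) - np.roll(v, 1, 0)) / (2 * dx)
dvdy = (np.roll(v, -1, 1) - np.roll(v, 1, 1)) / (2 * dx)
s11, s22, s12 = dudx, dvdy, 0.5 * (dudy + dvdx)

ss = s11**2 + s22**2 + 2 * s12**2          # S_ij S_ij
s_mag = np.sqrt(2 * ss)                    # |S| = sqrt(2 S_ij S_ij)

cs, delta = 0.17, dx                       # Smagorinsky constant, filter width
nu_t = (cs * delta) ** 2 * s_mag           # eddy viscosity, always >= 0
pi_sgs = 2 * nu_t * ss                     # modelled SGS energy flux, >= 0
```

A model capable of representing polymer-induced backscatter would need pi_sgs to take both signs, which no pure eddy-viscosity closure of this form can do.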
Thus the present research project focuses on strong inhomogeneities (such as sharp fluid interfaces) and strong dynamical imbalances, which are ubiquitous in turbulent flows, and on complex fluids, where the most basic turbulence-theory assumptions, upon which virtually all turbulence models are based, do not hold. The project is based on direct numerical simulations (DNS), in which all the relevant scales of motion are explicitly simulated, from the large energy-containing scales associated with the kinetic energy of the flow down to the smallest-scale motions associated with the energy dissipation. When feasible, DNS is virtually unsurpassed by any other technique and is equivalent to an experiment; moreover, it gives access to the full unsteady, three-dimensional, multiscale complexity of any flow quantity of interest. We chose two simple (and relevant) flow configurations involving TNTIs in which the typical inhomogeneous and imbalanced flow features we want to address are present: a) evolving TNTIs of shear-free turbulence (SFT), and b) TNTIs of turbulent planar jets. The two configurations present sharp turbulent/non-turbulent interfaces, associated with strong inhomogeneities, and fast-developing turbulence dynamics, i.e. intense dynamical imbalance [1,4,20]. An innovative aspect of this proposal consists in analysing these flows/flow regions using the Kolmogorov and Yaglom equations [7,21], which allow the full characterisation of the inertial-dissipative multiscale interactions for the velocity and scalar fields, respectively. These equations, often used to analyse homogeneous flows, have recently been extended to and employed in the analysis of inhomogeneous flows, and are remarkable for the powerful description of the turbulence dynamics they provide, simultaneously in physical space and in the space of scales.
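For reference, in their homogeneous isotropic inertial-range form the two equations reduce to the classical 4/5 and 4/3 laws (the generalised forms used for inhomogeneous flows carry additional transport terms, which we do not reproduce here):

```latex
% Kolmogorov equation: third-order structure function of the
% longitudinal velocity increment \delta u_L over separation r,
% with mean dissipation rate \varepsilon
\langle (\delta u_L)^3 \rangle = -\tfrac{4}{5}\,\varepsilon\, r
% Yaglom equation: mixed structure function for a passive scalar
% \theta, with scalar dissipation rate \varepsilon_\theta
\langle \delta u_L\,(\delta \theta)^2 \rangle = -\tfrac{4}{3}\,\varepsilon_\theta\, r
```

Departures from these balances near interfaces and in imbalanced regions are exactly what the scale-by-scale budgets are designed to quantify.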
Sheared turbulent thermal convection.
Project Title: Sheared turbulent thermal convection
Project Leader: Roberto Verzicco, IT
Resource Awarded: 82925000 core hours on Marconi-KNL
Team Members:
Rayleigh-Benard (RB) flow, the flow in a box heated from below and cooled from above, is one of the paradigmatic systems in fluid dynamics. Next to pipe, channel, and Taylor-Couette flow, the RB system is used to test various new concepts in fields such as instabilities, nonlinear dynamics and chaos, pattern formation, or turbulence, on which we focus here. RB flow is a relevant model for many phenomena, such as thermal convection in the atmosphere, in the oceans (including thermohaline convection), in buildings, in process technology, and in metal-production processes. For ’not too strong’ driving, there is a ’reasonable’ understanding of turbulent RB convection. Here, ’not too strong’ means that while the bulk of the flow is turbulent, the boundary layers (BLs) still have laminar-type characteristics and scalings, and ’reasonable’ means that one can predict the global transfer properties of the flow. However, this so-called ’classical’ regime breaks down once the driving is strong enough that the BLs become turbulent too. In the last few years, experimental researchers have been able to realise the ultimate regime of RB turbulence, but this has not yet been achieved in Direct Numerical Simulations (DNS), hindering a full understanding of this regime and of the transition towards it. The objective of this project is to numerically reach the ultimate regime of RB turbulence by shearing the flow, i.e., by imposing mechanical driving in addition to the thermal one, thus lowering the onset Rayleigh number for the ultimate regime. In this way we will achieve mixed convection, which can be tuned at will, with the thermal and mechanical driving strengths as independent parameters. Ultimately, we want to better understand the onset of ultimate turbulence and how the flow organises itself in the ultimate regime.
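The control parameters and the heat-transport scalings at stake can be summarised in standard notation (our summary, not taken from the proposal; the classical exponent is often measured slightly below 1/3, and the ultimate scaling carries logarithmic corrections):

```latex
% Control parameters for a layer of height L, temperature
% difference \Delta, thermal expansion coefficient \beta,
% kinematic viscosity \nu and thermal diffusivity \kappa:
Ra = \frac{g\,\beta\,\Delta\,L^3}{\nu\,\kappa},
\qquad Pr = \frac{\nu}{\kappa}
% Heat-transport response (Nusselt number):
Nu \sim Ra^{1/3} \quad \text{(classical regime, laminar-type BLs)}
\qquad
Nu \sim Ra^{1/2} \quad \text{(ultimate regime, turbulent BLs)}
```

Adding shear introduces a wall Reynolds number as an independent parameter, which is what allows the onset Rayleigh number of the ultimate regime to be lowered.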
Mathematics and Computer Science (0)
Universe Sciences (4)
CIMS – Convection in massive stars: towards a better understanding of stellar evolution.
Project Title: CIMS – Convection in massive stars: towards a better understanding of stellar evolution
Project Leader: Cyril Georgy, CH
Resource Awarded: 13300000 core hours on MareNostrum
Team Members:
Max-Planck-Institut for Astrophysics – DE
Keele University – UK
University of Arizona – USA
Massive stars play a key role in the evolution of the Universe. Through their winds and explosions, they are major contributors to the chemical enrichment of the Universe, and are responsible for most of the energy and momentum injection in galaxies. Their cataclysmic end can also trigger stellar formation, producing the next generation of stars. It is thus of prime importance to have a precise knowledge of massive stars. Traditionally, stars are modelled by means of 1-dimensional (1d) codes that solve the set of so-called `stellar structure equations’, assuming stars have a perfect spherical symmetry. Physical processes such as turbulence, rotation and magnetic fields, which are by nature highly multi-dimensional (multi-d), are implemented under simplifying hypotheses. The most important turbulent process taking place in stellar interiors is convection. It is usually included in stellar evolution codes in the framework of the mixing-length theory (MLT, Böhm-Vitense, 1958), associated with criteria to determine the boundaries of the convective region. Recently, several studies have shown that the way convection is implemented in 1d codes has a huge effect on the modelling of stellar evolution, and have highlighted the weaknesses of current implementations and the need to go beyond MLT. During the past ten years, the development of computing and storage facilities, and of parallelised algorithms, has allowed the emergence of modern and efficient multi-d hydrodynamics codes dedicated to the study of the physics of stellar interiors. Based on first-principles physics, they do not rely on the simplifying assumptions of 1d codes and can therefore test and improve the theoretical prescriptions implemented in 1d codes. The main goal of this project is to perform a high-resolution simulation of convection associated with shell carbon burning during the late stages of stellar evolution.
It will be useful to determine the properties of the bottom boundary of the convective zone, which is so far unresolved in our lower-resolution simulations. These high-resolution simulations are only feasible on tier-0 HPC facilities with several tens of thousands of cores. Another objective of our project is to simulate for the first time a neon-burning convective shell in 3d. The interplay between burning and convection will be analysed during neon burning (involving photodisintegration) and compared to carbon and oxygen burning (involving fusion reactions). This work will be the starting point of future higher-resolution simulations for this shell. The simulations undertaken will be analysed by, and integrated into the work of, our internationally leading collaboration (supported by the EU ERC SHYNE project) studying convection in various phases of stellar evolution. This project will thus improve stellar evolution models, which represent a crucial theoretical framework for the interpretation of the latest observational surveys undertaken at European telescopes (e.g. VLT FLAMES, etc.) as well as of data from the recently launched EU Gaia satellite, and will address key questions raised in the Astronet roadmap.
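The mixing-length picture that the project aims to go beyond can be summarised by the standard textbook relations (generic MLT scalings with order-unity prefactors omitted; this is our summary, not a formula from the proposal):

```latex
% Convective elements travel a mixing length \ell = \alpha H_P
% (H_P: pressure scale height, \alpha: free parameter) before
% dissolving. Their velocity follows from the superadiabatic
% gradient (\nabla - \nabla_{\rm ad}):
v_{\rm conv}^2 \sim g\,\delta\,(\nabla - \nabla_{\rm ad})\,
  \frac{\ell^2}{H_P}
% Convective flux carried by these elements:
F_{\rm conv} \sim \rho\, c_P\, T\, v_{\rm conv}\,
  (\nabla - \nabla_{\rm ad})\, \frac{\ell}{H_P}
```

The free parameter α and the prescription for the convective boundaries are precisely what multi-d simulations such as those proposed here can calibrate or replace.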
Ab initio 3D MHD simulations of solar and stellar coronae.
Project Title: Ab initio 3D MHD simulations of solar and stellar coronae
Project Leader: Klaus Galsgaard, DK
Resource Awarded: 60000000 core hours on Marconi-KNL
Team Members:
University of Copenhagen – DK
In the solar corona the temperature increases to several million degrees, while the surface is significantly cooler. The coronal heating mechanism has been a long-standing unsolved problem. The purpose of this project is to obtain details of the coronal heating in the Sun, and to investigate how it scales with basic stellar parameters. High-resolution 3D magnetohydrodynamic (MHD) simulations are able to reproduce observed properties with unprecedented accuracy. Until recently, the inclusion of magnetic fields in such simulations displayed shortcomings. For the first time, large-scale, deep-seated, self-consistent magnetic field configurations have been achieved with the Stagger code, a 3D MHD code including a realistic equation of state and detailed, non-gray radiative transfer. Magnetic field lines are advected by the convective motions to the photosphere, leading to the emergence of coronal flux tubes. We have obtained a considerable breakthrough with our ‘ab initio’ simulation of a solar active region from first principles, which will enable us to give a definitive answer to the coronal heating problem (Nature publication in prep.). Lorentz work is performed by the convective motions in the photosphere, thereby building up magnetic stress, which is in turn released in the coronal loops by magnetic dissipation. We need to further investigate the details and finalize our simulations. To unravel the coronae of other types of stars we will also run simulations for four additional stars. By employing a novel particle-in-cell code we will also investigate the driving conditions for charged-particle acceleration on the basis of our realistic corona simulations. This research is currently feasible only at the hosting group, where both types of codes have been developed. The applicant is excellently suited for this project, and he will benefit from the expertise of the host. These pioneering efforts will lead to unique insight into the mechanisms responsible for magnetic activity.
QUANTSTAR – Quantitative Models of a Star Forming Cluster.
Project Title: QUANTSTAR – Quantitative Models of a Star Forming Cluster
Project Leader: Troels Haugboelle, DK
Resource Awarded: 2500000 core hours on Marconi-Broadwell, 19500000 core hours on Marconi-KNL
Team Members:
University of Copenhagen – DK
University of Barcelona – ES
Molecular clouds are made of supersonic, magnetized turbulent cold gas, where energy cascades from large to small scales, generating a roughly self-similar structure down to the gravitationally collapsing scale. This process is a multi-scale phenomenon, and couples the dynamics of molecular clouds at the tens of parsec scale to proto-planetary systems at the astronomical unit scale through the accretion history and magnetic field anchoring. The gravitational collapse channels gas through a disc to the star, in a delicate balance with the environment. Proto-planetary systems consist of dust and gas envelopes, centrifugally supported discs, and powerful outflows, all of which surround a newborn star. Molecules, ices and dust are being destroyed and reformed in a highly dynamical environment, where processes are driven by accretion and the proto-stellar radiation. The new sub-mm facility ALMA is currently providing the first detailed observations of inner discs where planets are harbored. Advanced modeling is crucially important to interpret and make best use of these observations. During the past year, we have built up expertise in complex chemistry, radiative transfer, and dust modeling, leading to the first quantitative models of star forming regions and proto-planetary systems embedded in global models that can be directly compared to ALMA data, and further our understanding of the intricate physics behind star and planet formation. We propose to perform the first ever simulations of a forming cluster of stars in a Giant Molecular Cloud fragment that includes full non-equilibrium H-C-O chemistry and photochemistry. For the large-scale model we will use a simplified density-optical extinction relation, while re-simulations of single stellar objects will be done including ray-tracing radiative energy transfer. 
We propose to use 22 million core hours on Marconi which, thanks to several innovative and new techniques, is enough to 1) evolve a stellar cluster in an already existing model of a 40 parsec Giant Molecular Cloud fragment for an additional 500 kyr at 1 astronomical unit resolution, where we expect roughly 1000 stars to form, and 2) zoom in on selected protostars in the cluster for at least 10 kyr with 0.06 astronomical unit resolution and radiative transfer. This proposal goes far beyond what has been done until now, both by our own and competing groups, with a hitherto unrivalled realistic model of a protostellar cluster, including large-scale anchoring, non-equilibrium photochemistry, and extremely high spatial resolution.
P-DIPS – Planet-disc-interaction in evolving protostellar systems.
Project Title: P-DIPS – Planet-disc-interaction in evolving protostellar systems
Project Leader: Oliver Gressel, DK
Resource Awarded: 9032000 core hours on MareNostrum
Team Members:
Universidad Nacional de Córdoba – AR
Leibniz-Institut für Astrophysik Potsdam – DE
The Niels Bohr Institute – DK
Universidad Nacional Autónoma de México – MX
Since ancient times, we have been aware of only the handful of planets that orbit our Sun, yet the last two decades have witnessed a revolution in astrophysics that culminated a few years ago in the Kepler and CoRoT satellite missions. Today, we know of a few thousand exoplanets orbiting other stars. It is now clear that the structure of our Solar System is probably not typical and that exoplanetary systems exhibit an extremely wide diversity of architectures. Understanding how these systems form and evolve is currently one of the most active areas of astrophysics. The processes that lead to the formation of planets and dictate their subsequent evolution play a fundamental role in shaping the architecture of planetary systems as we observe them today. Because of this, the original configurations in which planetary systems formed were likely very different from the planetary architectures that we are now uncovering. Understanding the connection between planetary formation scenarios and recent observations is critical to unravelling the implications of these data. In the present paradigm, as planets accrete material from the protoplanetary disc, they are subject to interactions with the disc and possibly with other planets. These interactions are mediated by density waves that can exert torques on the migrating planets, whose semi-major axes evolve in time. It is becoming clear that the effects of this planet-disc interaction depend strongly on the complex physical processes governing the dynamics of the disc. Co-investigator P. Benítez-Llambay demonstrated an important example of this in a recent paper in Nature, where it is shown that modified disc thermodynamics, accounting for disc heating by an accreting embryo, can have a strong impact on the torques exerted on the planets.
Fundamental Constituents of Matter (4)
PHHP – Phases of hydrogen at high pressure.
Project Title: PHHP – Phases of hydrogen at high pressure
Project Leader: Carlo Pierleoni, IT
Resource Awarded: 65000000 core hours on Marconi-KNL
Team Members:
CNRS – FR
University of L’Aquila – IT
University of Rome “Sapienza” – IT
University of Illinois at Urbana-Champaign – USA
The physical behavior of hydrogen under strong compression is a fundamental and still unsolved problem of high-pressure physics, probably one of the most challenging for both experiment and theory. The search for metallization under compression, inspired by a seminal work of Wigner and Huntington, has been one of the driving forces behind developments in high-pressure experimental techniques and ab-initio methods. Despite 75 years of intense research, and recent claims of success [3,4], hydrogen metallization at low temperature remains elusive [5-8]. At higher temperature, in the liquid phase, metallization has been observed to occur abruptly [9-14]. Theory predicts a first-order phase transition between a molecular insulating fluid and an atomic metallic fluid [14-19], although discrepancies with experiments on the location of the transition line still remain. From the theoretical perspective, the most relevant, still-missing piece of information is which crystalline structure(s) solid hydrogen adopts with increasing pressure and temperature. This is the essential input to determine the location of the various phase lines and their interplay. Experiments have probed solid hydrogen up to 420 GPa, detecting at most a semi-metallic state (phases IV-V above 200 K [20-25] and, recently, phase VI at lower temperatures [26,27]). There is a general consensus that above ~600 GPa hydrogen will be metallic and monoatomic. In between, there have been a number of theoretical conjectures about the crystalline structure(s) of both the diatomic and the monoatomic phases, including the proposition of a low-temperature liquid phase separating these two crystalline states. Recent ground-state Quantum Monte Carlo calculations have predicted that the C2/c molecular structure of phase III would persist up to 450 GPa, where it would transform into the Cs-IV monoatomic metallic crystal [28,29].
However, nuclear quantum effects are considered in the harmonic approximation only, which could be insufficient for hydrogen [29,30]. Moreover, the C2/c structure seems to be in contrast with recent IR spectra above 360 GPa [26,27]. There is therefore evidence that ab-initio calculations are still missing at least one phase (VI) before reaching the metallic phase. We intend to investigate various aspects of the non-metal-to-metal transition in high-pressure hydrogen by ab-initio methods. In the fluid phase, our aim is to elucidate the origin of the discrepancies between the CEIMC prediction of the phase line and data from static compression experiments [11-13]. We want to compute reflectivity and absorption in the insulating molecular fluid prior to the transition, since these are the experimental probes of the transition. In the solid phase we intend to complete the CEIMC calculations for the various crystalline structures of phase III. However, the most urgent task will be to generate candidate structures for the new phase VI and to investigate their stability by dynamics and free-energy methods. Finally, a still-missing piece of information needed to complete the phase diagram is the melting line beyond 200 GPa and its interplay with the dissociation-metallization line in the fluid phase. This can be obtained by free-energy methods with QMC accuracy within the Coupled Electron-Ion Monte Carlo approach.
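A schematic illustration of how free-energy methods locate such a first-order boundary: the stable phase at each pressure is the one with the lower Gibbs free energy, and the transition sits at the crossing of the two curves. The sketch below uses invented linear free energies in arbitrary units purely for illustration; neither the functions nor the numbers are CEIMC results.

```python
# Sketch: locating a first-order transition as a Gibbs free-energy
# crossing. Both free-energy functions are HYPOTHETICAL toy models.
import numpy as np

def gibbs_molecular(P):
    # toy Gibbs free energy (arbitrary units) of the molecular phase
    return 1.0 + 0.80 * P

def gibbs_atomic(P):
    # toy Gibbs free energy of the monoatomic metallic phase:
    # higher at low P, but with a smaller slope (smaller volume)
    return 2.0 + 0.60 * P

# scan pressures and find the crossing: below P_t the molecular
# phase is stable, above it the atomic phase wins
P = np.linspace(0.0, 20.0, 2001)
dG = gibbs_molecular(P) - gibbs_atomic(P)
P_t = P[np.argmin(np.abs(dG))]
print(f"toy transition pressure: {P_t:.2f}")
```

In practice the free energies come from QMC with statistical errors, so the crossing carries an uncertainty band rather than a single point.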
SIMPHYS: Lattice QCD simulations at the physical point with Nf = 2 + 1 + 1 dynamical flavors.
Project Title: SIMPHYS: Lattice QCD simulations at the physical point with Nf = 2 + 1 + 1 dynamical flavors
Project Leader: Silvano Simula, IT
Resource Awarded: 48000000 core hours on Marconi-KNL
Team Members:
University of Bern – CH
University of Cyprus – CY
DESY – DE
University of Bonn – DE
CNRS – University of Grenoble – FR
University of Rome “Tor Vergata” – IT
University of Southampton – UK
The present project aims at providing the underlying gluon field configurations on which several physical quantities of interest, relevant for the interpretation of ongoing and planned experiments in hadronic and flavor physics, can be evaluated accurately on the lattice. The proponents belong to the European Twisted Mass Collaboration (ETMC), which has produced many gauge ensembles including the complete effects of the first two quark generations in the sea: the light u- and d-quarks and the more massive strange and charm quarks (a setup known as Nf = 2+1+1). Using such Nf = 2+1+1 ensembles and the maximally Wilson twisted-mass fermion action, ETMC has already provided a number of remarkable results for both meson and baryon physics. The major systematic error still present in the ETMC simulations at Nf = 2+1+1 comes from the extrapolation of the results, obtained for pion masses above 220 MeV, down to the physical point at 140 MeV. Therefore our goal is to generate gauge ensembles with Nf = 2+1+1 dynamical flavors having pion, K- and D-meson masses close to their physical values.
Our project will provide the basis for obtaining accurate determinations of many physical quantities that are necessary inputs for ongoing and planned experiments in particle physics, such as: i) the bag parameters of the neutral K-, D- and B-meson oscillations, relevant for the study of CP violation in the Standard Model and beyond; ii) the leptonic decay constants of K-, D- and B-mesons, relevant for the BESIII, BelleII and LHCb experiments; iii) the semileptonic form factors describing the decays of K-, D- and B-mesons, required for the determination of the Cabibbo-Kobayashi-Maskawa matrix elements and relevant for the BESIII, BelleII, LHCb and KLOE experiments; iv) the hadronic contributions to the anomalous magnetic moment of the muon, relevant for the Fermilab and J-PARC experiments; v) the determination of the quark content of the nucleon, relevant for dark matter searches; vi) accurate determinations of isospin-breaking effects, relevant for the complete interpretation of hadron spectra and decay rates; and vii) scattering lengths and phase shifts in multi-meson, meson-baryon and baryon-baryon scattering.
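To make the extrapolation issue concrete: observables computed at unphysically heavy pions are commonly fitted in powers of the squared pion mass and extrapolated to m_π = 140 MeV, and simulating directly at the physical point removes this systematic. The sketch below performs such a leading-order (linear in m_π²) fit on invented data for a generic dimensionless observable; none of the numbers are ETMC results.

```python
# Sketch: leading-order chiral extrapolation O(m_pi) = a + b*m_pi^2
# of a lattice observable from heavy-pion ensembles to the physical
# point. All data points are INVENTED for illustration.
import numpy as np

# hypothetical ensembles: pion masses (MeV) and a toy observable
m_pi = np.array([220.0, 260.0, 320.0, 380.0])
obs = np.array([0.130, 0.134, 0.141, 0.150])

# linear fit in m_pi^2, the leading behaviour suggested by
# chiral perturbation theory for many quantities
coeffs = np.polyfit(m_pi**2, obs, 1)
obs_phys = np.polyval(coeffs, 140.0**2)  # extrapolate to 140 MeV
print(f"extrapolated value at the physical point: {obs_phys:.4f}")
```

In a real analysis one would propagate statistical errors and compare several functional forms; the residual spread between forms is exactly the systematic that physical-point ensembles eliminate.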
AXTRO – Axion Phenomenology and Topological Properties of Strong Interactions.
Project Title: AXTRO – Axion Phenomenology and Topological Properties of Strong Interactions
Project Leader: Massimo D’Elia, IT
Resource Awarded: 30000000 core hours on Marconi-KNL
Team Members:
Univ. La Sapienza Roma – IT
Sissa, Trieste – IT
University of Pisa – IT
University of Southampton – UK
Axions are among the most interesting candidates for physics beyond the Standard Model. Their existence was advocated long ago as a solution to the so-called strong-CP problem through the Peccei-Quinn (PQ) mechanism. It was soon realized that the axion could also explain the observed dark matter abundance of the visible Universe. However, a reliable computation of the axion relic density requires a quantitative estimate of the parameters entering the effective potential of the axion field, its mass and self-coupling, as a function of the temperature T. To this end, it is essential to compute the non-perturbative axion parameters by numerical simulations of QCD at finite temperature. In a recent study we have shown that the dependence of the axion mass on T deviates significantly from what is predicted by the Dilute Instanton Gas Approximation, at least for T up to 500 MeV. If this result is confirmed at higher temperatures, it may have a significant impact on axion phenomenology; in particular, it implies a shift of the axion dark matter window by almost one order of magnitude with respect to the instanton computation. The softer temperature dependence of the topological susceptibility also changes the onset of the axion oscillations, which would now start at much higher temperatures (T ~ 4 GeV). Certainly one might wonder whether a different power-law behavior might set in at temperatures higher than 1 GeV. This calls for a new study extending the range of explored temperatures to at least above 1 GeV, which is the main purpose of this study. As a first improvement, given the higher temperatures, we would like to include dynamical charm quarks in the numerical simulation.
Second, since the axion mass and self-coupling are obtained from the topological charge distribution of QCD, the main numerical obstruction is the freezing of the topological modes at the lattice spacings needed to investigate such temperatures (a < 0.05 fm). To overcome this hard problem we will use one of the new methods that have recently been proposed in the literature. In particular, we want to test the Metadynamics approach, which has been successfully applied to a similar problem in the CP(N-1) model and which we plan to apply to QCD for the first time; preliminary and promising results are reported in Section 3 of this project. In this way we expect to reach temperatures at least as large as 1 GeV.
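The quantity at the centre of the comparison above is the exponent of the power-law fall-off of the topological susceptibility, χ(T) ∝ T^(−b), which fixes the axion mass through m_a²(T) f_a² = χ(T) and hence the onset of the oscillations. The sketch below extracts b by a log-log fit on synthetic data generated with b = 3; the numbers are illustrative only and are not the lattice results of this project (the instanton-gas prediction would be a much steeper fall-off, b ≈ 8 for three light flavors).

```python
# Sketch: fitting the exponent b of chi(T) ~ T^(-b) in log-log
# space. The "data" are SYNTHETIC, generated from b = 3.
import numpy as np

T = np.array([200.0, 300.0, 400.0, 500.0])  # temperatures in MeV
chi = 1.0e9 * T**-3.0                       # synthetic chi(T), b = 3

# a power law is a straight line in log-log coordinates
slope, intercept = np.polyfit(np.log(T), np.log(chi), 1)
b = -slope
print(f"fitted exponent b = {b:.2f}")
# a smaller b means chi(T) stays larger at high T, so the axion
# field starts oscillating (3H ~ m_a) at higher temperatures
```

On real lattice data the fit window and the continuum limit dominate the uncertainty on b, which is why extending the temperature range above 1 GeV matters.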
DenseQCD – Constraining the structure of the phase diagram of strong interaction matter.
Project Title: DenseQCD – Constraining the structure of the phase diagram of strong interaction matter
Project Leader: Christian Schmidt, DE
Resource Awarded: 3000000 core hours on Marconi-Broadwell, 73500000 core hours on Marconi-KNL
Team Members:
Universitaet Bielefeld – DE
A central goal of the physics program pursued with ultra-relativistic heavy-ion beams at hadron colliders, such as the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) in the USA and the Large Hadron Collider at CERN, Geneva, is to map out the phase diagram of strong interaction matter. We propose to study properties of the phase diagram of strong interaction matter at non-vanishing baryon chemical potential using high-order Taylor expansions of the thermodynamic potential of Quantum Chromodynamics (QCD). We plan to perform numerical calculations of Taylor expansion coefficients of the thermodynamic potential up to 8th order within the framework of lattice-regularized QCD. The expansion coefficients will be used to construct expansions of several cumulants of conserved-charge fluctuations that are also studied experimentally in heavy-ion collisions. The calculations proposed here will be performed on lattices of size 48^3×12. Combined with existing data on smaller lattices, they will allow us to perform continuum extrapolations for these thermodynamic observables.
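Schematically, and in standard notation rather than formulas quoted from the proposal, the expansion that circumvents the sign problem at small chemical potential reads:

```latex
% Taylor expansion of the QCD pressure about \mu_B = 0
% (only even powers contribute by CP symmetry):
\frac{P}{T^4} = \sum_{n=0}^{\infty} c_{2n}(T)
  \left(\frac{\mu_B}{T}\right)^{2n},
\qquad
c_{2n}(T) = \frac{1}{(2n)!}\,
  \left.\frac{\partial^{2n}\,(P/T^4)}
       {\partial (\mu_B/T)^{2n}}\right|_{\mu_B=0}
% An 8th-order expansion thus requires the coefficients up to c_8.
% The same derivatives give the cumulants of net baryon-number
% fluctuations compared with heavy-ion data:
\chi_n^B(T) = \frac{\partial^n\,(P/T^4)}{\partial (\mu_B/T)^n}
```

The coefficients c_2n are evaluated at μ_B = 0, where standard Monte Carlo sampling applies, which is what makes the approach feasible on the lattice.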