PRACE Preparatory Access – 34th cut-off evaluation in September 2018

Find below the results of the 34th cut-off evaluation of 3 September 2018 for the PRACE Preparatory Access.

Electron transport through single-protein junctions

Project Name: Electron transport through single-protein junctions
Project leader: Dr Linda Angela Zotti
Research field: Chemical Sciences and Materials
Resource awarded: 50000 core hours on MareNostrum
Description

The field of biomolecular electronics has grown considerably over the past few years. The hope of employing biomolecules such as proteins, peptides, DNA and bacterial nanowires as active elements in electrical devices has prompted a large number of studies on these systems. In the past, charge transfer in biomolecules was analyzed mainly with the aim of understanding biological processes; recent interest, however, has focused on their potential as components in devices. Indeed, together with two other researchers at Universidad Autónoma de Madrid, Juan Carlos Cuevas and Ruben Perez, we are currently organizing a conference on this topic (BIOMOLECTRO, Madrid, August 2018), which is funded by the CECAM and PSI-K networks. Within the plethora of biosystems available in nature, proteins have attracted particular attention because of their redox, optical and chemical-recognition properties. In fact, I am currently leading a project for young researchers, funded by the Spanish Ministry, which is specifically devoted to the study of electron transport through proteins. Besides their possible employment as active elements in electrical devices, electrical-conductance measurements on proteins may in the future serve as a tool to detect defects in their sequence (which in turn could cause various diseases such as cancer). Furthermore, interest in the conductance properties of proteins has also been growing thanks to the large number of conductance measurements recently performed on these systems by STM (Scanning Tunneling Microscopy). In particular, we are collaborating with Ismael Diez Perez in Barcelona, who is an expert in the field. In a previous publication [1] he claimed that inserting a mutation in a blue-copper azurin caused a dramatic change in the transport mechanism by modifying the gate-dependent behaviour. Following his experiments, we are thus studying the electronic properties of the pristine azurin and selected mutants.
We have already studied these proteins in their isolated state and found that our DFT calculations provide a robust description of their electronic properties (a manuscript on our findings is currently in preparation). However, we are now turning to the next challenge, namely the study of these proteins in a junction between two gold electrodes. We will focus on the electronic and transport properties of the whole system and try to shed light on the differences observed in the experiments [1]. [1] Ruiz et al., J. Am. Chem. Soc. 15337, 43 (2017).

Lattice study of the Sp(4) gauge theory with dynamical fermions for composite Higgs models

Project Name: Lattice study of the Sp(4) gauge theory with dynamical fermions for composite Higgs models
Project leader: Dr Ed Bennett
Research field: Fundamental Constituents of Matter
Resource awarded: 50000 core hours on MareNostrum
Description

Gauge theories based on the Sp(2N) group have recently been identified as a candidate for a new strong interaction that could break the electroweak symmetry via a composite Higgs. Further, such a theory is also a minimal candidate model for dark matter via the “SIMP Miracle”. In this programme, we are conducting the first detailed, first-principles study of the Sp(2N) family of theories, using the techniques of lattice gauge theory, in order to assess whether it possesses the properties necessary to explain electroweak symmetry breaking and dark matter.

Full AMG scaling

Project Name: Full AMG scaling
Project leader: Prof Riccardo Rossi
Research field: Engineering
Resource awarded: 50000 core hours on SuperMUC, 100000 core hours on Piz Daint
Description

The proposed project evaluates the scalability of the full AMG capabilities of AMGCL in an MPI context. The new developments extend the Algebraic Multigrid capabilities that were previously available only in OpenMP to allow the use of distributed-memory machines. The new developments also allow taking advantage of multiple GPUs, with an expected speedup in the range of 3-5 with respect to CPU-only formulations.

High-order methods for large eddy simulation of transonic flows in turbo-machinery

Project Name: High-order methods for large eddy simulation of transonic flows in turbo-machinery
Project leader: Dr Ngoc Nguyen
Research field: Engineering
Resource awarded: 50000 core hours on MareNostrum
Description

In order to further advance the understanding of the flow physics in turbomachinery and provide new opportunities for design improvements leading to higher engine efficiency and lower emissions, high-fidelity eddy-resolving simulations are becoming a necessity. This is particularly true for transonic flows in turbomachinery. This project aims at demonstrating the competitiveness and efficiency of a high-order CFD code based on the hybridized discontinuous Galerkin method for LES predictions of complex turbulent flows in turbomachinery. The allocated computational resources will provide us with a unique opportunity to advance the state-of-the-art of CFD simulations in turbomachinery by assessing our LES approach for complex turbomachinery configurations at realistic Reynolds numbers using both CPUs and GPUs. Success in our LES computations will provide a strong case to accelerate investments in transformative high-fidelity simulations.

Scalability Testing of Reinforcement Learning Connection Proving

Project Name: Scalability Testing of Reinforcement Learning Connection Proving
Project leader: Prof. Cezary Kaliszyk
Research field: Mathematics and Computer Sciences
Resource awarded: 100000 core hours on Piz Daint
Description

The increasing complexity of reasoning problems in mathematics and computer science poses a significant challenge for formal proofs. Recently, I have shown that approaches that combine automated deduction with machine learning help significantly on smaller datasets of tens of thousands of proofs [1]. Such applications of machine learning to theorem proving have supported the automated formalization of the renowned proof of the Kepler conjecture, as well as various proofs about computer systems. In this small project, we will evaluate the performance of connection tableau provers for classical first-order logic, augmented with machine learning methods, in a large-scale HPC environment. Our provers combine Monte-Carlo proof search [2] with internal guidance of policy [3] and value in the style of AlphaZero, but applied to theorem proving [4]. The results of the experiments will serve as the basis for an application for a larger-scale project. In that project, I will aim to apply complete reinforcement learning with multiple rounds and hyper-parameter tuning on various formal proof corpora. [1] S. Loos, G. Irving, C. Szegedy, and C. Kaliszyk. Deep Network Guided Proof Search. International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2017). [2] M. Färber, C. Kaliszyk, J. Urban. Monte Carlo Tableau Proof Search. 26th International Conference on Automated Deduction (CADE 2017). [3] C. Kaliszyk, J. Urban. Fairly Efficient Machine Learning Connection Prover. International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2015). [4] C. Kaliszyk, J. Urban, H. Michalewski, M. Olsák. Reinforcement Learning of Theorem Proving. arXiv:1805.07563 (2018).
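The Monte-Carlo proof search described above relies on a UCT-style selection rule to decide which inference branch to explore next. The sketch below is a minimal, hypothetical illustration of that rule (the dictionary fields and the exploration constant are assumptions, not the provers' actual code):

```python
import math

def ucb1(child_value, child_visits, parent_visits, c=1.4):
    """UCB1 score: mean value (exploitation) plus an exploration bonus
    that shrinks as a branch accumulates visits."""
    if child_visits == 0:
        return float("inf")  # unvisited branches are expanded first
    return child_value / child_visits + c * math.sqrt(
        math.log(parent_visits) / child_visits
    )

def select_child(children, parent_visits):
    """Pick the child inference step with the highest UCB1 score.
    In a guided prover, child_value would come from a learned value
    estimate and the prior policy would bias the exploration term."""
    return max(
        children,
        key=lambda ch: ucb1(ch["value"], ch["visits"], parent_visits),
    )
```

For example, a branch with a lower mean value but far fewer visits can still be selected, which is what lets the search escape locally promising but ultimately dead-end proof attempts.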

Many-Body Expanded Full Configuration Interaction

Project Name: Many-Body Expanded Full Configuration Interaction
Project leader: Dr Janus Eriksen
Research field: Chemical Sciences and Materials
Resource awarded: 100000 core hours on Hazel Hen
Description

Because all facets of chemistry are ultimately governed by the set of laws that define quantum mechanics, physical chemical quantities such as, for instance, reaction energies, molecular properties, equilibrium structures, etc., may in principle be decoded directly from theory. Nevertheless, the key equation in the field of quantum chemistry, the time-independent electronic Schrödinger equation, becomes too complex to be solved analytically for any system involving more than one electron, despite its deceptively innocent appearance. For this reason, approximations to the exact, so-called FCI wave function are required in order to extract any kind of information from the Schrödinger equation. Leveraged by technical advances as well as a large volume of novel techniques, the past decade has seen the idea of approximately solving the electronic Schrödinger equation by sampling the Hilbert space in inventive and focused rather than complete manners gain renewed momentum. As elusive as it might sound, FCI-level calculations for diverse types and sizes of molecular systems are becoming increasingly feasible these days, and the particular niche of theoretical chemistry to which such work relates is thus gathering a proportionally increased amount of attention as a direct consequence. In the present project, we will work to optimize the parallel scalability of one such recent addition to this growing body of near-exact methods, MBE-FCI, for a sample of weakly and strongly correlated closed- and open-shell species. In MBE-FCI, the exact correlation energy within a given one-electron basis set is decomposed by means of a many-body expansion in the individual virtual orbitals of a preceding mean-field calculation. Facilitated by a screening protocol to ensure convergence onto the FCI target, and aided as well as accelerated by intermediate base models, results of sub-kJ/mol accuracy have recently been obtained on commodity hardware (J. Phys. Chem. Lett. 8, 4633 (2017) and arXiv:1807.01328) in standard correlation-consistent basis sets of double-, triple-, and quadruple-zeta quality. Although significantly more expensive than its so-called selected-CI counterparts, the embarrassingly parallel nature of the MBE-FCI algorithm holds promise of providing a tractable route towards FCI-level results for larger basis sets as well. In the present project, besides optimizing our code in preparation for the upcoming PRACE Project Access Call 18, we intend to carry out thermochemical calculations of atomization energies, which require MBE-FCI results within basis sets of unprecedented sizes.
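The many-body expansion itself can be illustrated with a toy sketch: the energy of each orbital tuple is corrected by subtracting all lower-order increments it contains, and the increments are summed. Here `e_corr` is a hypothetical stand-in for a correlated calculation in a given orbital subspace; none of the names below come from the actual MBE-FCI code.

```python
from itertools import combinations

def mbe_correlation(orbitals, e_corr, max_order):
    """Sum many-body increments up to max_order: first the one-orbital
    terms e(i), then pair corrections e(ij) - e(i) - e(j), and so on."""
    increments = {}
    total = 0.0
    for order in range(1, max_order + 1):
        for tup in combinations(orbitals, order):
            e = e_corr(tup)
            # subtract every lower-order increment contained in this tuple
            for k in range(1, order):
                for sub in combinations(tup, k):
                    e -= increments[sub]
            increments[tup] = e
            total += e
    return total
```

With a model energy containing only one- and two-body contributions, truncating at second order already recovers the exact result; in practice the expansion is truncated by screening away tuples whose increments fall below a threshold, which is what makes the method embarrassingly parallel.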

Multi-scale simulations of membrane proteins

Project Name: Multi-scale simulations of membrane proteins
Project leader: Dr. Himanshu Khandelia
Research field: Biochemistry, Bioinformatics and Life sciences
Resource awarded: 100000 core hours on Hazel Hen, 100000 core hours on Piz Daint
Description

Ionic gradients of K+, Na+ and Cl- across cellular membranes are key to cellular survival. The transmembrane transport of ions is carried out either by passive ion channels (which facilitate passive downhill transport along the electrochemical gradient) or by active ion pumps (which facilitate uphill transport by spending cellular energy). The transport of ions across the cellular membrane occurs on very fast time scales (nanoseconds) through protein pores that can be less than a nanometer in diameter. Moreover, the proteins reside in the highly dynamic and heterogeneous environment of the lipid membrane, which contains a variety of lipid species that can directly or indirectly influence the functions of these proteins. Using classical computer simulations, which integrate Newton’s second law numerically, one can probe the dynamics and molecular mechanisms of ion transport across such proteins, which are difficult to probe using classical experimental techniques. Knowledge of the molecular mechanism also provides a fundamental rational basis for drug development targeting the numerous diseases linked to the dysfunction of such channels, in particular diseases of the neurological system. Some bacteria employ unique molecular machines, which integrate functional units of both ion channels (ICs) and ion pumps (IPs), to achieve K+ uptake in low-K+ environments. The goal of this project is to unravel, using computer simulations, the molecular mechanism by which this unusual protein complex, KdpFABC, transports K+ across the bacterial membrane. KdpFABC blurs the clear distinction often made between ICs and IPs in biology, and knowledge of its transport mechanism is key to comprehending how and why protein transporters evolved into two, rather than three, major branches.
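The numerical integration of Newton’s second law mentioned above is commonly done in molecular dynamics with the velocity-Verlet scheme. A minimal one-particle sketch (an illustration only, not the simulation package used in the project):

```python
def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Integrate m * a = F(x) with velocity Verlet: update the position
    with the current acceleration, then the velocity with the average of
    the old and new accelerations. The scheme is time-reversible and
    conserves energy well over long trajectories."""
    a = force(x) / mass
    traj = [x]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = force(x) / mass
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
        traj.append(x)
    return traj, v
```

For a harmonic test force F(x) = -kx, the total energy stays close to its initial value over many oscillation periods, which is why this family of symplectic integrators is the standard choice for the nanosecond-scale dynamics described above.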

Attosecond dynamics of radiosensitiser molecules: scaling tests of a non-adiabatic molecular dynamics approach

Project Name: Attosecond dynamics of radiosensitiser molecules: scaling tests of a non-adiabatic molecular dynamics approach
Project leader: Dr Daniel Dundas
Research field: Fundamental Physics
Resource awarded: 100000 core hours on Hazel Hen
Description

A series of scaling tests will be carried out for a massively parallel computer code called EDAMAME (Ehrenfest DynAMics on Adaptive MEshes). This code solves the Kohn-Sham equations of time-dependent density functional theory to describe complex molecules irradiated by intense, short-duration laser pulses. These tests are a precursor to studying attosecond dynamics in nucleobases and radiosensitiser molecules. There are five canonical nucleobases: adenine (A), cytosine (C), guanine (G), thymine (T) and uracil (U). Four of these nucleobases (ACGT) occur in DNA, while another four (ACGU) occur in RNA. As well as these nucleobases, another important class of biological molecules is the 5-halouracils, formed when a halogen atom, such as fluorine (5-FU) or bromine (5-BrU), replaces the C5 hydrogen in uracil. These are called radiosensitisers and can be used to replace thymine in DNA. When exposed to radiation such as ions or photons, lethal damage to DNA containing these radiosensitisers is magnified, and so they find therapeutic use in destroying cancerous cells. Our goal is to study the interaction of attosecond laser pulses with these molecules in order to further understand the onset of radiation damage in DNA by radiosensitiser molecules. Attoscience is the study of dynamical processes occurring in matter on the attosecond timescale (1 attosecond = 0.000000000000000001 seconds). This corresponds to the natural timescale for electronic processes, and so attoscience has seen spectacular progress over the last several years. In particular, one of the main goals of attosecond chemistry is steering chemical reactions through correlated electronic processes. Realising this goal is key to the development of future ultrafast technologies: examples include the design of electronic devices, probes and sensors, biological repair and signalling processes, and the development of optically-driven ultrafast electronics.
The scaling tests proposed here will allow accurate and efficient calculations to be carried out. Two of the most widely used tools in attoscience are high harmonic spectroscopy and photoelectron spectroscopy. The results produced from such calculations will give important insights into the dynamical processes that occur as the molecules fragment and will act as powerful benchmarks for experimental work.

Modelling the mechano-chemical interactions of fluid interfaces

Project Name: Modelling the mechano-chemical interactions of fluid interfaces
Project leader: Dr. Daniel Santos-Olivan
Research field: Biochemistry, Bioinformatics and Life sciences
Resource awarded: 50000 core hours on MareNostrum
Description

Fluid interfaces are ubiquitous in cell and tissue biology. For instance, lipid bilayers are fluid surfaces that separate the interior of the cell from the outer medium, creating a physical and chemical barrier, and withstand forces coming both from inside and outside the cell. Moreover, the inner membranes of organelles such as the endoplasmic reticulum, mitochondria or the Golgi apparatus are also formed by lipid bilayers. The shape of these systems is predominantly determined by curvature elasticity, whereas in-plane they behave as a viscous, almost inextensible fluid. This mechanical duality provides structural stability and adaptability at the same time, allowing membranes to build relatively stable structures that can nevertheless undergo dynamic shape transformations, such as those required in vesicular trafficking, cell motility and migration, mechano-adaptation to stretch and protein diffusion. These transformations can require very large deformations, such as during cell division. Another instance of a fluid surface relevant to the mechanics of the cell is the actomyosin cortex. The cortex is formed by a network of actin filaments undergoing dynamic remodeling, in which myosin motors generate active tension. The mechanical activity of the cortex is essential for processes such as cell migration or cytokinesis. At the tissue scale, epithelial monolayers are sheets of cells lining important organs, such as the lungs or intestines, that can be modeled as active continuum fluids, providing a good understanding of important biological processes such as morphogenesis. The rheology of epithelial tissues is regulated to a large extent by the underlying actomyosin cortex in cells. Thus, a physically meaningful model of epithelial tissues requires a multiscale approach that incorporates the mechanics of the cortex in each individual cell.
In our group, we model these different fluid surfaces using continuum models based on Onsager’s variational principle, which brings together all the mechano-chemical interactions in these biological systems in a non-linear and thermodynamically consistent way. We solve the resulting partial differential equations numerically by means of finite element simulations. The equations coupling the in-plane physics with the elasticity of the surface involve higher-order partial differential equations that are usually stiff and difficult to integrate in time. They require the basis functions used to discretize the problem with the Finite Element Method to be at least H2 functions: square-integrable functions whose first- and second-order derivatives are also square-integrable. This makes the usual Lagrange interpolation not applicable, and we are therefore required to use smooth approximants, such as subdivision surfaces or B-splines. In addition, the need to track efficiently the surface movement, its shape changes and its interaction with the bulk fluid makes this problem computationally very challenging.
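The smoothness requirement above can be illustrated with the uniform cubic B-spline basis function: unlike piecewise-linear Lagrange hat functions, it is C2-continuous across its knots, so both first and second derivatives are square-integrable. A minimal sketch (independent of the group's actual discretization code):

```python
def cubic_bspline(t):
    """Uniform cubic B-spline basis on [0, 4): four cubic pieces joined
    with continuous value, first and second derivatives at the knots."""
    if 0 <= t < 1:
        return t**3 / 6
    if 1 <= t < 2:
        return (-3 * t**3 + 12 * t**2 - 12 * t + 4) / 6
    if 2 <= t < 3:
        return (3 * t**3 - 24 * t**2 + 60 * t - 44) / 6
    if 3 <= t < 4:
        return (4 - t) ** 3 / 6
    return 0.0
```

Shifted copies of this function sum to one (partition of unity), and a central finite difference across a knot recovers a well-defined second derivative, which is exactly the H2 regularity the curvature-elasticity terms need.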

ATLAS production and simulation jobs running on HPC facilities

Project Name: ATLAS production and simulation jobs running on HPC facilities
Project leader: Dr Santiago Gonzalez de la Hoz
Research field: Fundamental Physics
Resource awarded: 50000 core hours on MareNostrum
Description

We need CPUs to run simulations of the proton-proton collision events in our detector, and supercomputers have spare CPUs. The usage of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems, and it is also very attractive due to the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems to a more generic Linux-type platform. This change means that the deployment of non-HPC-specific codes has become much easier. The timing of this evolution perfectly suits the needs of ATLAS and opens a new window of opportunity. The ATLAS experiment at CERN will begin a period of high-luminosity data taking in 2022. This high-luminosity phase will be accompanied by a need for increasing amounts of simulated data.

Modelling Extreme sea states in the Atlantic Ocean using a spectral wave model

Project Name: Modelling Extreme sea states in the Atlantic Ocean using a spectral wave model
Project leader: Dr. Sonia Ponce de Leon Alvarez
Research field: Earth System Sciences
Resource awarded: 50000 core hours on MareNostrum
Description

This project aims at the calibration of the third-generation spectral wave model WAM (Günther and Behrens 2012; Weisse and Günther 2007; WAMDI Group 1988; Komen et al. 1994) and at understanding its scalability properties in realistic oceanographic applications. The WAM model is the root of all spectral wave models existing so far, such as WAVEWATCH-III (Tolman 2014) and SWAN (The SWAN Team 2014). Due to global warming, the occurrence of extreme events such as hurricanes is increasing, and as a direct consequence large waves, the so-called rogue waves, are also increasing, with destructive effects on ships and coasts. This project intends to shed light on the distribution of the most important indicator of the occurrence of rogue waves, the Benjamin-Feir Index (BFI) (Benjamin and Feir 1967). Spectral wave models use the BFI and the kurtosis to predict rogue waves (Janssen 2003; Ponce de León and Guedes Soares 2014). These two quantities rely on the four-wave nonlinear interaction term. The scalability study will focus on the entire Atlantic Ocean. The following approaches are to be adopted: a) use HPC, since high resolution is required in time, in space and from the spectral point of view for 10 years (2008-2018); b) gather scalability data in this preparatory step to allow a better understanding of the resources necessary for accurate high-resolution hindcasts of rogue sea states. The final objective of this project is to understand how the BFI was distributed in the past 50-year period and what correlation exists with climatic indicators such as the North Atlantic Oscillation (NAO). This will considerably advance the current knowledge of wave physics and of the modeling of extreme events and rogue waves.
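As an illustration of the indicator discussed above: one common convention (after Janssen 2003) defines the BFI as proportional to the ratio of wave steepness to relative spectral bandwidth. The function below is a hypothetical sketch of that definition, not WAM code, and the sqrt(2) prefactor follows that particular convention:

```python
import math

def benjamin_feir_index(steepness, rel_bandwidth):
    """BFI ~ sqrt(2) * (wave steepness) / (relative spectral bandwidth).
    A steep, narrow-banded sea state (BFI of order one or larger) is
    prone to modulational instability and hence to rogue-wave formation."""
    return math.sqrt(2.0) * steepness / rel_bandwidth
```

The formula makes the physical point in the text explicit: for fixed steepness, narrowing the spectrum raises the BFI, which is why high spectral resolution matters for hindcasting rogue sea states.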

Monte Carlo Simulation for the ATLAS Experiment at the CERN LHC

Project Name: Monte Carlo Simulation for the ATLAS Experiment at the CERN LHC
Project leader: Dr. Andres Pacheco Pages
Research field: Fundamental Physics
Resource awarded: 50000 core hours on MareNostrum
Description

The ATLAS experiment at the CERN Large Hadron Collider is currently collecting a large sample of data. The rate is expected to increase significantly in the future. The analysis of the data relies on precise simulation of the proton-proton collisions, as well as a detailed simulation of the response of the large, high-granularity ATLAS detector. Large samples of simulated data are required for precise analyses capable of separating small signals from large backgrounds. In this project, we will install the mandatory ATLAS software packages and produce test samples relevant to analyses carried out by the IFAE ATLAS group. We will verify that the connection between BSC and IFAE’s Worldwide LHC Computing Grid infrastructure for ATLAS is operative for software download, database configuration and upload of the results.

Numerical models tests for the geodynamic evolution of the Iberian microplate using Underworld-II and I3VIS

Project Name: Numerical models tests for the geodynamic evolution of the Iberian microplate using Underworld-II and I3VIS
Project leader: Mr Angel Valverde
Research field: Earth System Sciences
Resource awarded: 50000 core hours on MareNostrum
Description

Numerical models are known to require high computing capacity, speed of calculation, and resource efficiency. Some of the characteristics that consume computer resources can be controlled by users: through the input file we can directly choose whether we want a 2D or a 3D model, the mesh resolution and number of particles per cell, the order of the elements, the physical complexity, etc. The same code behaves in many different ways depending on the computer on which it is run, and this needs to be tested in order to understand where it performs best and to gain time to focus on the physical/geological problem instead of on technical issues. To get an idea of the machine setup needed to run the code, we want to run some numerical geodynamic models. We will build simple 2D geodynamic models representing a lithosphere transect across the entire Iberian Peninsula from north to south, and a simple 3D model of Iberia 65 million years ago to study the general geodynamic evolution of the entire plate. We would also like to run a set of 3D models to study the geodynamic evolution and the tearing of the lithospheric slab in the Gibraltar Arc, in relation to the Messinian Salinity Crisis. We will execute each model in a different parallel configuration and, in addition, measure the wall time of a model of fixed resolution against different numbers of CPUs.

Dendritic cells receptor sorting to lipid domains

Project Name: Dendritic cells receptor sorting to lipid domains
Project leader: Prof. Rainer Böckmann
Research field: Biochemistry, Bioinformatics and Life sciences
Resource awarded: 100000 core hours on Piz Daint
Description

The activation of dendritic cell (DC) receptors is postulated to be regulated by their spatial organization on the plasma membrane. In particular, the signal transduction by the receptors was hypothesized to be regulated by their association with liquid-ordered phase-separated domains enriched in specific plasma membrane lipids and proteins. Exploiting this concept, we are investigating whether an important class of DC receptors, the so-called C-type Lectin Receptors (CLRs), are modulated in their diverse immune response by coupling to different lipid domains. To this aim, we propose to use extensive all-atom molecular dynamics simulations and umbrella sampling calculations to study the molecular interactions between three model CLR transmembrane domains, specifically DCIR, DEC205 and MMR peptides, and heterogeneous model membranes with coexisting liquid-ordered and liquid-disordered lipid domains. We expect to provide a molecular view on the lateral sorting of this important receptor class and unprecedented insight into the underlying driving forces mediating receptor compartmentalization.

Realistic Simulations of the Solar Chromosphere

Project Name: Realistic Simulations of the Solar Chromosphere
Project leader: Dr Damien Przybylski
Research field: Universe Sciences
Resource awarded: 50000 core hours on SuperMUC
Description

The solar chromosphere, lying between the photosphere and the transition to the corona, is one of the least understood parts of the Sun. This has to do with the difficulty of observing it: the chromosphere is dynamic on timescales of 10 seconds, the assumption of local thermal equilibrium does not apply, and the plasma is expected to be out of statistical equilibrium, all of which makes the interpretation of the observations extremely difficult. The mechanisms behind the heating of the chromosphere and corona are not yet well understood. In this project we aim to understand the physics of the various chromospheric processes and phenomena resulting from solar magnetic fields. We will perform radiation magnetohydrodynamic (MHD) simulations including as much of the relevant physics as possible. This includes a study of any reconnection-driven events that appear in the simulations, such as jets and impulsive heating. Additionally, we will be able to study the dynamics behind observed physical phenomena, such as waves, shocks and spicules. These detailed high-resolution simulations are required to compare results directly with observations made with the latest generation of solar telescopes, such as the 4m DKIST and the Sunrise balloon-borne observatory.

 

Type B: Code development and optimization by the applicant (without PRACE support) (2)

Scalability of PlasmaC – an AMR code for fluid plasmas in complex geometries

Project Name: Scalability of PlasmaC – an AMR code for fluid plasmas in complex geometries
Project leader: Dr Robert Marskar
Research field: Fundamental Physics
Resource awarded: 100000 core hours on SuperMUC
Description

Embedded-boundary finite-volume discretizations are used to numerically solve the minimal fluid plasma model, with extensions to insulating surfaces. Structured adaptive mesh refinement is used to resolve high-gradient space-charge layers, even in three spatial dimensions.

Investigation of the Scalability of the Weather Research and Forecasting Solar Model

Project Name: Investigation of the Scalability of the Weather Research and Forecasting Solar Model
Project leader: Dr Jacob Finkenrath
Research field: Mathematics and Computer Sciences
Resource awarded: 100000 core hours on Marconi – Broadwell, 200000 core hours on Marconi – KNL, 100000 core hours on MareNostrum
Description

This project will evaluate the scalability of the Weather Research and Forecasting model (WRF) on PRACE HPC resources, so as to prepare a broad range of climate and forecast applications for exascale. The project is part of Work Package (WP) 7 of the current Implementation Phase (5IP) of PRACE, and is being carried out with researchers participating in the Energy Oriented Centre of Excellence (EoCoE). Within the project, we will identify bottlenecks in WRF-Solar that may hinder its scalability, and therefore its readiness for exascale, on PRACE Tier-0 machines. We will use tools including Scalasca/Score-P and Darshan to measure the workload of the WRF-Solar functions, in particular the I/O to storage and the scalability over thousands of cores. Moreover, the different architectures available to us allow for comparisons that we will use to evaluate the readiness of WRF for exascale computing.

 

Type C: Code development with support from experts from PRACE (2)

HOVE2 Higher-Order finite-Volume unstructured code Enhancement for compressible turbulent flows

Project Name: HOVE2 Higher-Order finite-Volume unstructured code Enhancement for compressible turbulent flows
Project leader: Dr Panagiotis Tsoutsanis
Research field: Engineering
Resource awarded: 250000 core hours on Hazel Hen
Description

Unstructured grids are widely used in science and engineering for representing complicated geometries in an accurate and efficient manner. The arbitrariness of the grid itself poses a series of challenges in terms of memory footprint and access patterns, as well as in the development of numerical methods and computing algorithms, when aiming for high accuracy and computational efficiency. Previous development of the UCNS3D CFD code under a PRACE Type C project, associated with optimising the implementation of very high-order numerical schemes for unstructured meshes, resulted in an 8.5× speedup. This was achieved by restructuring some of the computationally intensive algorithms, employing linear algebra libraries and combining the state-of-the-art parallel frameworks MPI and OpenMP. These developments have been applied to LES simulations of canonical flows and RANS simulations of full aircraft geometries during take-off and landing. This project aims to enable extremely large-scale simulations by focusing on the mesh partitioning algorithms and the I/O of the UCNS3D CFD code, in order to perform ILES simulations on unstructured meshes on the scale of a billion cells with very high-order finite-volume methods. This will enable us to improve our understanding of the aerodynamic performance of complicated geometries and therefore enhance their efficiency.

HPC Performance improvements for OpenFOAM linear solvers

Project Name: HPC Performance improvements for OpenFOAM linear solvers
Project leader: Dr Mark Olesen
Research field: Engineering
Resource awarded: 100000 core hours on Marconi – Broadwell, 200000 core hours on Marconi – KNL
Description

This proposal supports the Strategic Research Agenda (SRA) proposed by the European Technology Platform for High-Performance Computing. The project partners comprise experts in algorithm development, a release authority, a supercomputing centre and a hardware manufacturer, who have already pursued SRA objectives in HPC for public releases of OpenFOAM. OpenFOAM is an open-source toolkit for Computational Fluid Dynamics (CFD), first released by OpenCFD Ltd (the OpenFOAM trade mark owner) in 2004, with quality-assured six-monthly releases via www.openfoam.com. OpenFOAM is foremost a platform for building applications and includes a large variety of standard solvers covering a wide range of physics, where parallelism is fundamental to their operation. Most applications solve their systems of equations using the built-in core linear solvers. Previous work [1-2] identified the scalability of the linear solvers as a known bottleneck, restricting parallel usage to the order of a few thousand cores. However, technological trends towards exascale HPC are moving to the use of millions of cores [3]. Fully enabling OpenFOAM for massively parallel clusters requires assessment of:

* the limits of the parallelism paradigm (Pstream library) [4];
* the sub-optimal sparse-matrix Lower-Diagonal-Upper (LDU) storage format, which does not enable any cache-blocking mechanism (SIMD, vectorisation) [1-4];
* the I/O data storage system [5].

Improvements already made in parallel communication [7] and I/O [5] compensate for some of the above; nevertheless, significant further improvements are required to exploit the best combinations of computer architecture and software algorithms. The Principal Investigator of this project is OpenCFD Ltd. The Tier-0 supercomputing centre CINECA provides the hardware and integrated systems on which to test the performance improvements detailed below. The HPC Application Department of Intel will actively contribute to the project.
The project comprises two objectives to benefit scalability:

* to define and create interfaces to external linear-solver packages such as PETSc and HYPRE, and to benchmark the performance gains; and
* to examine the effects of lower-level core changes within the matrix structures; it is anticipated that changing from the LDU to the Compressed-Storage-Row (CSR) structure will benefit solver performance.

The long-term aim of this activity is to work with the HPC community to foster a transition to petascale, making efficient use of the different HPC hardware.

References
[1] M. Culpo, Current Bottlenecks in the Scalability of OpenFOAM on Massively Parallel Clusters, PRACE White Paper, August 2012.
[2] HPC Enabling of OpenFOAM for CFD Applications, 6-8 April 2016, Casalecchio di Reno, Bologna, Italy.
[3] https://www.top500.org/lists/2018/06/
[4] I. Spisso, G. Amati, V. Ruggiero, C. Fiorina: Porting, Optimization and Bottlenecks of OpenFOAM in KNL Environment, Intel eXtreme Performance Users Group (IXPUG) Europe Meeting, CINECA, Casalecchio di Reno, 5-7 March 2018.
[5] K. Meredith, M. Olesen, N. Podhorszki: Integrating ADIOS into OpenFOAM for Disk I/O, 4th Annual OpenFOAM User Conference 2016, Cologne, Germany.
[6] https://www.openfoam.com/releases/openfoam-v1806/post-processing.php
[7] Pham Van Phuc, Shimizu Corporation, Fujitsu Limited, RIKEN: Large Scale Transient CFD Simulations for Building using OpenFOAM on a World's Top-class Supercomputer, 4th Annual OpenFOAM User Conference 2016, Cologne, Germany.
[8] https://github.com/OpenFOAM/OpenFOAM-Intel
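The motivation for the LDU-to-CSR change can be sketched generically; the following is not OpenFOAM's implementation, only a minimal illustration of the CSR format itself. CSR stores each matrix row's nonzeros contiguously, so a matrix-vector product becomes one contiguous pass per row, the access pattern that makes cache blocking and vectorisation possible.

```python
import numpy as np

# A small 4x5 sparse matrix in CSR form: nonzero values and their
# column indices stored row-contiguously, with row_ptr marking
# where each row's entries begin and end.
values  = np.array([10.0, -2.0, 3.0, 9.0, 7.0, 8.0, 7.0])
col_idx = np.array([0, 4, 0, 1, 1, 2, 4])
row_ptr = np.array([0, 2, 4, 6, 7])   # 4 rows

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for a CSR matrix: one contiguous slice per row,
    which is what enables cache-friendly, vectorisable kernels."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for row in range(n_rows):
        start, end = row_ptr[row], row_ptr[row + 1]
        y[row] = values[start:end] @ x[col_idx[start:end]]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = csr_matvec(values, col_idx, row_ptr, x)   # → [0., 21., 38., 35.]
```

By contrast, LDU storage splits the matrix into lower, diagonal and upper arrays, so a row's entries are not contiguous in memory; the contiguous inner slice above is the property the project expects to exploit.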



Type D: Optimisation work on a PRACE Tier-1 (5)

HPC-MetaShot: a multi-node implementation for metagenomics analysis

Project Name: HPC-MetaShot: a multi-node implementation for metagenomics analysis
Project leader: Prof. Giovanni Chillemi
Research field: Biochemistry, Bioinformatics and Life sciences
Resource awarded: 150000 core hours on Tier-1
Description

Metagenomics analysis deals with characterizing the microorganisms living in environmental samples. In particular, shotgun metagenomics, i.e. high-throughput sequencing of the total genetic material contained in the investigated samples, allows a deep and accurate characterization of whole microbiomes, including bacteria, viruses, protists and fungi. In recent years, metagenomic approaches have made it possible to investigate the microbiomes colonizing different types of habitat, from the ocean to acid mine drainage, with particular interest in the community living on the human body, widely referred to as the human microbiome. Investigation of the human microbiome has, for example, partially revealed the tight relationship between physiological host processes, such as digestion and the training of the adaptive immune system, and the microorganisms colonizing the gut. Moreover, we have also learned how in some cases these microorganisms become dangerous, and how the shape of microbial communities changes during chronic inflammatory diseases such as IBD. However, the analysis of such sequencing data is still extremely challenging in terms of both overall accuracy of taxonomic and functional profiling and computational efficiency. Shotgun metagenomics is, in fact, particularly useful for the identification of unculturable microorganisms, and a potential field of application is the investigation of infective pathologies. Particularly interesting is the investigation of the micro-environment in which infective agents act, and/or their interactions with other players, especially in the case of co-infections (bacteria and virus, for example). This approach does not require any prior knowledge of the target agents and makes it possible to analyze the whole collection of genomes present in a sample, overcoming the principal limitation of classical tools for pathogen detection, which are target-oriented.
MetaShot is a bioinformatic workflow designed to assess the total microbiome composition from host-associated shotgun sequence data produced by Illumina platforms, and has shown optimal overall accuracy (DOI: 10.1093/bioinformatics/btx036). It is designed to analyse both genomic and transcriptomic data. In MetaShot, all the steps required for the analysis of NGS data are embedded in a single Python pipeline that manages the execution of external tools and the parallelization of the processes. Currently, the tool's design restricts its application to the analysis of a few samples at a time, principally due to limits in the implemented parallelization scheme. Our goals are to develop an HPC version of MetaShot capable of carrying out the analysis of tens to hundreds of samples in a few hours, and to extend the software to other kinds of Next Generation Sequencing (NGS) reads, such as those produced by the Ion Torrent sequencer. The new HPC-MetaShot software will be applied to around a hundred NGS samples of clinical interest produced by the National Institute for Infectious Diseases “Lazzaro Spallanzani” (INMI) in Rome, Italy.


Extending the scalability and parallelization of SEDITRANS code

Project Name: Extending the scalability and parallelization of SEDITRANS code
Project leader: Dr. Guillermo Oyarzun
Research field: Engineering
Resource awarded: 150000 core hours on Tier-1
Description



Automation of high fidelity CFD analysis for aircraft design and optimization

Project Name: Automation of high fidelity CFD analysis for aircraft design and optimization
Project leader: Dr Mengmeng Zhang
Research field: Engineering
Resource awarded: 150000 core hours on Tier-1
Description



Scalable Delft3D FM for efficient modelling of shallow water and transport processes

Project Name: Scalable Delft3D FM for efficient modelling of shallow water and transport processes
Project leader: Dr Menno Genseberger
Research field: Earth System Sciences
Resource awarded: 150000 core hours on Tier-1
Description



Radiative Transfer Forward Modelling of Solar Observations with ALMA

Project Name: Radiative Transfer Forward Modelling of Solar Observations with ALMA
Project leader: Dr. Sven Wedemeyer
Research field: Universe Sciences
Resource awarded: 150000 core hours on Tier-1
Description

