PRACEdays14 Presentations

Plenary Session

Tuesday 20 May


In silico exploration of the most extreme scenarios in astrophysics and in the laboratory:
from gamma ray bursters to ultra intense lasers

Luís O. Silva is Professor of Physics at Instituto Superior Técnico, Lisbon, Portugal, where
he leads the Group for Lasers and Plasmas. He obtained his degrees (MSc 1992, PhD
1997 and Habilitation 2005) from IST. He was a post-doctoral researcher at the University
of California, Los Angeles, from 1997 to 2001. His scientific contributions are focused on the interaction of intense
beams of particles and lasers with plasmas, from a fundamental point of view and toward their application
as secondary sources for biology and medicine.

Luís O. Silva has authored more than 150 papers in refereed journals and three patents, has given invited
talks at the major plasma physics conferences, and has served on the program and selection committees
of conferences and prizes in Europe, the US and Japan. He is a member of the International Scientific Advisory
Board of ELI – Beamlines, of the Scientific Steering Committee of PRACE, and of the National Council for
Science and Technology (reporting to the Prime Minister of Portugal). He has supervised 6 PhD students
and 7 post-doctoral fellows whose work has led to several national and international prizes. He has been PI of
more than 20 projects funded by the Portuguese Science Foundation, ESA and the EU, by EU supercomputing
projects, by NVIDIA, and by the Rutherford Appleton Laboratory. He was awarded an Advanced Grant from the
European Research Council in 2010, being the youngest recipient in “Fundamental Constituents of Matter” and one of
the youngest scientists overall to receive an Advanced Grant.

He was awarded the 2011 Scientific Prize of the Technical University of Lisbon, the IBM Scientific Prize 2003,
the 2001 Abdus Salam ICTP Medal for Excellence in Nonlinear Plasma Physics by a Young Researcher, and
the Gulbenkian Prize for Young Researchers in 1994. He was elected Fellow of the American Physical Society
and to the Global Young Academy in 2009.

PDF - 40.4 Mb

Download presentation

 

I will describe how massively parallel simulations are advancing our understanding of extreme scenarios
where ultra-intense flows of particles and light, in the laboratory and in astrophysics, combined with nonlinear
relativistic effects, define the complex evolution of the system. After presenting the algorithms describing the
collective dynamics of charged particles in intense fields, which allow for the use of the largest supercomputers
in the world, I will cover recent progress in relativistic shocks and cosmic ray acceleration in extreme
astrophysical events, advanced plasma-based accelerators for intense x-ray sources, and novel ion
acceleration mechanisms for cancer therapy and fusion energy. I will show how petaflop-scale simulations,
combined with unique astronomical observatories and the emergence of multi-petawatt laser systems, are
opening exciting new opportunities for innovation and new avenues for scientific discovery.

ETP4HPC – European Technology Platform for HPC

Jean-François Lavignon joined Bull in 1998, where he is in charge of collaborative R&D.

At Bull, he has been involved in research strategy and developing emerging businesses. Before joining Bull he served in several positions related to IT research. He has experience in
parallel computing, computer architecture and signal and image processing. Jean-François
Lavignon graduated from Ecole Polytechnique in 1984 and ENSTA (Ecole Nationale des
Techniques Avancées) in 1986. He then spent one year at Stanford University as an invited researcher.
He is now the Chairman of ETP4HPC, the European Technology Platform for HPC.

 

ETP4HPC, the European Technology Platform (ETP) for High-Performance Computing (HPC) (www.etp4hpc.eu), is an organisation led by European HPC technology providers with the objective of building a competitive
HPC value chain in Europe. ETP4HPC also includes HPC research centres and end-users. It has issued
a Strategic Research Agenda (SRA) which outlines the research priorities of European HPC on its way to
achieving Exascale capabilities within the Horizon 2020 Programme. ETP4HPC is also one of the partners
of the contractual Public-Private Partnership (cPPP) for HPC (together with the European Commission), the aim of
which is to build a competitive HPC eco-system in Europe based on the provision of technologies, infrastructure
and applications.

ETP4HPC intends to play a key role in the coordination of the European HPC eco-system. Our intention is
to form a project team that will respond to the Commission’s FETHPC-2-2014 (Part A) Call on that topic.

The objective of this parallel session is to:

- Outline the assumptions and suggestions of the SRA

- Explain the concept of the cPPP and how it will affect the European HPC arena

- Discuss the preparations for the Coordination of the HPC strategy call as above.

PDF - 1.3 Mb

Download presentation

PRACE and HPC Centers of Excellence working in synergy

Sergi Girona is Chair of the Board of Directors of PRACE, as well as Director of the Operations
Department of the Barcelona Supercomputing Center (BSC). He has been a member of the PRACE Board of
Directors since its creation in 2010, and is currently both its Chair and Managing Director.

He holds a PhD in Computer Science from the Technical University of Catalunya. In 2001,
EASi Engineering was founded and Sergi became the Director of the company for Spain,
and the R&D Director for the German headquarters.

In 2004, he joined BSC for the installation of MareNostrum in Barcelona. MareNostrum was the largest
supercomputer in Europe at that time, and it maintained this position for 3 years. Sergi was responsible for
the site preparation and the coordination with IBM for the system installation. Currently, he manages the
Operations group, with responsibility for User Support and System Administration of the different HPC
systems at BSC.

Leonardo Flores Añover, European Commission


 

Under the Work Programme 2014 – 2015 of the new Horizon 2020 EU Research and Innovation programme,
the European Commission launched Call EINFRA-5-2015, entitled “Centers of Excellence for Computing
Applications”.

This Call invites the establishment of a limited number of Centers of Excellence (CoE) to ensure EU competitiveness in the application of HPC for addressing scientific, industrial or societal challenges.
PRACE will co-operate with the HPC CoE, finding synergies in the efforts of both parties, including the identification
of suitable applications for co-design initiatives relevant to the development of HPC technologies.

This session will present and explain Call EINFRA-5-2015 and open the floor to participants to identify and
bring forward the services and possible synergies required.

PDF - 703.2 kb

Download presentation – S. Girona

PDF - 441.2 kb

Download presentation – L. Flores Añover


Wednesday 21 May – 09:00 to 12:30


Opening and Welcome

She studied Economics with a major in Business Administration at the University of Zaragoza,
graduating in 1982. She has been a member of the Senior State Corps of Sales Engineers and
Economists since 1989 and of the State Corps of Business Graduates since 1984.
For years she has been a member of the Board of Directors of several companies and institutions, such as the Centre for the Development of Industrial Technology (CDTI), the Institute for Diversification and Energy Saving
(IDEA), the Spanish Counter-Guarantee Company (CERSA) and the Great Telescope of the Canary Islands
(GRANTECAN). At the same time, she has represented the Ministry of Science and Technology on the
Board of Directors of the Spanish Agency for Consumer Affairs, Food Safety and Nutrition, and has served as a member
of the Advisory Board of the Spanish Medicines Agency.

Moreover, she has spoken at numerous conferences, courses and seminars, and she is the author of
various articles.

She graduated from ENSIMAG, Ecole Nationale Supérieure d’Informatique et de Mathématiques
Appliquées of Grenoble in 1983.

In 1983, she joined the French Institute of Petroleum (IFP) located at Rueil Malmaison.

In 1996, she was appointed as Deputy Manager of the Exploration Production Business Unit.

In 2001, she joined Tech’Advantage, a service company and subsidiary of IFP, as its CEO.

Since 2007, she has been CEO of GENCI (Grand Equipement National de Calcul Intensif),
in charge of the coordination of the French national academic high-performance computing facilities.

In June 2012, Catherine Rivière was appointed Council Chair of PRACE Aisbl (Partnership for Advanced
Computing in Europe), which links 25 countries and in which France is represented by GENCI.

PDF - 1.7 Mb

Download presentation – C. Rivière

PDF - 53.7 kb

Download presentation – M. L. Poncela

Building an Ecosystem to Accelerate Data-Driven Innovation

Dr. Francine Berman is the Edward P. Hamilton Distinguished Professor in Computer
Science at Rensselaer Polytechnic Institute. She is a Fellow of the Association for Computing
Machinery (ACM) and a Fellow of the IEEE. In 2009, Dr. Berman was the inaugural recipient of
the ACM/IEEE-CS Ken Kennedy Award for “influential leadership in the design, development,
and deployment of national-scale cyberinfrastructure.”

Prior to joining Rensselaer, Dr. Berman was the High Performance Computing Endowed Chair
in the Jacobs School of Engineering at UC San Diego. From 2001 to 2009, Dr. Berman served
as Director of the San Diego Supercomputer Center (SDSC) where she led a staff of 250+ interdisciplinary
scientists, engineers, and technologists. From 2009 to 2012, she served as Vice President for Research at
Rensselaer Polytechnic Institute, stepping down in 2012 to lead U.S. participation in the Research Data Alliance
(RDA), an emerging international organization created to accelerate global data sharing and exchange.
Dr. Berman is co-Chair of the inaugural leadership Council of the RDA and Chair of RDA/United States.

Dr. Berman currently serves as co-Chair of the National Academies Board on Research Data and Information,
as Vice-Chair of the Anita Borg Institute Board of Trustees, and is a member of the National Science
Foundation CISE Advisory Board. From 2007-2010, she served as co-Chair of the US-UK Blue Ribbon Task
Force for Sustainable Digital Preservation and Access. For her accomplishments, leadership, and vision, Dr.
Berman was recognized by the Library of Congress as a “Digital Preservation Pioneer”, as one of the top
women in technology by BusinessWeek and Newsweek, and as one of the top technologists by IEEE Spectrum.

 

Digital data has transformed the world as we know it, creating a paradigm shift from information-poor to information-
rich that impacts nearly every area of modern life. Nowhere is this more apparent than in the research
community. Today, digital data from high performance computers, scientific instruments, sensors, audio and
video, social network communications and many other sources are driving our ability to discover, innovate,
and understand the world around us.

In order to best utilize this data, an ecosystem of technical, social and human infrastructure is needed to
support digital research data now and in the future. In this talk, we discuss the opportunities and challenges
for the stewardship and support of the digital data needed to drive research and innovation in today’s world.

PDF - 3.7 Mb

Download presentation

Drive safe, green and smart: HPC-Applications for sustainable mobility.

Alexander F. Walser is managing director at the Automotive Simulation Center Stuttgart e.V.
– asc(s. He received a diploma in Civil Engineering, in the subject area of modelling and simulation
methods, from the University of Stuttgart in 2011. After completing his studies he worked on
research projects in the field of structural mechanics, crashworthiness, and shape and topology
optimization. Since 2013 he has been responsible for acquiring and managing HPC projects and
new research fields at the asc(s.

 

The automotive industry is facing the challenge of sustainable mobility. This is a demanding task, characterized
by the need to fulfil globally increasing legal safety requirements, improve fuel economy, reduce CO2, noise
emissions and pollutants, and satisfy growing consumer demands. In recent years numerical simulation has made
its way into the design phase of automotive development and production as a useful tool for faster problem
analysis and for reducing cost and product development time. High Performance Computing (HPC) is significant
for competitiveness and innovation in the automotive industry. HPC is used in areas where high-performance
computing power is needed to solve computationally intensive problems, e.g. computational fluid dynamics
(external aerodynamics, coolant flow or in-cylinder combustion) and dynamic finite element analysis
(crashworthiness and occupant safety simulation). New aspects such as cloud computing or big and smart
data will increase the research and innovation challenges of HPC for the automotive industry. To optimize
process chains, close methodological gaps and increase forecast quality, cooperation between science and
industry through sustainable partnerships in pre-competitive collaborative industrial research is needed.
Pioneering such cooperation between science and industry, the Automotive Simulation Center Stuttgart – asc(s –
was founded in 2008. The asc(s business model is based on the Competence Network principle. With its 23
members (OEMs, ISVs, IHVs, research facilities and natural members) the asc(s is a transfer platform setting
trends for the interaction of science and industry in Europe. The asc(s offers an environment to develop
new software applications, scalable algorithms and tools that make HPC systems easy to use and make
researchers highly innovative and productive. Linking specific practical projects with basic numerical research
ensures the rapid economic availability of high-quality research results and provides new impetus
for product development.

PDF - 3.7 Mb

Download presentation

European HPC strategy

Dr. Augusto Burgueño Arjona is currently Head of Unit “eInfrastructure” at the European
Commission’s Directorate General for Communications Networks, Content and Technology,
and General Manager and coach at Coach Mundi ASBL.
Previously he served as Head of Unit “Finance” in the same Directorate General and as Head
of the inter-Directorate General Task Force IT Planning Office at the European Commission.

 

With its communication on HPC of February 2012, the Commission committed to an ambitious action plan for
European leadership in HPC. In May 2013, the Council invited the Commission to develop and elaborate its
plans for HPC and to explore all possible support for academic and industrial research and innovation under
Horizon 2020. Since then, the first calls of Horizon 2020 have been launched and the HPC Public-Private
Partnership with ETP4HPC has been formally launched. There is however much more work ahead of us. In
my presentation I will delineate the expected contributions of all HPC stakeholders to make the European Union’s
vision on HPC a reality.

PDF - 622.6 kb

Download presentation


Computer Science

Wednesday 21 May – 13:30 to 15:30


Large Scale Graph Analytics Pipeline

Cristiano Malossi received his B.Sc. in Aerospace Engineering and his M.Sc. in Aeronautical
Engineering from the Politecnico di Milano (Italy) in 2004 and 2007, respectively.

After working one year on computational geology problems in collaboration with ENI (the
main Italian oil and energy company), he moved to Switzerland where in 2012 he got his
Ph.D. in Applied Mathematics from the Swiss Federal Institute of Technology in Lausanne
(EPFL), with a thesis focused on the development of algorithms and mathematical methods
for the numerical simulation of cardiovascular problems.

In July 2013 Cristiano joined IBM Research – Zurich as a Postdoctoral Researcher in the
Computational Sciences group of the Mathematical and Computational Sciences department.
His main research interests include: High Performance Computing, Energy-Aware Algorithms and
Architectures, Numerical Analysis, Computational Fluid Dynamics, Aircraft Design, Computational Geology,
and Cardiovascular Simulations.

Yves Ineichen, IBM Research – Zurich, Rüschlikon, Switzerland

Costas Bekas, IBM Research – Zurich, Rüschlikon, Switzerland

Alessandro Curioni, IBM Research – Zurich, Rüschlikon, Switzerland

 

In recent years, graph analytics has become one of the most important and ubiquitous tools for a wide variety
of research areas and applications. Indeed, modern applications such as ad hoc wireless telecommunication
networks, or social networks, have dramatically increased the number of nodes of the involved graphs, which
now routinely ranges in the tens of millions, reaching into the billions in notable cases.

We developed novel near-linear (O(N)) methods for sparse graphs with N nodes for estimating:

- the most important nodes in a graph, the subgraph centralities, and

- spectrograms, that is, the density of eigenvalues of the adjacency matrix of the graph in a certain unit of space.

The method to compute subgraph centralities employs stochastic estimation and Krylov subspace techniques
to drastically reduce the complexity which, using standard methods, is typically O(N³). This technique
allows centralities to be approximated quickly, accurately and in a highly scalable way, and thereby opens the way for
centrality-based big data graph analytics that would have been nearly impossible with standard techniques. It
can be employed to identify possible bottlenecks, for example in the European street network with 51 million
nodes, in only a couple of minutes on just 16 threads.
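As a hedged sketch of this kind of estimator (an illustration under assumptions, not the authors' implementation): the subgraph centrality of node i is the i-th diagonal entry of exp(A), and that diagonal can be estimated with random probe vectors combined with a Krylov-based evaluation of exp(A) applied to a vector, so each sample costs only sparse matrix-vector products. All function and parameter names below are illustrative.

# Illustrative sketch only: estimate subgraph centralities, i.e. diag(exp(A)),
# by combining Hutchinson-style stochastic probing with Krylov-based evaluation
# of exp(A) times a vector (scipy's expm_multiply). Not the authors' code.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

def estimate_subgraph_centrality(A, num_samples=64, seed=0):
    """Estimate diag(exp(A)) for a symmetric sparse adjacency matrix A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    diag_estimate = np.zeros(n)
    for _ in range(num_samples):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        w = expm_multiply(A, v)               # Krylov-type evaluation of exp(A) @ v
        diag_estimate += v * w                # E[v_i * (exp(A) v)_i] = exp(A)_ii
    return diag_estimate / num_samples        # accuracy improves like 1/sqrt(num_samples)

if __name__ == "__main__":
    # Small random sparse graph standing in for a large real-world network
    A = sp.random(2000, 2000, density=1e-3, format="csr", random_state=1)
    A = (A + A.T) * 0.5                       # symmetrise
    centrality = estimate_subgraph_centrality(A)
    print("Five most central nodes:", np.argsort(centrality)[-5:][::-1])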

Spectrograms are powerful in capturing the essential structure of graphs and provide a natural and
human-readable (low-dimensional) representation for comparison. What about comparing graphs that are almost
similar? Of course, this is a massive dimensionality reduction; at the same time, however, the shape of the
spectrogram yields a tremendous wealth of information.
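For concreteness (illustration only), the spectrogram of a small graph can be computed directly as a histogram of the adjacency-matrix eigenvalues; this dense approach costs O(N³) and is exactly what the near-linear stochastic method described above avoids at large N.

# Illustration only: the "spectrogram" of a small graph computed the brute-force
# way, as a histogram of adjacency-matrix eigenvalues. Feasible only for small N.
import numpy as np
import scipy.sparse as sp

A = sp.random(500, 500, density=0.02, format="csr", random_state=0)
A = ((A + A.T) > 0).astype(float)              # symmetric 0/1 adjacency matrix
eigenvalues = np.linalg.eigvalsh(A.toarray())  # dense O(N^3) eigensolve
density, edges = np.histogram(eigenvalues, bins=8, density=True)
for left, right, d in zip(edges[:-1], edges[1:], density):
    print(f"eigenvalues in [{left:6.2f}, {right:6.2f}): density {d:.3f}")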

In order to tackle arising big data challenges an efficient utilization of available HPC resources is key. Both
developed methods exhibit an efficient parallelization on multiple hierarchical levels. For example, computing
the spectrogram can be parallelized on three levels: bins and matrix-vector products can be computed
independently, and each matrix-vector product can itself be computed in parallel. The combination of a highly
scalable implementation and algorithmic improvements enable us to tackle big data analytics problems that
are nearly impossible to solve with standard techniques.

A broad spectrum of applications in industrial and societal challenges can profit from fast graph analytics, for
example routing and explorative visualization. We are continuously extending the coverage of our
massively parallel graph analytics software stack to a variety of application domains in science and industry.

PDF - 962.4 kb

Download presentation

Big models simulations and optimization through HPC. An effective way of improving
performances for cloud targeted services.

Gino Perna is HPC and IT Manager at Enginsoft. He obtained his MSc degree in Civil
Engineering at Padua University in 1986. In addition to his duties at Enginsoft he teaches
at the University of Trento, mostly on computer programming. His expertise spans mechanical
and CFD simulations, with the last few years spent in the field of HPC and CAE.

Alberto Bassanese, Enginsoft, Italy

Stefano Odorizzi, Enginsoft, Italy

Carlo Janna, M3E, Italy

 

Woven fabric composites have been the object of several research efforts investigating their mechanical
properties since their introduction in aeronautic and industrial applications more than twenty
years ago: their good conformability makes them the material of choice for complex geometries.

Fatigue problems are very complicated because fibers are bundled in yarns that are interlaced
to form a specific pattern, so the complex geometry of the fabric architecture strongly affects
which one of the constituents fails first and the way a local failure propagates up to cause the
final failure of the entire lamina. By dealing with the problem just in terms of mean (macro)
stresses at laminate level, as if the material were homogeneous and anisotropic, it is not possible to capture the stress concentrations and the intra-laminar shear stresses within each component.

Multi-scale analysis approaches are therefore the obvious way to link the macroscopic and microscopic
structural behaviours of composite materials. However, numerous parameters control
the final composite mechanical properties: typically the fiber architecture and
volume fraction, and the mechanical properties of the fiber, the matrix and the fiber-matrix interface.

FEA and continuously improving hardware performance, nowadays based on multi-core architectures, have been offering a convenient solution to the modelling problem by accounting for the inherently multi-scale structural nature of these materials, to the point that virtual prototyping can nowadays almost replace some of the physical tests
required for the mechanical characterization of different material systems.

To solve the problem and perform optimization of the whole structure, a great number of computational
cores is required, but one of the main obstacles is the performance of mechanical analysis codes, which
must be improved for them to perform at the same level as CFD codes.

New conjugate gradient techniques are very promising in these scenarios for cutting down computational time
considerably, thus leaving room for more analyses and optimization studies to maximize performance and design better and safer products.
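As a minimal sketch of the kind of iterative solver alluded to here (the actual solver and preconditioner used in the project are not described in the abstract), the snippet below runs a preconditioned conjugate gradient solve of a sparse symmetric positive-definite system K u = f, with a 1D Laplacian standing in for a finite element stiffness matrix:

# Minimal sketch, assuming a symmetric positive-definite stiffness system K u = f.
# The 1D Laplacian below is only a stand-in for a real finite element stiffness matrix.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 2000
K = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
f = np.ones(n)

# Jacobi (diagonal) preconditioner: the simplest possible choice
inv_diag = 1.0 / K.diagonal()
M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)

u, info = cg(K, f, M=M)
print("converged" if info == 0 else f"stopped early (info={info})",
      "| residual norm:", np.linalg.norm(K @ u - f))

In practice the choice of preconditioner dominates performance; the abstract does not specify which technique the authors use.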

PDF - 3.2 Mb

Download presentation

Mont-Blanc – Engaging Industry in low-energy HPC technology design process

Alex Ramirez is a Tenured Associate Professor at Universitat Politecnica de Catalunya,
and Computer Architecture Research Manager at Barcelona Supercomputing Center, where
he leads the Mont-Blanc EU project, targeting Exascale HPC systems and developing
the first HPC cluster prototypes built on low-power ARM processors.

He has graduated 10 PhD students, and published over 150 papers in international conferences and journals (H index 24) on compiler optimizations, processor microarchitecture,
multicore architecture, multithreading processors, parallelization strategies, and energy-efficient cluster computing.

He has participated as Principal Investigator in 11 European projects, and 7 industrially funded projects. His
research has been featured in The Wall Street Journal, Wired, Financial Times, Scientific Computing, Scientific
Computing World, HPC Wire, Slashdot, ComputerWorld, and others. In 2010 he received the first Young
Researcher Award of the Spanish Academy of Engineering.

Marcin Ostasz graduated from the Technical University of Budapest at the Faculty of Electronics
with an MSc degree and he also holds a Master of Business Administration (MBA)
degree awarded by Oxford Brookes University in the UK. Marcin has over 13 years of combined
experience gained at various technical, project management, operations management,
business analysis and process improvement positions with organisations such as
Nokia, American Power Conversion, Dell, GE and Barclays Bank. Marcin is currently working
at Barcelona Supercomputing Centre as a business analyst. His tasks include supporting
projects and organisations such as PRACE, the European Technology Platform, EUDAT and
Mont-Blanc. He specialises in managing industrial relations, road-mapping, workshop management
and business analysis.

 

The aim of the Mont-Blanc project has been to design a new type of computer architecture capable of setting
future global High-Performance Computing (HPC) standards, built from energy efficient solutions used in
embedded and mobile devices. This will help address the Grand Challenge of energy consumption and environmental
protection, as well as potentially help Europe achieve leadership in world-class HPC technologies
and satisfy the European industry’s need for low-power HPC.

The project has been in operation since Oct 2011. The European Commission has recently granted an additional
8 million Euro to extend the project activities until 2016. This will enable further development of the
OmpSs parallel programming model to automatically exploit multiple cluster nodes, transparent application
checkpointing for fault tolerance, support for ARMv8 64-bit processors, and the initial design of the Mont-
Blanc Exascale architecture. Several new partners have joined this second phase of Mont-Blanc, including
Allinea, STMicroelectronics, INRIA, University of Bristol, and University of Stuttgart.

Mont-Blanc is looking for members of the European HPC industrial user eco-system to join our Industrial
End-User Group (IUG). As the project produces novel HPC technologies and solutions (i.e. low-energy HPC),
it will ask the members of the IUG to validate these products and provide feedback to the project in order
to align its objectives and deliverables and to address issues such as end-user compatibility. An Industrial End-User
Group coordinator has been appointed to coordinate this process. The IUG will consist of representatives of
various industries, including, but not limited to, Automotive, Energy, Oil/Gas, Aerospace, Pharma, and Financial.

The objective of this session is to:

- Familiarise the audience with the IUG: membership rules and obligations,

- Explain the processes of testing the Mont-Blanc technology,

- Share the latest project results,

- Encourage other industrial organisations to join or work closely with the IUG, and collect feedback
and suggestions in relation to the IUG.

The session will have two parts:

- Technical – explaining the project, its achievements and the latest results available as above,

- Moderated discussion on the current and future work of the IUG.

PDF - 883.2 kb

Download presentation


Life Sciences

Wednesday 21 May – 16:00 to 17:20


Numerical Simulation of sniff in the respiratory system

Hadrien Calmet has been a researcher at the Barcelona Supercomputing Center (CASE department),
Barcelona, Spain, since October 2006. He works on:

- Pre-processing (meshing with Ansys ICEM): structured, unstructured and hybrid meshes for aerodynamic,
hydrodynamic, hemodynamic and engineering problems

- Post-processing (visualization) with Paraview and In-visu

- Film editing with Final Cut Pro for animations and documentaries about science.

His research topics are biomechanics and vortex extraction, as well as the implementation of an in-house
CFD code with the MPI and HDF5 libraries.

 

Direct numerical simulation (DNS) of the human nose and throat is a great challenge. As far as the author knows,
this is the first time that DNS has been carried out for the whole respiratory system. This massive simulation is very useful
for obtaining a high level of detail throughout the human nose and throat. The flow structure, the turbulence or the power
spectrum can be post-processed anywhere along the human airway. It also guarantees that the inflow
along the airway is realistic, so simplified boundary conditions are not necessary.

Here, a subject-specific model of the domain extending from the face to the third branch generation in the
lung is used to carry out the simulation. The model is extracted from Computed Tomography (CT) data.
The inlet boundary condition is a time profile of the flow rate during a sniff (peaking at 30 l/min),
modelled from a statistical analysis of a few patients.

Two unstructured meshes with finely resolved boundary layers are used, with 44 million and
350 million elements respectively. The second is obtained from the first using a parallel uniform
mesh multiplication algorithm, resulting in a finer mesh. The second mesh is used for the detailed turbulence analysis and
to ensure that the resolution of the first is sufficient. Because its data analysis is lighter, the first mesh is generally used for the
description of the flow.
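The abstract does not describe the multiplication algorithm itself; as a rough, hedged illustration of what uniform mesh multiplication does, the sketch below subdivides a 2D triangle mesh through edge midpoints (a 3D tetrahedral pass would split each element into eight, consistent with the growth from 44 to roughly 350 million elements). The function name and data layout are hypothetical.

# Hedged 2D analogue of "uniform mesh multiplication": each triangle is split into
# four children via its edge midpoints, so one pass multiplies the element count by 4
# (a 3D tetrahedral pass multiplies it by 8). Serial and illustrative only.
import numpy as np

def refine_uniform(points, triangles):
    """points: (n, 2) float array; triangles: (m, 3) int array of vertex indices."""
    points = [tuple(p) for p in points]
    midpoint_index = {}                       # edge (a, b) -> index of its midpoint

    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in midpoint_index:
            pa, pb = np.asarray(points[a]), np.asarray(points[b])
            points.append(tuple((pa + pb) / 2.0))
            midpoint_index[key] = len(points) - 1
        return midpoint_index[key]

    children = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        children += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(points), np.array(children)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tris = np.array([[0, 1, 2]])
for _ in range(2):                            # two passes: 1 -> 4 -> 16 elements
    pts, tris = refine_uniform(pts, tris)
print(len(tris), "elements,", len(pts), "vertices")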

The complexity of the flow forces us to analyse each part of the large airways separately, aiming to explain the
main characteristics and features of each region. The time scale is different in the nose and in the throat,
and so is the physics. In addition, a large number of turbulence statistics are computed, and the main
features of the flow in each region are characterized with power spectra at a few set points, compared between the two different meshes.

PDF - 6.2 Mb

Download presentation

Large scale DFT simulation of a mesoporous silica based drug delivery system

Massimo Delle Piane has a Master Degree in Industrial Biotechnology and is now a PhD
student at the Department of Chemistry, University of Torino, Italy, under the supervision of
Prof. Piero Ugliengo. His thesis is devoted to quantum mechanical modeling of the interaction
between biomolecules and oxide surfaces, with particular interest in the study of silica
based materials for drug delivery purposes. He has been directly involved in two PRACE
projects, for a total of 60 million core hours. He also collaborates with the developers of the
CRYSTAL simulation code, developed in the same department by the group headed by Prof.
Roberto Dovesi.

Marta Corno, University of Torino, Department of Chemistry and NIS (Nanostructured Interfaces and Surfaces)
Centre, Torino, Italy

Alfonso Pedone, University of Modena and Reggio Emilia, Department of Chemistry, Modena, Italy

Piero Ugliengo, University of Torino, Department of Chemistry and NIS (Nanostructured Interfaces and Surfaces)
Centre, Torino, Italy

 

Mesoporous materials are characterized by an ordered pore network with high homogeneity in size and very
high pore volume and surface area. Among silica-based mesoporous materials, MCM-41 is one of the most
studied since it was proposed as a drug delivery system. Notwithstanding the relevance of this topic, the
atomistic details of the specific interactions between the surfaces of these materials and drugs, and the
energetics of adsorption, are almost unknown.

We resort to a computational ab-initio approach, based on periodic Density Functional Theory (DFT), to simulate
the features of the MCM-41 mesoporous silica material with respect to adsorption of ibuprofen, starting
from our previous models of a silica-drug system. We sampled the potential energy surface of the drug-silica
system by docking the drug on different spots on the pore walls of a realistic MCM model. The drug loading
was then gradually increased resulting in an almost complete surface coverage. Furthermore, we performed
ab-initio molecular dynamics simulations to check the stability of the interaction and to investigate the drug
mobility.

Through our simulations we demonstrated that ibuprofen adsorption seems to follow a quasi-Langmuirian
model. In particular, we revealed that dispersion (vdW) interactions play a crucial role in dictating the features
of this drug/silica system. Finally, simulations of IR and NMR spectra provided useful information to interpret
ambiguous experimental data.

Simulations of this size (up to almost 900-1000 atoms), at this accurate (and onerous) level of theory, were
possible only thanks to the computational resources made available by the PRACE initiative. We have demonstrated
that the evolution of HPC architectures and the continuous advancement in the development of more
efficient computational chemistry codes have been able to take the Density Functional Theory approach out
of the realm of “small” chemical systems, directly into a field that just a few years ago was the exclusive domain of the
much less computationally demanding Molecular Mechanics methods. This opens the path to the accurate
ab-initio simulation of complex chemical problems (in materials science and beyond) without many of the simplifications that were necessary in the recent past.

PDF - 8.4 Mb

Download presentation


Chemistry / Materials Science

Wednesday 21 May – 13:30 to 15:30


Ab initio modelling of the adsorption in giant Metal-Organic Frameworks: from small molecules
to drugs

Bartolomeo Civalleri graduated in chemistry (1995) and received his Ph.D. in chemistry
(1999) from the University of Torino. Since 2002, he joined the Theoretical Chemistry group
at the University of Torino as a faculty researcher. His current research is focused on ab-initio
modelling in solid state chemistry with particular interest in metal-organic frameworks,
hydrogen storage materials and molecular crystals. He is also involved in the development
of the CRYSTAL code.

M. Ferrabone, Department of Chemistry, University of Torino, Torino, Italy

R. Orlando, Department of Chemistry, University of Torino, Torino, Italy

 

Metal-Organic Frameworks (MOFs) are a new class of materials that are expected to have a huge impact on
the development of next-generation technologies. They consist of inorganic nodes connected through organic
linkers to form a porous three-dimensional framework. The combination of different nodes and linkers
makes MOFs very versatile materials with promising applications in many fields, including: gas adsorption,
catalysis, photo-catalysis, drug delivery, sensing and nonlinear optics.

We will show results on the ab-initio modeling of the adsorptive capacity of the so-called giant MOFs. They
possess pores with a very large size and, in turn, a huge surface area. Among giant MOFs, the most representative
one is probably MIL-100. It ideally crystallizes in a non-primitive cubic lattice with 2788 atoms in
the primitive cell. MIL-100 is characterized by the presence of a large number of coordinatively unsaturated
metal atoms exposed at the inner surface of the pores that are crucial in determining its adsorption capacity.
In particular, we are investigating MIL-100 for its ability to capture carbon dioxide, which is one of the hottest
topics in MOF research, and for the adsorption of large molecules such as drugs, for drug delivery purposes. The
project is ongoing and the available results will be shown.

Giant MOFs, with thousands of atoms in the unit cell, represent a tremendous challenge for current
ab-initio calculations. The use of Tier-0 computer resources provided by PRACE is essential to tackle
this challenging problem. All calculations have been carried out with the B3LYP-D method using
the massively parallel (MPP) version of the ab-initio code CRYSTAL (http://www.crystal.unito.it/).

PDF - 5.2 Mb

Download presentation

MPEG - 2.8 Mb

Download video

Ab Initio Quantum Chemistry on Graphics Processing Units: Rethinking Algorithms for
Massively Parallel Architectures

Jörg Kussmann received a PhD in Theoretical Chemistry from the University of Tübingen.
After pursuing post-doctoral research at the Pennsylvania State University, he became a
research scientist in the Theoretical Chemistry Group at the University of Munich (LMU).

His main focus is the extension of the applicability of quantum chemical methods to larger
systems or longer time-scales by developing low- or ideally linear-scaling ab initio methods
and by utilizing modern computing architectures, especially graphics processing units (GPUs).

Simon Maurer, University of Munich (LMU), Germany

Christian Ochsenfeld, University of Munich (LMU), Germany

 

Conventional ab initio calculations are limited in their application to molecular systems containing only a
few hundred atoms due to their unfavorable scaling behavior, which is at least cubic [O(N³)] for the
simplest mean-field approximations (Hartree-Fock, Kohn-Sham density functional theory). In the last two decades,
a multitude of methods has been developed that reduce the scaling behavior to linear for systems
with a significant HOMO-LUMO gap, allowing for the computation of molecular properties of systems with
more than 1000 atoms on single-processor machines.

The advent of general-purpose GPUs (GPGPU) in recent years promised significant speed-ups for scientific
high-performance computing. However, quantum chemical methods seem to pose a particularly difficult case
due to the heavy demand for computational resources. Thus, first implementations of the rate-determining
integral routines on GPUs were strongly limited to very small basis sets and employed intermediate
single-precision quantities. Furthermore, a straightforward and efficient adaptation of O(N) integral algorithms
for GPUs is not possible due to their inherent book-keeping, branching, random memory access, and process
interdependency.

We present general strategies and specific algorithms to efficiently utilize GPUs for electronic structure calculations,
with a focus on fine-grained data organization for efficient workload distribution, reducing inter-process
communication to a minimum, and minimizing the use of local memory.

Thus, we are able to use large basis sets and double-precision-only GPU-kernels in contrast to previously
suggested algorithms. The benefits of our approach will be discussed for the example of the calculation of the
exchange matrix, which is by far the most time-consuming step in SCF calculations.

Here, we recently proposed a linear-scaling scheme based on pre-selection (PreLinK), which has proven
to be highly suitable for massively parallel architectures.

Thus, we are able to perform SCF calculations on GPUs using larger basis sets to determine not only energies
and gradients, but also static and dynamic higher-order properties like NMR shieldings or excitation
energies. Apart from discussing the performance gain as compared to conventional ab initio calculations on
a single server, we also compare different architectures based on CUDA, OpenCL, MPI/OpenMP, and MPI/CUDA.

Furthermore, we present what is, to our knowledge, the first efficient use of GPUs for post-HF methods beyond the
mere use of GPUs for linear algebra operations, using the example of second-order Møller-Plesset perturbation
theory (MP2).
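As a loose, hedged illustration of the pre-selection idea described above (the actual PreLinK significance estimate is not reproduced here), the sketch below screens element pairs against cheap upper-bound estimates and emits a flat, contiguous task list, i.e. the kind of regular, branch-free workload that maps well onto GPU kernels. The bound matrix and all names are hypothetical.

# Hedged illustration, not the PreLinK formula: keep only element pairs whose cheap
# upper-bound estimate exceeds a threshold, and hand the GPU a flat, sorted task list.
import numpy as np

def preselect_significant_pairs(bound_estimate, threshold=1e-6):
    """bound_estimate[i, j] is a cheap upper bound on the magnitude of element (i, j)."""
    rows, cols = np.nonzero(bound_estimate > threshold)
    order = np.argsort(bound_estimate[rows, cols])[::-1]   # biggest contributions first
    return np.stack([rows[order], cols[order]], axis=1)    # contiguous work units

# Toy bound matrix that decays with |i - j|, mimicking the locality that makes
# linear-scaling screening possible in the first place.
n = 400
i = np.arange(n)
bounds = np.exp(-0.2 * np.abs(i[:, None] - i[None, :]))
task_list = preselect_significant_pairs(bounds)
print(f"{len(task_list)} of {n * n} element pairs survive the screening")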

PDF - 2.1 Mb

Download presentation

Shedding Light On Lithium/Air Batteries Using Millions of Threads On the BG/Q Supercomputer

Dr. Teodoro Laino, Research Staff Member – Mathematical and Computational Sciences,
IBM Research – Zurich.

Teodoro Laino received his degree in theoretical chemistry in 2001 (University of Pisa and
Scuola Normale Superiore di Pisa) and a doctorate in 2006 in computational chemistry at
the Scuola Normale Superiore di Pisa, Italy. His doctoral thesis, entitled “Multi-Grid QM/MM
Approaches in ab initio Molecular Dynamics”, was supervised by Prof. Dr. Michele Parrinello,
one of the pioneers in this field.

From 2006 to 2008 he worked as a post-doctoral researcher in the research group of Prof. Dr. Jürg Hutter
at the University of Zurich, where he developed algorithms for ab initio and classical molecular dynamics
simulations. Since 2008 he has been working in the department of Mathematical and Computational Sciences
at the IBM Research Laboratory in Zurich. The focus of his research is on complex molecular dynamics
simulations for industry-related problems in the fields of energy storage, life sciences and nano-electronics.

V. Weber, IBM Research – Zurich, Rüschlikon, Switzerland

A. Curioni, IBM Research – Zurich, Rüschlikon, Switzerland

 

In 2009, IBM Research embarked on an extremely challenging project, the ultimate goal of which is to deliver
a new type of battery that will allow an electric vehicle to be driven for 500 miles without intermediate recharging.
The battery considered the most promising candidate to achieve this goal is based on lithium and oxygen,
commonly known as the Lithium/Air battery, potentially delivering energy densities one order of magnitude larger
than those of state-of-the-art electrochemical cells.

With few exceptions, carbonate-based electrolytes, for instance propylene carbonate (PC) or ethylene carbonate
(EC), have been the preferred choice for most experimental setups related to Lithium/Air batteries to
date. By using massively parallel molecular dynamics simulations, we modeled the reactivity of a surface of
Li2O2 in contact with liquid PC, revealing the high susceptibility of PC to chemical degradation by the peroxide
anion.

Moreover, by using increasingly detailed and realistic simulations we were able to provide an understanding
of the molecular processes occurring at the cathode of the Li/Air cell, showing that the electrolyte plays
the key role in non-aqueous Lithium/Air batteries in producing the appropriate reversible electrochemical
reduction.

A crucial point when modeling such complex systems is the level of accuracy of DFT calculations, which is
key for improving the predictive capabilities of molecular modeling studies and for addressing material discovery
challenges.

In order to achieve a reliable level of accuracy we implemented a novel parallelization scheme for a highly
efficient evaluation of the Hartree–Fock exact exchange (HFX) in ab initio molecular dynamics simulations,
specifically tailored for condensed phase simulations. We show that our solutions can take great advantage
of the latest trends in HPC platforms, such as extreme threading, short vector instructions and high-dimensional
interconnection networks. Indeed, all these trends are evident in the IBM Blue Gene/Q supercomputer.
We demonstrate an unprecedented scalability up to 6,291,456 threads (96 BG/Q racks) with near-perfect
parallel efficiency, which represents a more than 20-fold improvement as compared to the current state of the
art. In terms of reduction of time to solution we achieved an improvement that can surpass a 10-fold decrease
of runtime with respect to directly comparable approaches.

By using the PBE0 hybrid functional (HFX), so as to enhance the accuracy of DFT-based molecular dynamics,
we characterized the reactivity of different classes of electrolytes with solid Li2O2. In this talk, we present an
effective way to screen different solvents with respect to their intrinsic chemical stability versus Li2O2 solid
particles [3]. Based on these results, we proposed alternative solvents with enhanced stability to ensure an
appropriate reversible electro-chemical reaction and finally contribute to the optimization of a key technology
for electric vehicles.

PDF - 1.9 Mb

Download presentation


Environmental Science

Wednesday 21 May – 16:00 to 17:20 – Sala d’Actes


Next generation pan-European climate models for multi- and many-core architecture

Jun She received a PhD from Lanzhou University in 1991 in Climate Dynamics. He has worked
on oceanography and climate modeling in China, Japan, the USA and Denmark for the past 20
years. Since 2007 he has been a science manager at DMI’s Centre for Ocean and Ice. He
has (co-)authored 50 publications on modeling weather, ocean, waves, climate and marine ecosystems,
and on the optimal design of observational networks.

Jacob Weismann Poulsen, Danish Meteorological Institute

Per Berg, Danish Meteorological Institute

Lars Jonasson, Danish Meteorological Institute

 

To generate more consistent and accurate climate information for climate adaptation and mitigation, high-resolution
coupled atmosphere-ocean-ice models are needed on large regional scales, e.g. pan-European and
Arctic-North Atlantic scales. The computational load of these models can be hundreds of times heavier than that of current
global coupled models (e.g. those used in IPCC AR5). The vision is to make the regional coupled models efficient on multi- and many-core architectures. To reach this goal, the most challenging part is the ocean model
optimization, as the model domain is highly irregular, ranging from straits a few hundred metres wide to open ocean
on a scale of a few thousand kilometres. Based on achievements made in the PRACE project ECOM-I (Next
generation pan-European coupled climate-ocean model – phase 1), this presentation will show methods and
results in optimizing a pan-European two-way nested ocean-ice model, focusing on coding standards,
I/O, halo communication, load balancing and multi-grid nesting. The optimization was tested on different architectures,
e.g. Curie thin nodes, Cray XT5/XT6 and Xeon Phi. The results also show that different model setups
lead to very different computational complexity. A single real-domain setup for Baffin Bay shows scalability to
16,000 cores and an Amdahl ratio of >99.5%. However, a pan-European setup with 10 interconnected nesting
domains only reaches a scalability of less than two thousand cores and an Amdahl ratio of 92%. Key issues in evaluating
the computational performance of models, such as run-to-run reproducibility, scalability, the Amdahl ratio and their
relation to job size, the ratio of computational (wet) points and multi-grids, will be addressed. Finally, a
roadmap for next-generation pan-European coupled climate models for many-core architectures is discussed.
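As a worked illustration of why the two quoted Amdahl ratios matter so much, assuming the ratio denotes the parallel fraction p in Amdahl's law, speedup(n) = 1 / ((1 - p) + p/n):

# Illustration only, assuming the "Amdahl ratio" is the parallel fraction p.
def amdahl_speedup(p, n):
    """Ideal speedup on n cores when a fraction p of the work is perfectly parallel."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.995, 0.92):        # Baffin Bay setup vs. pan-European nested setup
    print(f"p = {p:.3f}: speedup on 2,000 cores = {amdahl_speedup(p, 2000):6.1f}, "
          f"on 16,000 cores = {amdahl_speedup(p, 16000):6.1f}, "
          f"upper bound = {1.0 / (1.0 - p):6.1f}")

With p = 0.92 the ideal speedup can never exceed 12.5 regardless of core count, which helps explain why the nested pan-European setup stops scaling far earlier than the single-domain Baffin Bay case.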

Optimizing an Earth Science Atmospheric Application with the OmpSs Programming Model

George S. Markomanolis received his PhD in Computer Science from INRIA/ENS de Lyon
in 2014, on Performance Evaluation and Prediction of Parallel Applications. He holds an MSc
in Computational Science from the University of Athens, Greece, and a BSc in Mathematics from
the University of Ioannina, Greece. He has been an external collaborator of the Wolfgang Pauli Institute
in Vienna, Austria, where he parallelized serial applications from the physics field. Afterwards,
he worked as an engineer at the CNRS computing centre of the French national institute of nuclear and particle
physics. Currently, he is a senior engineer in the Earth Sciences department at the Barcelona Supercomputing
Center, where his work is to optimize an atmospheric model and prepare it for exascale machines.
This work is done under the Severo Ochoa programme.

 

The Earth Sciences Department of the Barcelona Supercomputing Center (BSC) is working on the development
of a new chemical weather forecasting system based on the NCEP/NMMB multiscale meteorological
model. In collaboration with the National Centers for Environmental Prediction (NOAA/NCEP/EMC), the
NASA Goddard Institute for Space Studies (NASA/GISS), and the University of California Irvine (UCI), the
group is implementing aerosol and gas chemistry inlined within the NMMB model. The new modeling system,
namely the NMMB/BSC Chemical Transport Model (NMMB/BSC-CTM), is a powerful tool for research into
physico-chemical processes occurring in the atmosphere and their interactions. We present our efforts on
porting and optimizing the NMMB/BSC-CTM model. This work is done under the Severo Ochoa programme, and its
purpose is to prepare the model for large-scale experiments and to increase the resolution of the executed domain.
However, in order to achieve high scalability of our application, the various parts of the code need to be optimized.
It is well known from the discussion about the exascale era that coprocessors will play
an important role. Currently there are two main types of coprocessors, GPUs and Intel Xeon Phi. In order to
use both approaches without the need to rewrite most of the code, the OmpSs programming model, which is
developed at BSC-CNS, is used. Through this procedure we extend the usage of our model by porting part
of our code to be executed on GPUs and Xeon Phi coprocessors. The performance analysis tool Paraver
is used to identify the bottleneck functions. Afterwards, the corresponding code is ported to OpenCL,
optimized for execution on GPUs and Xeon Phi respectively. We execute our model with various configurations
in order to test it under extreme load by enabling the chemistry modules, which take into consideration
many more species (water, aerosols, gases), and we observe that the bottleneck functions depend on
the case. We solve load balancing issues and, whenever possible, take advantage of the available cores
from the NVIDIA GPUs and Intel Xeon Phi coprocessors. To the best of our knowledge, the use of the OmpSs
programming model on an earth science application intended for future operational use is without precedent.

PDF - 1.9 Mb

Download presentation


Automotive / Engineering

Wednesday 21 May – 13:30 to 15:30 – Room VS208


INCITE in the International Research Community

Julia C. White is the Innovative and Novel Computational Impact on Theory and Experiment
(INCITE) program manager. INCITE is a peer-review allocation program to award time on
the US Department of Energy’s leadership-class supercomputers at Oak Ridge and Argonne
National Laboratories. INCITE enables researchers around the world to carry out unprecedented scientific and engineering simulations. White provides leadership and oversight of
INCITE from the call for proposals through peer-review and final awards. She previously held
management roles at Oak Ridge and Pacific Northwest National Laboratories and at Physical
Review B, an international journal specializing in condensed-matter phenomena and
materials physics. White holds a Ph.D. in chemistry from Indiana University-Bloomington
and an MBA from the University of Tennessee-Knoxville.

 

The Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program promotes
unprecedented scientific and engineering simulations through extremely large awards of computer time on
high-performance computers that are among the most powerful in the world. Successful INCITE projects deliver
high-impact science that could not otherwise be achieved without access to leadership-class systems at
the US Department of Energy’s Argonne and Oak Ridge Leadership Computing Facilities. INCITE does not
distinguish between funding sources or country of affiliation, instead selecting the research of highest impact
from the worldwide community of researchers.

Julia C. White, INCITE program manager, will highlight the history of the INCITE program and the role of
international researchers over the program’s ten-year history. White will describe the importance of a broad
geographical diversity of not just proposal applicants, but of peer-review panels that assess applications and
even the INCITE program itself.

Paul Messina of Argonne National Laboratory will speak about industry use of leadership-class resources;
White will focus on international access to these resources through the INCITE program.

PDF - 1.8 Mb

Download presentation

High fidelity multiphase simulations studying primary breakup

Mathis Bode is a research assistant and Ph.D. student in Prof. Pitsch’s group at the Institute
for Combustion Technology at RWTH Aachen University. He received his Master of Science
in Mechanical Engineering from RWTH Aachen University in 2012. His research interests
include high fidelity simulations of multiphase flows on massively parallel computers.

 

A variety of flows encountered in industrial configurations involve both liquid and gas. Systems to atomize liquid
fuels, such as diesel injection systems, are one example. The performance of a particular technical design
depends on a cascade of physical processes, originating from the nozzle internal flow, potential cavitation,
turbulence, and the mixing of a coherent liquid stream with a gaseous ambient environment. This mixing
stage is critical, and the transfer occurring between liquid and gas is governed by an interface topology.

The most serious gap in understanding of spray formation is primary breakup, but it is also the first physical
process to be modeled. This means that uncertainties in the modeling of primary breakup will influence, for
example, the design and performance of atomizers in diesel combustion systems all the way down to emission
and pollutant formation.

Typical diesel injection systems have outlet diameters of the order of 100 micrometers, and the resulting
smallest droplets and turbulent structures are much smaller still. This illustrates two of the major problems
for studying primary breakup: First, experiments characterizing the atomization process are very difficult due
to the small length scales. Second, huge meshes are required for simulating primary breakup because of the
necessity to resolve the broad spectrum of length scales in play within a single simulation. Thus, studying
primary breakup is not possible without using massively parallel code frameworks.

We use the CIAO code, which has already been run on up to 65,000 parallel cores on SuperMUC, in connection with
recently developed, highly accurate interface tracking methods. This so-called 3D unsplit forward/backward
Volume-of-Fluid method, coupled to a level-set approach, overcomes the traditional issues of mass conservation
and interface curvature computation in the context of multiphase simulations. Due to its robustness,
it also enables the simulation of arbitrarily high density ratios.

In this project, a novel approach combining spatial and temporal jet simulations of multiphase flows is used to study
primary breakup from first principles. The results of these high fidelity multiphase simulations are used to further
the understanding and accurate modeling of primary breakup in turbulent spray formation of industrial relevance.

PDF - 1.7 Mb

Download presentation

Fluid saturation of hydrocarbon reservoirs and scattered waves: Numerical experiments and
field study

Professor Dr. Vladimir A. Tcheverda is Head of the Department of Computational Methods
in Geophysics, at the Trofimuk Institute of Petroleum Geology and Geophysics of the Siberian
Branch of the Russian Academy of Sciences in Novosibirsk. He is Full Professor at the
Mathematical Department of the Novosibirsk State University (NSU) and chair of “Mathematical
Methods in Geophysics”.

His current research interests are: True amplitude prestack migration and full waveform inversion; Newton-like approaches to resolve non-linear ill-posed problems and their application for reliable numerical resolution
of inverse problems of wave propagation for heterogeneous elastic media (multicomponent seismic
data inversion and imaging); finite-difference/finite element simulation of seismic wave propagation through
multiscale media (cavernous fractured reservoirs).

V. Lisitsa, Novosibirsk State University, Russia

A. Merzlikina, Novosibirsk State University, Russia

G. Reshetova, Novosibirsk State University, Russia

 

Over the last decade the use of scattered waves has taken a significant place among the wide range of modern
seismic techniques. So far, however, their main area of application is the spatial localization of clusters of
subseismic-scale heterogeneities, like cracks, fractures and caverns; in other words, these waves are used just in
order to say “yes” or “no” to the presence of this microstructure. Therefore, the main goal of our efforts within
the framework of the PRACE Project Grant 2012071274 (supercomputer HERMIT at Stuttgart University) is
to understand what kind of knowledge about the fine structure of a target object, such as a cavernous fractured
reservoir, can be obtained from this constituent of the full seismic wave field. The key instrument for studying
the scattering and diffraction of seismic waves in realistic models is full-scale numerical simulation. In
order to correctly describe wave propagation in media with heterogeneities of both large scale (3D heterogeneous background) and fine scale (distribution of caverns and fracture corridors), we apply finite-difference
schemes with local refinement in time and space. On this basis we are able to simulate wave propagation in
very complicated, realistic models of 3D heterogeneous media with subseismic heterogeneities.

This simulation was done for a realistic digital model derived from all available data about specific deposits.
It turns out that fluid saturation has a very specific impact on the synthetic seismic image, which can be used as
a predictive criterion in real-life data processing and interpretation. This criterion has been confirmed by a real-life deep
well.

PDF - 5.3 Mb

Download presentation


Astrophysics and Mathematics
Wednesday 21 May – 16:00 to 17:20 – Room VS208


EAGLE: Simulating the formation of the Universe

Prof. Richard Bower studies the Universe, aiming to understand the formation and evolution
of galaxies. His work covers both theoretical and observational aspects, ranging from
the development of new observing techniques to the creation of new theoretical models for
the interaction of galaxies and the black holes that they host. Most recently,
he has been developing multi-scale techniques that allow direct hydrodynamic simulation of
galaxies within a representative volume of the Universe. This program, the EAGLE project,
has created a fascinating virtual universe which captures the properties of the observed
universe well.
Prof. Bower holds a Professorship at the Institute of Computational Cosmology at Durham University. He
lectures courses in Cosmology and Astronomical Statistics as well as developing new course material in
computational physics. He is also a part of the Ordered Universe project, a collaboration between physicists
and historians investigating the 13th century scientific works of Robert Grosseteste.

 

The EAGLE (Evolution and Assembly of Galaxies and their Environments) project aims to create a realistic
virtual universe on the PRACE computers. Through a suite of state of the art hydrodynamic simulations,
the calculations allow us to understand how the stars and galaxies we see today have grown out of small
quantum fluctuations that are seeded in the big bang. The simulations track and evolve dark matter and dark
energy, using physical processes such as metal-dependent gas cooling, the formation of stars, the explosion
of supernovae and the evolution of giant black holes. The resolution of the simulations is sufficient to resolve
the onset of the Jeans instability in galactic disks, allowing us to study the formation of individual galaxies in
detail. At the same time the largest calculation simulates a volume that is 100 Mpc on each side, recreating
the full range of galaxy environments, from isolated dwarfs to dense, rich galaxy clusters.

During my talk I will explain why this is a formidable challenge. The physics of galaxy formation couples
the large-scale force of gravity to the physics of star formation and black hole accretion. In principle, the
simulation needs to cover a dynamic range of at least 10^8 in length scale (from 100 Mpc to 1 pc). To make
matters worse, these scales are strongly coupled: while the small-scale phenomena are driven by large-scale
collapse, the small scales also feed back by driving gas flows on large scales. Even with large computer time
allocations on the fastest computers available today, resolving this full range directly is impossible and we
must adopt a multi-scale approach.

A key philosophy of the EAGLE simulations has been to use the simplest possible sub-grid models for star
formation and black hole accretion, and for feedback from supernovae and AGN. Using a stochastic approach,
efficient feedback is achieved without hydrodynamic decoupling of resolution elements. The small number of
parameters in these models is calibrated by requiring that the simulations match key observed properties of
local galaxies. Having set the parameters using the local Universe, I will show that the simulations reproduce
the observed evolution of galaxy properties extremely well.
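
As a rough illustration of the stochastic idea (and not of the actual EAGLE implementation), the sketch below
heats each neighbouring gas particle by a fixed, large increment with a probability chosen so that the expected
injected energy matches the available budget; all numbers are assumptions chosen only for the example.

    import numpy as np

    # Sketch of stochastic thermal feedback (illustrative only): rather than spreading
    # the available supernova energy thinly over every neighbouring gas particle, heat
    # each candidate by a fixed, large temperature increment with a probability chosen
    # so that the *expected* injected energy equals the budget.

    rng = np.random.default_rng(0)

    n_neighbours = 48                 # gas particles around a star particle (assumption)
    e_available = 1.0e51              # erg, supernova energy budget (assumption)
    e_per_event = 2.0e50              # erg needed to heat one particle by the fixed dT (assumption)

    p_heat = min(1.0, e_available / (n_neighbours * e_per_event))
    heated = rng.random(n_neighbours) < p_heat

    print(f"heating probability per neighbour: {p_heat:.3f}")
    print(f"particles heated: {heated.sum()} / {n_neighbours}")
    print(f"energy injected: {heated.sum() * e_per_event:.2e} erg "
          f"(expected {e_available:.2e} erg)")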

The resulting universe provides us with deep insight into the formation of galaxies and black holes. In particular,
we can use the simulations to understand the relationship between local galaxies and their progenitors at
higher redshift and to understand the role of interactions between galaxies and the AGN that they host. I will
present an overview of some of the most important results from the project, and discuss the computational
challenges that we have met during the project. In particular, we found it necessary to develop a new flavour
of the Smoothed Particle Hydrodynamics (SPH) framework in order to avoid artificial surface tension terms.

The improved formulation has the potential to influence other areas of numerical astronomy and could also
be used in more industrial applications such as turbine design or tsunami prevention where the SPH technique is commonly used.

The EAGLE project has shown that it is possible to simulate the Universe with unprecedented realism using
an extremely simple approach to the multi-scale problem. It has allowed us to meet the grand challenge of
understanding the origin of galaxies like our own Milky Way. I will briefly describe what can be learned from
this novel approach to sub-grid physics and how it might be applied to other areas.

PDF - 62.2 Mb

Download presentation

A massively parallel solver for discrete Poisson-like problems

Yvan Notay holds a PhD in Applied Science from the University of Brussels (ULB). He has spent most of his
career at the F.R.S.-FNRS, with ULB as his main place of work, and has been a Research Director since 2007.
He is an expert in numerical linear algebra, especially in iterative methods for the solution of (very) large
sparse linear systems. Outside his research community, he is mainly known as the author of the AGMG software
package, which offers non-experts a fairly easy-to-use implementation of an algebraic multigrid method that
solves linear systems from scalar elliptic PDEs in linear time.

 

AGMG (AGgregation-based algebraic MultiGrid solver) is a software package that solves large sparse systems of
linear equations; it is especially well suited to discretized partial differential equations. AGMG is an
algebraic solver that can be used as a black box and can thus substitute for direct solvers based on Gaussian
elimination. It uses a method of multigrid type, with coarse grids obtained automatically by aggregation of the
unknowns. Sequential AGMG is scalable in the sense that the time needed to solve a system is (under known
conditions) proportional to the number of unknowns.

AGMG has also been a parallel solver since the beginning of the project in 2008. Within the framework of a PRACE
project, we faced the challenge of porting it to massively parallel systems with up to several hundred thousand
cores. Some relatively simple yet not straightforward adaptations were needed. Thanks to them, we obtained
excellent weak scalability results: when the size of the linear system to solve is increased proportionally to
the number of cores, the time is first essentially constant and then increases only moderately, the penalty
never exceeding a factor of 2 (this maximal factor is seen on JUQUEEN when using more than 370,000 cores, that
is, more than 80% of the machine ranked eighth in the TOP500 supercomputer list). More importantly, when
considering scalability results, one should never forget that their relevance depends on the quality of the
sequential code one starts from. Comparative tests show that, on a single node, our solver is more than 3 times
faster than HYPRE, which is often considered the reference parallel solver for this type of linear system.
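
To give a flavour of what an aggregation-based multigrid cycle looks like, the toy Python example below builds a
piecewise-constant prolongation from aggregates of consecutive unknowns and applies a two-grid cycle to a 1D
Poisson matrix; AGMG itself uses far more sophisticated pairwise aggregation and recursive K-cycles, so this is
only a conceptual sketch with made-up problem sizes.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Toy two-grid aggregation-based multigrid cycle (illustrative only).

    def poisson_1d(n):
        """Standard 1D Poisson matrix with Dirichlet boundaries."""
        return sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")

    def aggregate(n, size=2):
        """Group consecutive unknowns into aggregates and build the
        piecewise-constant prolongation operator P."""
        agg = np.arange(n) // size
        nc = agg.max() + 1
        return sp.csr_matrix((np.ones(n), (np.arange(n), agg)), shape=(n, nc))

    def jacobi(A, x, b, omega=0.7, sweeps=2):
        Dinv = 1.0 / A.diagonal()
        for _ in range(sweeps):
            x = x + omega * Dinv * (b - A @ x)
        return x

    def two_grid(A, b, x, P, Ac):
        x = jacobi(A, x, b)                        # pre-smoothing
        r = b - A @ x
        ec = spla.spsolve(Ac.tocsc(), P.T @ r)     # coarse-grid correction (direct solve)
        x = x + P @ ec
        return jacobi(A, x, b)                     # post-smoothing

    n = 1024
    A = poisson_1d(n)
    b = np.ones(n)
    P = aggregate(n)
    Ac = (P.T @ A @ P).tocsr()                     # Galerkin coarse operator

    x = np.zeros(n)
    for it in range(30):
        x = two_grid(A, b, x, P, Ac)
    print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))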

PDF - 1.2 Mb

Download presentation


SHAPE

Wednesday 21 May – 13:30 to 15:30 – Room VS217


The SHAPE Programme for Competitive SMEs in Europe

Giovanni Erbacci graduated in Computer Science at the University of Pisa in Italy. From 1999 to 2011 he
coordinated the HPC Group at CINECA, responsible for HPC support and consultancy, promoting HPC activities
and methodologies and co-operating with both academic institutions and European initiatives.

Currently he leads the HPC Projects Division in the Supercomputing Applications and Innovation Department at
CINECA, supporting research and infrastructure HPC projects at both European and national level. He has
participated in various EC projects since the Fourth Framework Programme and has been active in PRACE since
the beginning.

He is a member of the PRACE Technical Board and in PRACE 3IP leads the Services for Industrial Users &
SMEs activity.

Giovanni Erbacci has wide experience in computational sciences, parallel architectures, parallel programming
models, application scaling and performance evaluation. Since 1992 he has organised and directed CINECA’s
Summer School on Parallel Computing.

Giovanni Erbacci is the author or co-author of several papers published in journals and conference proceedings,
and he is a member of the ACM.

 

The adoption of HPC technologies to perform extensive numerical simulations, investigate complex phenomena
and study new prototypes is crucial in helping SMEs to innovate products, processes and services and thus
become more competitive.

SHAPE, the SME HPC Adoption Programme in Europe, is a new pan-European programme supported by
PRACE. The Programme aims to raise awareness about HPC among European SMEs and provide them with
the expertise necessary to take advantage of the innovation possibilities created by HPC, thus increasing
their competitiveness. The programme allows SMEs to benefit from the expertise and knowledge developed
within the top-class PRACE research infrastructure.

The programme aims to progressively deploy a set of complementary services for SMEs, such as information,
training and access to computational expertise for co-developing a concrete industrial project to be
demonstrated using PRACE HPC resources.

The SHAPE Pilot is a trial programme launched to prove the viability and value of the SHAPE Programme, with
the objective of refining the details of the initiative and preparing its launch in a fully operational way.
The Pilot works with ten selected SMEs to introduce HPC-based tools and techniques into their business,
operational, or production environment.

This session presents some preliminary results of the Pilot, showing the work carried out together with the
selected SMEs to adopt HPC solutions.

PDF - 1.8 Mb

Download presentation

Design improvement of a rotary turbine supply chamber through CFD analysis

Roberto Vadori graduated in 1989 with a degree in Mechanical Engineering from Politecnico di Torino, where he
obtained a PhD in Machine Design in 1995. In the same year he became Assistant Professor in the Engineering
Faculty, Machine Design group. From 2000 he lectured on Computational Mechanics at the University of Rome
“La Sapienza”, Fraunhofer Institut Bremen and Kaiserslautern University. In 2001 he moved to Modena University
as Associate Professor. In 2003 he moved to industry, joining Altair Engineering as a Researcher in the
Methodology and Training Group. He taught Finite Element Method classes from 2004 to 2007 at Politecnico di
Torino (Alessandria campus), was an invited lecturer throughout those academic years in the Chassis Design
course of the Engineering Faculty, Politecnico di Torino, and taught at the PhD Summer School in Machine
Design. He is the author of more than 80 papers published in refereed national and international journals.
Currently he holds the position of director of numerical and mathematical modelling and design activities at
Thesan and Savio.

Claudio Arlandini holds a PhD in Nuclear Astrophysics from the University of Heidelberg. He worked as business
manager in the area of IT infrastructure management and data center operations at the CILEA Interuniversity
Consortium and, since the merger of CILEA into CINECA, has been involved in simulation and technology transfer
services for industry. He was Leader of WP9 “Industrial Applications Support” in PRACE-2IP and is coordinator
of CINECA activities in the I4MS project Fortissimo.

 

This work deals with the optimization of a volumetric machine. The machine is under active development, and
a prototype is already working and fully monitored in an experimental mock-loop setup. This prototype operates
under controlled conditions on a workbench, giving as output the efficiency of the machine itself. The main
goal is to increase that efficiency through the design and realization of the moving chambers in which the
fluid flows. To achieve this, extensive CFD modeling and simulation are required to perform virtual tests on
different design solutions and to measure the physical quantities assessing the performance of a given
geometry. The final goal is to design a better geometry of the different components, mainly the supply and
exhaust chambers, cutting down the time and resources needed to realize a physical prototype and limiting
physical realization to a single geometry of choice. Through an optimization strategy, the modeling should
then allow parametric studies of the key design parameters of the moving chambers in which the fluid flows,
in order to identify the main geometrical parameters able to drive the optimal configuration.

High Performance Computing facilities and open-source tools such as OpenFOAM are therefore of capital interest
for handling the complex physical model under consideration and performing a sufficient number of design
configuration analyses.

PDF - 1.8 Mb

Download presentation

Electromagnetic simulation for large model using HPC

José M. Tamayo was born in Barcelona, Spain, on October 23, 1982. He received the degree
in mathematics and the degree in telecommunications engineering from the Universitat
Politècnica de Catalunya (UPC), Barcelona, both in 2006. He received the Ph.D. degree in
telecommunications engineering from the Universitat Politècnica de Catalunya (UPC), Barcelona,
in 2011.

From 2004 to 2011, he was with the Telecommunications Department of the Universitat Politècnica de Catalunya
(UPC), Barcelona.

From April 2011 to April 2012 he worked as a postdoc in the DEOS department at ISAE, Toulouse, France. In May
2012, he joined Entares Engineering, now Nexio Simulation, Toulouse, France. His current research interests
include accelerated numerical methods for solving electromagnetic problems.

Pascal de-Reseguir, NEXIO Simulation

 

Nexio Simulation has recently started migrating its electromagnetic simulation software, developed for regular
personal computers (CAPITOLE-EM), to High Performance Computing systems (CAPITOLE-HPC). This has been possible
thanks first to the French HPC-PME initiative and then to the European SHAPE project. The HPC-PME initiative
is a programme designed to help and encourage Small and Medium-sized Enterprises (SMEs) to move towards HPC.
Under the SHAPE project we expect to scale up this initial step in terms of computational time, resource usage
and optimization.

Industry has become more and more demanding, asking for the simulation of very large problems. In the
electromagnetic domain in particular, one can very rapidly end up with full (dense) linear systems with several
million unknowns. The solution of these systems requires matrix compression techniques based on the physics of
the problem and on mathematical algorithms. When these techniques are not enough, they call for the use of HPC
with a large number of CPUs and a large amount of memory. The main workload in the migration to HPC systems is
the parallelization of the code, optimizing machine usage as well as memory handling depending on the
architecture of the particular machine.
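
The compression idea can be illustrated with a truncated SVD of the interaction block between two well-separated
groups of points, as in the hedged Python sketch below; production codes use fast approximations such as
adaptive cross approximation rather than an explicit SVD, and the kernel, cluster geometry and tolerance here
are assumptions for illustration only.

    import numpy as np

    # Illustrative low-rank compression of an off-diagonal interaction block by a
    # truncated SVD. Fast methods (e.g. adaptive cross approximation, hierarchical
    # matrices) avoid forming the block explicitly, but the principle is the same.

    rng = np.random.default_rng(1)

    # Two well-separated clusters of points: their interaction block is numerically low rank.
    src = rng.random((400, 3))
    obs = rng.random((400, 3)) + np.array([10.0, 0.0, 0.0])
    dist = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=-1)
    block = np.exp(-2j * np.pi * dist) / dist          # Helmholtz-like kernel (assumption)

    U, s, Vh = np.linalg.svd(block, full_matrices=False)
    rank = int(np.sum(s > 1.0e-6 * s[0]))              # keep singular values above tolerance
    Uk = U[:, :rank] * s[:rank]
    Vk = Vh[:rank, :]

    print("block", block.shape, "compressed to rank", rank)
    print("storage ratio:", (Uk.size + Vk.size) / block.size)
    print("max abs error:", np.abs(block - Uk @ Vk).max())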

PDF - 1.4 Mb

Download presentation

Novel HPC technologies for rapid analysis in bioinformatics

Dr Paul Walsh is the Chief Technology Officer of NSilico (www.nsilico.com), provider of the
world’s most easy-to-use data management and analytics software for the life sciences and
health care industries. He is also a Research Fellow in the Cork Institute of Technology (CIT)
and a Senior Visiting Research Fellow at the University of Edinburgh where he manages
research in medical informatics and bioinformatics.

He holds a Ph.D., M.Sc. and B.Sc. Hons in Computer Science from the National University of Ireland and
has a long list of publications including outstanding paper awards. He was recently awarded a distinction in
Project Management and has consulted on a wide range of projects ranging from start-up technology companies
to managing projects for global corporations. He is funded under national and international research
schemes such as the EU FP7 program where he oversees research in data analytics, machine learning
and high performance computing. He sits on numerous committees and editorial boards including Landes
Sciences Journal Bioengineered (https://www.landesbioscience.com/jo…). His latest research is
focused on bringing innovative high-performance computation techniques to bear on big data problems in
bioinformatics.

 

NSilico is an Irish-based SME that develops software for the life sciences sector, providing bioinformatics and
medical informatics systems to a range of clients. One of the major challenges that their users face is the
exponential growth of high-throughput genomic sequence data and the associated computational demands of
processing such data in a fast and efficient manner. Genomic sequences contain gigabytes of nucleotide data
that require detailed comparison with similar sequences in order to determine the nature of functional,
structural and evolutionary relationships. In this regard NSilico has been working with computational experts
from CINES (France) and ICHEC (Ireland) under the PRACE SHAPE programme to address a key problem: the rapid
alignment of short DNA sequences to reference genomes, by deploying the Smith-Waterman algorithm on an emerging
many-core technology, the Intel Xeon Phi co-processor. This presentation will give an overview of the technical
challenges that have been overcome during this project, the performance achieved and its implications, as well
as our immensely positive experience of working with PRACE within this successful collaboration.
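
For reference, the scoring recurrence at the heart of Smith-Waterman is compact; the sketch below is a
straightforward (and deliberately unoptimised) Python version with an illustrative scoring scheme, whereas the
SHAPE work vectorises and parallelises this kernel for the Xeon Phi.

    import numpy as np

    # Minimal Smith-Waterman local alignment (score only, linear gap penalty).
    # Match/mismatch/gap values below are illustrative choices.

    def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
        """Return the best local alignment score between sequences a and b."""
        H = np.zeros((len(a) + 1, len(b) + 1), dtype=np.int32)
        best = 0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                # Local alignment: scores are never allowed to drop below zero.
                H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
                best = max(best, H[i, j])
        return best

    print(smith_waterman_score("ACACACTA", "AGCACACA"))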

PDF - 2.5 Mb

Download presentation

HPC application to improve the comprehension of ballistic impacts behaviour on composite
materials

Paolo Cavallo holds an MSc in Nuclear Engineering from Politecnico di Torino. He has more than 20 years of
experience in CAE and methodology development. He was Project Manager at the FIAT Research Center, Technical
Director at Altair Engineering Italy, and General Manager at the ISDG Research Center. He is now Technical
Director of AMET Srl. AMET’s mission is to provide its customers with best-in-class solutions – i.e.
methodologies, technologies and engineering services – for the design and development of industrial products,
exploiting an integrated multi-domain model-based approach to assure optimum system performance.

Claudio Arlandini holds a PhD in Nuclear Astrophysics from the University of Heidelberg. He worked as business
manager in the area of IT infrastructure management and data center operations at the CILEA Interuniversity
Consortium and, since the merger of CILEA into CINECA, has been involved in simulation and technology transfer
services for industry. He was Leader of WP9 “Industrial Applications Support” in PRACE-2IP and is coordinator
of CINECA activities in the I4MS project Fortissimo.

 

The damage that occurs in composite materials subjected to a ballistic impact is a complex phenomenon.

Understanding the influence of the parameters describing the material behavior is therefore not a
straightforward task; moreover, because these influences are mutually connected, designing a new structure
with improved resistance to ballistic impacts is very hard. Only a massive use of Design of Experiments (DOE)
analyses, supported by suitable computing resources, can lead to a better understanding of the problem and to
the identification of the parameters that most influence the physical phenomenon.
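
As a simple illustration of what such a DOE campaign involves, the Python sketch below enumerates a
full-factorial design over a handful of material parameters; the parameter names and levels are assumptions
made for the example and are not those of the actual study.

    from itertools import product

    # Illustrative full-factorial Design of Experiments over a few hypothetical
    # parameters that might govern the ballistic response of a composite laminate.

    levels = {
        "fibre_tensile_strength_MPa":  [2200.0, 2600.0, 3000.0],
        "matrix_shear_modulus_GPa":    [3.0, 4.5],
        "interlaminar_toughness_kJm2": [0.4, 0.8, 1.2],
        "ply_thickness_mm":            [0.125, 0.25],
    }

    # Each run is one combination of levels, i.e. one impact simulation to launch.
    runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
    print(f"{len(runs)} simulation runs in the full-factorial design")
    print("first run:", runs[0])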

We present an overview of the methodology used in this research together with the first results obtained, and
their relevance in the context of the industrial manufacturing of composite materials.

PDF - 736.6 kb

Download presentation

PRACE SHAPE Project: OPTIMA pharma GmbH

Ralph Eisenschmid, born in Germany, studied and graduated in Process Engineering at the University of
Stuttgart. With long-term experience in R&D and plant engineering, he joined Optima pharma in 2011 as an R&D
engineer for process development. In 2012 he successfully introduced numerical methods and simulations at
Optima pharma using commercial multiphysics toolboxes. With the friendly help and consulting of HLRS staff
(the HPC centre of the University of Stuttgart), he discovered the advantages and performance of open-source
toolboxes such as OpenFOAM for CFD. Since 2013 he has been running CFD simulations on large HPC systems at
HLRS. His first HPC experience was running airflow simulations in clean rooms with OpenFOAM.

Bärbel Große-Wöhrmann, HLRS

 

OPTIMA pharma produces and develops filling and packaging machines for pharmaceutical products. Sterile
filling lines are enclosed in clean rooms, and detailed, reliable knowledge of the airflow inside the clean
rooms would enhance the design of the filling machines and support the CAE work. The goal of this project is
to simulate the airflow with OpenFOAM while meeting the requirements of industrial production.

We looked for the best strategy for the generation of very large meshes, including domain decomposition and
reconstruction using the standard tools provided by OpenFOAM. We then tested and compared different turbulence
models on large meshes and studied the scalability of the relevant OpenFOAM solvers. Overall, we found a
compromise between the required mesh resolution and the feasible mesh size which allows reliable simulations
of the airflow in the entire clean room. We found that serial tools such as decomposePar become walltime- and
memory-critical bottlenecks when performing CFD with OpenFOAM on large grids with more than 50 million cells.
Results will be presented in the talk.
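
For context, decomposePar splits the mesh and fields among MPI ranks according to the case's
system/decomposeParDict; a minimal dictionary of the kind typically used for such runs might look like the
following, where the subdomain count is an illustrative assumption rather than the project's actual setting.

    // system/decomposeParDict (illustrative sketch)
    FoamFile
    {
        version     2.0;
        format      ascii;
        class       dictionary;
        object      decomposeParDict;
    }

    numberOfSubdomains  512;    // number of MPI ranks (assumption)

    method              scotch; // graph-based partitioning, no extra coefficients needed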

PDF - 12.8 Mb

Download presentation

Testing LES turbulence models in race boat sail

SHAPE Pilot Project – with the involvement of Juan Yacht Design

Herbert Owen graduated in Mechanical Engineering from the Universidad de Buenos Aires, Argentina. He received
his PhD in Civil Engineering from the Technical University of Catalonia (UPC), Barcelona, Spain, in 2009.

He started his research activity in Computational Fluid Dynamics in 1999 as a Junior Researcher at the Center
for Industrial Research of the Techint Organization, a company involved in steelmaking in Argentina.

In 2003 he moved to Barcelona to start his PhD on “A Finite Element Model for Free Surface and Two Fluid
Flows on Fixed Meshes” at the UPC Technical University of Catalonia, which he finished in 2009. He then moved
to the Barcelona Supercomputing Center, where he continues to work mainly in Computational Fluid Dynamics
using the Finite Element Method and participates in the development of the parallel code Alya. He is the
author of more than ten articles in peer-reviewed journals and a similar number of conference publications.
His main research areas are: free surface and two-fluid flows, mould filling problems, ship hydrodynamics,
turbulence modeling, pressure segregation schemes and finite element stabilization techniques.

 

Currently, race boat design depends more heavily on the CFD modeling of turbulent free surface flows than
on tank and wind tunnel testing. Simulations are cheaper, faster and more reliable than traditional tests for
boat design. Enhanced flow visualization and force decomposition provide much richer information than can be
measured in tank tests, leading to a much better understanding of the flow phenomena. The early adoption of
RANS CFD has been a key competitive advantage in the design of America’s Cup and Volvo Ocean Race winning
boats. Nowadays commercial RANS CFD codes have become standard practice, and more innovative simulation tools
would provide a technological advantage. RANS models work well for most problems, but their accuracy is
reduced when there are important regions of separated flow. This happens at the boat sails for certain wind
directions. Large eddy simulation (LES) turbulence models are needed for such flows.

In this work, we test LES models implemented in the finite element CFD code Alya for the flow around boat
sails in conditions where RANS models fail. Alya uses a Variational Multiscale formulation that can account
for the LES modeling relying only on the numerical model. Alternatively, eddy viscosity models such as the
WALE model can be used. The results obtained with these models will be compared to results obtained with RANS
on the same mesh, to give the company JYD a better idea of the advantages this new technology could bring to
their work and of the feasibility of incorporating it into their available tools.
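
For reference, the WALE eddy viscosity mentioned above is commonly written in the literature in the following
form; the model constant C_w and the exact formulation used in Alya may differ and should be checked against
the code's documentation.

    \nu_t = (C_w \Delta)^2
            \frac{\big(S^d_{ij} S^d_{ij}\big)^{3/2}}
                 {\big(\bar{S}_{ij}\bar{S}_{ij}\big)^{5/2} + \big(S^d_{ij} S^d_{ij}\big)^{5/4}},
    \qquad
    S^d_{ij} = \tfrac{1}{2}\big(\bar{g}^{\,2}_{ij} + \bar{g}^{\,2}_{ji}\big)
               - \tfrac{1}{3}\,\delta_{ij}\,\bar{g}^{\,2}_{kk},
    \qquad
    \bar{g}_{ij} = \frac{\partial \bar{u}_i}{\partial x_j},
    \quad
    \bar{g}^{\,2}_{ij} = \bar{g}_{ik}\,\bar{g}_{kj}.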


Plenary Session

Thursday 22 May – 09:00 to 12:30


Observing the bacterial membrane through molecular modeling and simulation

Matteo Dal Peraro, Ph.D. is Tenure Track Assistant Professor at the School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL). He is Head of the Laboratory for
Biomolecular Modeling (LBM). His research at the LBM, within the Interfaculty Institute of
Bioengineering (IBI), focuses on the multiscale modeling of large macromolecular systems.

 

The physical and chemical characterization of biological membranes is of fundamental importance for
understanding the functional role of lipid bilayers in shaping cells and organelles, steering vesicle
trafficking and promoting cellular signaling. In bacteria this cellular envelope is highly complex, providing
a robust barrier to permeation and mechanical stress and an active defense against external attack. With the
constant emergence of drug-resistant strains, which pose a serious threat to global health, understanding the
fine molecular details of the bacterial cell wall is of crucial importance to aid the development of
innovative and more efficient antimicrobial drugs. In this context, molecular modeling and simulation stand as
powerful resources for probing the properties of membranes at the atomistic level. In this talk I will present
the efforts of my laboratory (i) to create better models of bacterial membrane constituents, (ii) to develop
efficient tools for assembling realistic bacterial membrane systems, and (iii) to investigate their
interactions with signaling protein complexes and antimicrobial peptides, exploiting the computational power
of current HPC resources.

PDF - 48.9 Mb

Download presentation

Observations on the evolution of HPC for Science and Industry

Dr. Paul Messina is Director of Science at the Argonne Leadership Computing Facility
(ALCF) of Argonne National Laboratory. Previously, Dr. Messina served as founding Director
of California Institute of Technology’s (Caltech) Center for Advanced Computing Research,
as Assistant Vice President for Scientific Computing, and as Faculty Associate for Scientific
Computing, Caltech.

While at Caltech he conceived, formed, and led the Consortium for Concurrent Supercomputing, which
created and operated the Intel Touchstone Delta System, at that time the world’s most powerful scientific
computer, and held a joint appointment at the Jet Propulsion Laboratory as Manager of High-Performance
Computing and Communications. During his Caltech years he also served as Principal Investigator for the
CASA gigabit network testbed, as Chief Architect for the National Partnership for Advanced Computational
Infrastructure (NPACI), as principal investigator for the Scalable I/O Initiative, and as co-principal investigator
for the National Virtual Observatory and TeraGrid.

During a leave from Caltech in 1999-2000, he led the DOE-NNSA Accelerated Strategic Computing Initiative.

In his first association with Argonne from 1973-1987, he held a number of positions in the Applied Mathematics
Division and was the founding Director of the Mathematics and Computer Science Division.

 

Scientific computing has advanced dramatically during the last four decades, despite several upheavals in
computer architectures. The evolution of high-end computers in the next decade will again pose challenges
as well as opportunities. The good news is that many applications are able to utilize today’s massive levels
of parallelism, as will be shown by presenting a sampling of varied scientific, engineering, and industrial applications
that are using high-end systems at the Argonne Leadership Computing Facility and other centers.

As we look towards the use of exascale computers, availability of application software and building blocks is
as always a key factor. This is especially the case for industrial users but is also true for many academic and
research laboratory users. Support is needed to enable the transition of widely used codes, programming
frameworks, and libraries to new platforms and evolution of capabilities to support the increased complexity
of the applications that are enabled by the more powerful systems.

Providing access to state-of-the-art systems — and training on their use — to interested industrial and academic
researchers is an effective approach and should be used more widely. Training is also an important factor
in enabling the productive use of HPC. Few university courses teach scientists and engineers how to use
leading-edge HPC platforms effectively, how to apply software engineering practices, how to build and maintain
community codes, what high-quality software tools and building blocks are available, and how to work in
teams — yet all these skills are necessary in the use of HPC.

Finally, close involvement of applications experts in guiding the design of future hardware and
software, supplemented by funding to address development of key technologies and features,
has proven to be effective and will be needed more than ever in the exascale era and beyond.

PDF - 5.7 Mb

Download presentation

Economic and scientific impact of collaboration between science and industry

Moderator:

An award-winning senior science writer and national newspaper journalist, Dr
Tom Wilkie co-founded Europa Science in 2002. With a background in mathematics and the
owner of a PhD in the theory of elementary particle physics, he is a former Features Editor of
New Scientist, former Science Editor for The Independent, and former Head of Bio-Medical
Ethics at the Wellcome Trust. He now serves as Editor-in-Chief across all Europa Science
publications.

Panel:

Luís O. Silva is Professor of Physics at Instituto Superior Técnico, Lisbon, Portugal, where he leads the Group for Lasers and Plasmas. He obtained his degrees (MSc 1992, PhD 1997 and Habilitation 2005) from IST. He was a post-doctoral researcher at the University of California Los Angeles from 1997 to 2001. His scientific contributions are focused in the interaction of intense beams of particles and lasers with plasmas, from a fundamental point of view and towards their applications for secondary sources for biology and medicine.

Luís O. Silva has authored more than 150 papers in refereed journals and three patents, and has given invited talks at the major plasma physics conferences and served on the program and selection committees of conferences and prizes in Europe, US and Japan. He is a member of the International Scientific Advisory Board of ELI – Beamlines, of the Scientific Steering Committee of PRACE, and of the National Council for Science and Technology (reporting to the Prime Minister of Portugal). He has supervised 6 PhD students and 7 post-doctoral fellows whose work has led to several national and international prizes. He was PI in more than 20 projects funded by the Portuguese Science Foundation, ESA and EU, in EU supercomputing projects, by NVIDIA, and the Rutherford Appleton Laboratory. He was awarded an Advanced Grant from the European Research Council in 2010, being the youngest in “Fundamental Constituents of Matter” and one of the youngest scientists overall to be awarded an Advanced Grant.

He was awarded the 2011 Scientific Prize of the Technical University of Lisbon, the IBM Scientific Prize 2003, the 2001 Abdus Salam ICTP Medal for Excellence in Nonlinear Plasma Physics by a Young Researcher, and the Gulbenkian Prize for Young Researchers in 1994. He was elected Fellow of the American Physical Society and to the Global Young Academy in 2009.

Jean-François Lavignon joined Bull in 1998, where he is in charge of collaborative R&D.

At Bull, he has been involved in research strategy and developing emerging businesses. Before joining Bull he served in several positions related to IT research. He has experience in parallel computing, computer architecture and signal and image processing. Jean-François Lavignon graduated from Ecole Polytechnique in 1984 and ENSTA (Ecole Nationale des Techniques Avancées) in 1986. He then spent one year at Stanford University as invited researcher. He is now the Chairman of ETP4HPC, the European Technology Platform for HPC.

Michael E. Papka, PhD is a computer scientist whose research is focused on the visualization
and analysis of large data from simulation and experimental sources. His interests
include the use of advanced technology to enhance this research and to enable
scientific collaboration. He is the director of the Argonne Leadership Computing Facility
(ALCF), home to one of the world’s fastest supercomputers dedicated to open science,
and the deputy associate laboratory director of the Computing, Environment and Life
Sciences (CELS) directorate at Argonne, where he supports programmatic efforts that
contribute to or benefit from high performance computing. In addition to his duties and
research efforts at Argonne, Mike is a member of the computer science faculty at Northern
Illinois University, where he teaches courses on data visualization, data structures,
and algorithm analysis. He is also a senior fellow of the University of Chicago/Argonne
Computation Institute. Mike earned a master’s degree and doctorate in computer science from the University
of Chicago, a master’s degree in computer science and electrical engineering from the University of Illinois at
Chicago, and a bachelor’s degree in physics from Northern Illinois University.

Dr. Francine Berman is the Edward P. Hamilton Distinguished Professor in Computer Science at Rensselaer Polytechnic Institute. She is a Fellow of the Association for Computing Machinery (ACM) and a Fellow of the IEEE. In 2009, Dr. Berman was the inaugural recipient of the ACM/IEEE-CS Ken Kennedy Award for “influential leadership in the design, development, and deployment of national-scale cyberinfrastructure.”

Prior to joining Rensselaer, Dr. Berman was the High Performance Computing Endowed Chair in the Jacobs School of Engineering at UC San Diego. From 2001 to 2009, Dr. Berman served as Director of the San Diego Supercomputer Center (SDSC) where she led a staff of 250+ interdisciplinary scientists, engineers, and technologists. From 2009 to 2012, she served as Vice President for Research at Rensselaer Polytechnic Institute, stepping down in 2012 to lead U.S. participation in the Research Data Alliance (RDA), an emerging international organization created to accelerate global data sharing and exchange. Dr. Berman is co-Chair of the inaugural leadership Council of the RDA and Chair of RDA/United States.

Dr. Berman currently serves as co-Chair of the National Academies Board on Research Data and Information, as Vice-Chair of the Anita Borg Institute Board of Trustees, and is a member of the National Science Foundation CISE Advisory Board. From 2007-2010, she served as co-Chair of the US-UK Blue Ribbon Task Force for Sustainable Digital Preservation and Access. For her accomplishments, leadership, and vision, Dr. Berman was recognized by the Library of Congress as a “Digital Preservation Pioneer”, as one of the top women in technology by BusinessWeek and Newsweek, and as one of the top technologists by IEEE Spectrum.

Alexander F. Walser is Managing Director at the Automotive Simulation Center Stuttgart e.V. – asc(s. He received a diploma in Civil Engineering in the area of modelling and simulation methods from the University of Stuttgart in 2011. After completing his studies he worked on research projects in the fields of structural mechanics, crashworthiness, and shape and topology optimization. Since 2013 he has been responsible for acquiring and managing HPC projects and new research fields at the asc(s.

Dr. Augusto Burgueño Arjona is currently Head of Unit “eInfrastructure” at European Commission Directorate General for Communications Networks, Content and Technology and General manager and coach at Coach Mundi ASBL. Previously he served as Head of Unit “Finance” Directorate General for Communications Networks, Content and Technology at European Commission and Head of inter-Directorate General Task Force IT Planning Office at European Commission.

Matteo Dal Peraro, Ph.D. is Tenure Track Assistant Professor at the School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL). He is Head of the Laboratory for Biomolecular Modeling (LBM). His research at the LBM, within the Interfaculty Institute of Bioengineering (IBI), focuses on the multiscale modeling of large macromolecular systems.

Kenneth Ruud, Chair of the PRACE SSC

Jürgen Kohler, Chair of the PRACE IAC

 

One of the overarching goals of Horizon 2020 is to foster economic growth and create jobs in Europe. By connecting
science and innovation, Horizon 2020 is helping to achieve this while putting emphasis on excellent
science, industrial leadership, and tackling societal challenges.

The panelists will discuss opportunities and challenges for improving the collaboration between
science and industry to achieve this goal. Successful examples, best practices and lessons learned
will be presented from different perspectives and the role of funding agencies will be explained.


User Forum General Meeting


Workshop on exascale and PRACE prototypes

Prototypes

  • P1: On-die integrated CPU and GPU (PSNC) – Radek Januszewski
  • P2: Eurora (Cineca) – Carlo Cavazzoni
  • P3: Scalable Hybrid (CSC) – Sami Saarinen
  • P4: Mont-Blanc (BSC) – Alex Ramírez
  • P5: DEEP and DEEP-ER (JSC) – Estela Suárez

Alternative cooling technologies and heat re-use session

  • C1: Immersion cooling (PSNC) – Radek Januszewski
  • C2: Cold plate technology (CINECA) – Carlo Cavazzoni
  • C3: Direct hot-water cooling and heat re-use (LRZ) – Torsten Wilde
  • X1: Exascale integrated I/O subsystem (JSC) – Michael Stephan
  • X2: The case of ARM+GPU (BSC) – Filippo Mantovani