PRACEdays15 Presentations

Welcome & Keynotes

Tuesday 26 May, 2015

WELCOME

Sergi Girona, Chair of the PRACE Board of Directors

Sergi Girona is Chair of the Board of Directors of PRACE, as well as Director of the Operations Department of the Barcelona Supercomputing Center (BSC). He has been a member of the BoD of PRACE since its creation in 2010 and is currently both its Chair and Managing Director. He holds a PhD in Computer Science from the Technical University of Catalonia. In 2001, when EASi Engineering was founded, Sergi became the company's Director for Spain and the R&D Director for its German headquarters. In 2004, he joined BSC for the installation of MareNostrum in Barcelona. MareNostrum was the largest supercomputer in Europe at that time, and it maintained this position for 3 years. Sergi was responsible for the site preparation and the coordination with IBM for the system installation. Currently, he manages the Operations group, with responsibility for User Support and System Administration of the different HPC systems at BSC.

FOSSILS, PHYSICS AND FAST COMPUTERS UNLOCKING A VIRTUAL PAST

William Sellers, Faculty of Life Sciences, The University of Manchester

William Sellers is a computational zoologist interested in the use of numerical techniques for investigating morphological, physiological and ecological factors in vertebrate evolution. He has a background in zoology and industrial scientific experience in computer modelling and image analysis. He runs the Animal Simulation Laboratory based at the University of Manchester, with current external funding for two full-time staff and two PhD students. In 2012, he was awarded a Japan Society for the Promotion of Science visiting fellowship to Kyoto University. He is a member of the NERC peer review panel and a member of the EPSRC grant review college. He is on the editorial board of Folia Primatologica and a fellow of the Higher Education Academy. He was Programme Director of Zoology at Manchester between 2009 and 2012. He has been re-elected as a council member of the Primate Society of Great Britain for the third time and recently completed a one-year post as President of the Anthropology and Archaeology Section of the British Science Association. In addition, he does considerable public engagement work, including science festivals, museum days, and regular appearances on international television and radio. His research interests are: Evolution of Vertebrate Locomotion, Ancient Pigment Preservation, Biomechanics (both laboratory and field based), and Comparative Functional Anatomy. Dr Sellers has over 50 publications.

Abstract

The past is a fascinating place. It can tell us how we came to be as we are today, and it contains a huge range of bizarre creatures that are no longer alive. However, since we do not yet have a suitable time machine, all our knowledge about the distant past comes from evidence preserved in the rocks around us. The most important source of evidence is fossils. These are the preserved remains of animals and plants, and they have been collected and studied by geologists for hundreds of years. Nowadays, however, other disciplines want to get in on the fun. Engineers, physicists and computer scientists have developed techniques that help us find out more about fossil organisms. This talk will concentrate on what we can learn from studying the mechanics of fossil organisms using high performance computers. It will demonstrate the way early humans moved and what this tells us about the origins of modern humans. It will also show how fast and how heavy the largest dinosaurs were and what this means about the way they lived. But most importantly it will explain how we can actually answer these questions scientifically and avoid some of the guesswork and exaggeration that has happened in the past.

LISTENING TO BLACK HOLES WITH SUPERCOMPUTERS

Sascha Husa, Relativity and Gravitation Group at the University of the Balearic Islands

Sascha Husa’s research is focused on numerical relativity and black hole physics, more specifically the modeling of sources of gravitational waves with high-performance computing. He is currently an associate professor at the University of the Balearic Islands in Palma de Mallorca, Spain, which he joined as assistant professor in 2008, and is co-Principal Investigator (PI) of the UIB gravitational wave effort. Husa received his PhD from the University of Vienna (Austria) in 1998, and his habilitation in theoretical physics from the University of Jena (Germany) in 2006. He has worked as a postdoctoral research associate at the University of Vienna (1998), the University of Pittsburgh (1998-2000), the Max Planck Institute for Gravitational Physics (2000-2005 and 2007-2008) and the University of Jena (2005-2007). Husa’s contributions to numerical relativity range from its mathematical foundations to binary black hole physics and the interface to gravitational wave data analysis, and he has co-authored more than 100 scientific publications, with more than 5800 citations and an h-index of 45. The main focus of his recent work has been the programme of “phenomenological waveform modeling”, which was started in 2006 as a collaboration involving the Max Planck Institute for Gravitational Physics in Germany, the University of Jena and the University of the Balearic Islands. Taking this programme further, he currently leads the BBHMAP project, a collaboration of about 20 scientists from Europe, India, South Africa and the USA that uses the top-level European supercomputing infrastructure to explore the parameter space of black hole binaries and develop analytical waveform models. He received a grant of 16.7 million CPU hours in the 3rd PRACE project call (2011-2012), and 37 million hours as a continuation project in the 5th PRACE Call for Proposals for Project Access (2012-2013). Since 2006 Husa has given several invited talks per year at international conferences and summer schools, and he has co-organized a number of meetings in the field, most recently the Numerical Relativity - Data Analysis meeting 2013 in Mallorca and the extended programme “Dynamics of General Relativity” at the Erwin Schrödinger Institute for Mathematical Physics in Vienna in 2011 and 2012.

Abstract

One century after Einstein’s theory of general relativity revealed space and time as dynamical entities, a new generation of gravitational wave detectors is starting operation, and the first detection of gravitational wave events is expected to push open a new window on the universe within the next 5 years. The experimental challenge to meet the tremendous sensitivity requirements of GW detectors is paralleled by the computational modelling challenge to accurately predict the complicated dependence of the wave signals on the masses and spins of the black holes. In this talk I will report on a program to explore the gravitational wave signatures of coalescing black holes by solving the Einstein equations with high order finite difference mesh refinement methods for judiciously chosen cases, and on the synthesis of analytical models from our numerical data and perturbative results. These models are already used to analyse data from gravitational wave detectors and will help to identify the first such signals ever to be observed.

HPC SIMULATION AT EDF ENABLING ENERGY CHALLENGES

Ange Caruso, Information Technologies Program Manager, Electricité de France R&D

Ange Caruso obtained his PhD in 1988, in the field of Energy Transfer and Combustion. From 1989 to 1998, he worked in the field of CFD, developing software using finite element and finite volume methods. In 1999, he was project manager for the numerical simulation of the behavior of PWR vessels in EDF nuclear power plants, using CFD, thermal, structural mechanics and neutronics software. From 2001 to 2008, he was responsible for various study groups dealing with subjects such as nuclear accidents, thermal-hydraulics and chemical effects, the mechanical behavior of nuclear components, fuel storage, and code development. In 2008, he became Deputy Manager of the Fluid Mechanics, Energy and Environment Department. Since 2012, Ange Caruso has been the Information Technologies Program Manager at EDF R&D, driving projects on Advanced Simulation, Information and Communication Technologies, and Complex Systems Modeling.

Abstract

An industrial utility like EDF needs to better understand the behavior of energy infrastructures such as power plants (nuclear, thermal, renewable, …) and electrical networks, as well as energy management. The objective is to increase safety, performance and lifetime, and to optimize processes. To reach these goals, it is necessary to better understand the various phenomena encountered inside these infrastructures, for example in nuclear components (containment building, PWR vessel, steam generator, fuel rods), networks (electrical grids) or energy management (quality of electricity), in order to gain margins. This is done using various numerical software packages developed at EDF R&D. The use of HPC simulation allows new approaches and opens new perspectives. Some applications will be shown.

Welcome & Keynotes

Wednesday 27 May, 2015

WELCOME FROM THE LOCAL HOST

Jean-Christophe (JC) Desplat, Director of the Irish Centre for High-End Computing (ICHEC)

Jean-Christophe (JC) Desplat has been Director of the Irish Centre for High-End Computing (ICHEC) since 2012. He joined ICHEC in 2005 as Technical Manager and, with his expertise and guidance, ICHEC is now one of the leading technology centres in Europe and a sought-after technology partner within industry and the semi-state sector. Prior to joining ICHEC, JC spent ten years at the Edinburgh Parallel Computing Centre (EPCC) in the UK. There, he held a number of technical and European co-ordination roles, including pioneering work leading to the original proposal to establish the Distributed European Infrastructure for Supercomputing Applications (DEISA). JC has been Honorary Professor of Computational Science at the Dublin Institute for Advanced Studies since 2008 and Adjunct Professor at NUI Galway since 2012. He has been a member of the UK EPSRC e-Infrastructure Strategic Advisory Team since 2011 and has served on a number of management and advisory bodies. These include the Digital Humanities Observatory Management Board (2008-2012), the Environmental Protection Agency (EPA) Climate Change Coordination Committee (2008-2013) and the ICT Sub-Committee of the Irish Medical Council (2011-2013). JC is one of the original authors of the e-INIS white paper describing a vision for the establishment of a National e-Infrastructure in Ireland, and a co-investigator of the €13M Higher Education Authority (HEA) PRTLI4-funded e-INIS project. He is also the Principal Investigator for a number of awards from Science Foundation Ireland (SFI), the HEA, the Dept. of Jobs, Enterprise & Innovation, the Dept. of Education & Skills, the Environmental Protection Agency (EPA) and the European Commission FP7.

OPENING ADDRESS

Sanzio Bassini, Chair of the PRACE Council

In 1981 Sanzio Bassini became responsible for the scientific computing systems installed at CINECA. In 1984 he joined the Italian Supercomputer Project that introduced the first supercomputer of this class in Italy. In 1986 he was convenor of the Operating System Committee of the Cray User Group independent conference. In 1989 he was responsible for the project to migrate the Consortium production environment towards UNIX. In 1992 he was appointed Team Leader of the CINECA Supercomputing Group. From 1992 to 1996 he was a member of the EC High Performance Computing & Networking committee. In 1996 he was appointed CINECA High Performance System Division Manager. From 2006 to 2009 he was CINECA Director of the System and Technology Department for the Development and Management of the CINECA Information System. In 2010, when the Italian Ministry of Education, Universities and Research delegated CINECA to represent Italy within the PRACE aisbl for the implementation of the European supercomputing research infrastructure, he was appointed CINECA’s Director of the Supercomputing Application & Innovation Department for the development of technical and scientific computing services, innovation and technology transfer activities. In his position as Technical Director he has been, since 2006, a member of the CINECA Consortium Technical Committee. In his career he has been project leader of many European projects funded by DG INFSO and DG Research and has participated in many infrastructure projects in the area of information technology, networking and supercomputing. He has been Chair of the PRACE Council since 3 June 2014.

PRESENT STATUS OF RIST IN PROMOTION OF HIGH PERFORMANCE COMPUTING INFRASTRUCTURE IN JAPAN

Masahiro Seki, President of RIST, Japan

Masahiro Seki is currently the president of RIST (Research Organization for Information Science and Technology). RIST has been serving as the Registered Organization to promote shared use of the K computer since 2012. He joined RIST as president in 2006. He was the Director General of the Naka Fusion Research Establishment of the Japan Atomic Energy Research Institute (JAERI) from 2003 to 2006. He graduated from the University of Tokyo in 1969 and then joined JAERI. He received his Ph.D. from the University of Tokyo in 1982. His research fields include thermal hydraulics and the plasma-facing materials and components of fusion reactors. He supervised the Japanese engineering activities for ITER, which is now under construction in France as a joint project of its seven parties: China, the EU, India, Japan, Korea, Russia, and the USA.

Abstract

The High Performance Computing Infrastructure (HPCI) has been established in Japan as a platform for the integrated use of high performance computer systems, including the K computer. HPCI currently integrates 12 systems to provide hierarchical capability, with the K computer as the flagship. Other supercomputers serve as second-layer systems, which play various unique roles. All the computer systems are connected via a high speed network and are operated under the same policy to realize common operational features such as single sign-on. The mission of RIST includes: (1) calls for proposals, (2) screening and awarding, and (3) user support. Roughly speaking, RIST in Japan is like PRACE in Europe. In his presentation, he will describe the evaluation process, including recent results, supporting activities for shared use, promotion activities for industrial use, and publication management.

TOWARDS EXASCALE: THE GROWING PAINS OF INDUSTRY STRENGTH CAE SOFTWARE

Lee Margetts, Research Computing Services, The University of Manchester

Lee Margetts is an expert in large-scale computational engineering. He has more than 15 years' experience in HPC and started his career as a consultant in the UK National HPC Service, CSAR (1998-2008). Lee currently holds various posts at the University of Manchester, is a Visiting Research Fellow at the Oxford e-Research Centre, University of Oxford, and an Affiliate Research Fellow at the Colorado School of Mines, USA. He leads the open source parallel finite element analysis project ParaFEM and is author of the accompanying text book, “Programming the Finite Element Method”. He is an investigator on the EU FP7 European Exascale Software Initiative and his ambition is for ParaFEM to be one of the first engineering applications with Exascale capability. Lee has a particular interest in HPC technology transfer between academia and industry, holding an MBA with distinction in International Engineering Business Management. He contributes to international activities through his roles as Chairman of the NAFEMS HPC Technical Working Group, elected member of the PRACE Industrial Advisory Committee and academic lead on EPSRC’s UK-USA HPC Network.

Abstract

In the Exascale community, there is some concern that commercial computer aided engineering (CAE) software will not be ready to take advantage of Exascale systems when they eventually come online. This talk will consider this issue from three perspectives: (i) industry end users whose business will benefit from early access to Exascale computers; (ii) independent software vendors who develop and market engineering software; and (iii) open source software initiatives led by universities and government laboratories. Each of these stakeholders has a unique set of needs and motivational drivers that, if linked together in a simple and elegant way, can lead to the development, use and exploitation of CAE software on Exascale systems. Lee Margetts will draw upon his academic experience as leader of an open source software project and his business insight through roles at NAFEMS and PRACE to set out a possible roadmap towards Exascale CAE.

IMPLEMENTING THE EUROPEAN STRATEGY ON HIGH PERFORMANCE COMPUTING

Augusto Burgueño Arjona, European Commission, DG Communications Networks, Content and Technology

Augusto Burgueño Arjona is currently Head of Unit “eInfrastructure” at the European Commission Directorate General for Communications Networks, Content and Technology. His unit coordinates the implementation of the European HPC strategy as well as the deployment of European research eInfrastructures such as GÉANT, PRACE, EUDAT, OpenAIRE and the European Grid Initiative (EGI). Previously he served as Head of Unit “Finance” in the Directorate General for Communications Networks, Content and Technology at the European Commission, and as Head of the inter-Directorate General Task Force “IT Planning Office” at the European Commission.

Abstract

HPC is a strategic tool to transform big scientific, industrial and societal challenges into innovation and business opportunities. HPC is essential for modern scientific advances (e.g. understanding the human brain or climate change) as well as for industry to innovate in products and services. “Traditional” areas like manufacturing, oil & gas and the pharmaceutical industry consider HPC indispensable for innovation, but emerging applications (like smart cities, personalized medicine, or cosmetics) also benefit from the use of HPC and its convergence with Big Data and clouds. The most advanced countries in the world recognise this strategic role of HPC and have announced ambitious plans for building exascale technology and deploying state-of-the-art supercomputers in the coming years. Europe has the technological know-how and market size to play a leading role in all areas: HPC technologies and systems, services and applications. The European HPC Strategy in Horizon 2020 combines three elements in an integrated and synergetic way: (a) developing the next generations of HPC towards exascale; (b) providing access to the best facilities and services for both industry and academia; and (c) achieving excellence in applications. The Commission has taken several ambitious steps to support this strategy, such as the establishment of a contractual Public Private Partnership (PPP) on HPC. Support to the HPC strategy is expected to continue in future Horizon 2020 work programmes.

Scientific Sessions

Wednesday 27 May, 2015

EUROPEAN RESEARCH COUNCIL PROJECTS

Chair: Kenneth Ruud, Arctic University of Norway

Kenneth Ruud received his PhD in theoretical chemistry from the University of Oslo in 1998. He spent two years as a postdoc at the University of California, San Diego/San Diego Supercomputer Center (USA), before moving to Tromsø (Norway) in 2001. Since 2002 he has been a professor of theoretical chemistry at the University of Tromsø – The Arctic University of Norway, and he has published more than 260 papers in the field of computational chemistry. His main research interests are the development and application of quantum-mechanical methods for understanding the interaction between molecules and electromagnetic fields, with a particular focus on molecular spectroscopy. Since 2011 he has held an ERC Starting Grant entitled “SURFSPEC – Theoretical Spectroscopy of Surfaces and Interfaces”. He is a member of the Centre for Theoretical and Computational Chemistry, a Norwegian Centre of Excellence. He has been a member of the PRACE Scientific Steering Committee since 2010; from 2011 to 2012 he was Chairman of the PRACE Access Committee, and from 2013 to 2014 Chairman of the PRACE Scientific Steering Committee.

COMPUTATIONAL CHALLENGES IN SKELETAL TISSUE ENGINEERING

Liesbet Geris, Biomechanics Research Unit, University of Liège

Liesbet Geris is professor in Biomechanics and Computational Tissue Engineering at the Department of Aerospace and Mechanical Engineering of the University of Liège and part-time associate professor at the Department of Mechanical Engineering of KU Leuven, Belgium. From KU Leuven, she received her MSc degree in Mechanical Engineering in 2002 and her PhD degree in Engineering in 2007, both summa cum laude. In 2007 she worked as a postdoctoral researcher at the Centre for Mathematical Biology of Oxford University. Her research interests encompass the mathematical modeling of bone regeneration during fracture healing, implant osseointegration and tissue engineering applications. The phenomena described in these mathematical models reach from the tissue level, over the cell level, down to the molecular level. She works in close collaboration with experimental and clinical researchers from the University Hospitals Leuven, focusing on the development of mathematical models of impaired healing situations and the in silico design of novel treatment strategies. She is scientific coordinator of Prometheus, the skeletal tissue engineering division of KU Leuven. Her research is financed by European, regional and university funding. In 2011 she was awarded an ERC Starting Grant to pursue her research. Liesbet Geris is the author of 43 ISI-indexed journal papers, 8 book chapters and over 80 full conference proceedings and abstracts.

Abstract

Tissue engineering (TE), the interdisciplinary field combining biomedical and engineering sciences in the search for functional man-made organ replacements, still faces key issues with the quantity and quality of the generated products. Protocols followed in the lab are mainly trial-and-error based, requiring a huge amount of manual intervention and lacking clear early-time-point quality criteria to guide the process. As a result, these processes are very hard to scale up to industrial production levels. In many engineering sectors, in silico modeling is used as an inherent part of the R&D process. In this talk I will discuss a number of (compute intensive) examples demonstrating the contribution of in silico modeling to the bone tissue engineering process. The first example is the simulation of bioreactor processes. Currently, only a limited number of online read-outs are available to monitor and control the biological processes taking place inside the bioreactor. We developed a computational model of neotissue growth inside the bioreactor that, in combination with the experimental read-outs, allows for a quantification of the processes taking place inside the bioreactor. Scaffold geometry (curvature-based growth), fluid flow (Brinkman equation) and nutrient supply were simulated to affect the growth rate of the neotissue. The model captured the experimentally observed growth patterns qualitatively and quantitatively. Additionally, the model was able to calculate the micro-environmental cues (mechanical and nutrient-related) that cells experience both at the neotissue-free-flow interface and inside the neotissue. The second example pertains to the assessment of the in vivo bone regeneration process. As normal fractures lead to successful healing in 90-95% of cases, people in need of tissue engineering solutions often suffer from severe trauma, genetic disorders or comorbidities. One of these genetic disorders impacting the bone regeneration process is neurofibromatosis type I. Starting from an established computational model of bone regeneration, we examined the effect of the NF1 mutation on bone fracture healing by altering the parameter values of eight key factors which describe the aberrant cellular behavior of NF1-affected cells. We show that the computational model is able to predict the formation of a non-union and captures the wide variety of non-union phenotypes observed in patients. A sensitivity analysis by “Design of Experiments” was used to identify the key contributors to the severity of the non-union.
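
For reference, a minimal sketch of the Brinkman flow description mentioned in the abstract (standard textbook form in my own notation, not necessarily the exact formulation of the model above), which interpolates between free-channel Stokes flow and Darcy-like flow through the growing neotissue:

\mu_e \nabla^2 \mathbf{u} - \frac{\mu}{k}\,\mathbf{u} - \nabla p = 0, \qquad \nabla \cdot \mathbf{u} = 0

where u is the fluid velocity, p the pressure, \mu the dynamic viscosity, \mu_e an effective viscosity and k the permeability of the neotissue domain.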

Unfortunately Liesbet Geris could not be with us at PRACEdays15 due to a power-cut at Brussels Airport.

HPC FOR COMBUSTION INSTABILITIES IN GAS TURBINES: THE ERC INTECOCIS PROJECT IN TOULOUSE

Gabriel Staffelbach, CERFACS

Gabriel Staffelbach is a senior researcher at the Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS). He has been working on numerical simulation of combustion and high performance computing since 2002 and is an active user of most HPC systems available to the scientific community. His expertise ranges from numerical simulation and computational science to combustion instabilities.

Abstract

Combustion accounts for 80% of the world's energy. Ever-progressing trends in design and research have yielded, and continue to yield, spectacular changes for turbines, cars, rocket propulsion, etc. The joint ERC project INTECOCIS, coordinated by CNRS (DR14 Midi-Pyrénées) and led by two research centres, the Institut de Mécanique des Fluides de Toulouse and CERFACS, aims at introducing recent progress in the field of High Performance Computing (HPC) for combustion simulation into studies of combustion instabilities. The project integrates experimental and industrial applications to build and validate tools designed to predict combustion instabilities using modern high performance computing. This presentation will highlight the recent progress of the project.

RUNTIME AWARE ARCHITECTURES

Mateo Valero, Director at Barcelona Supercomputing Center (BSC)

Mateo Valero is a professor in the Computer Architecture Department at UPC in Barcelona. His research interests focus on high performance architectures. He has published approximately 600 papers, has served in the organization of more than 300 international conferences and has given more than 400 invited talks. He is the director of the Barcelona Supercomputing Center, the national supercomputing centre of Spain. Dr. Valero has been honoured with several awards, among them the Eckert-Mauchly Award, the Harry Goode Award, the ACM Distinguished Service Award, the “King Jaime I” award in research, and two Spanish National Awards on Informatics and on Engineering. He has been named Honorary Doctor by the Universities of Chalmers, Belgrade and Veracruz in Mexico and by the Spanish Universities of Las Palmas de Gran Canaria, Zaragoza and Complutense in Madrid. He is a “Hall of the Fame” member of the IST European Program, selected as one of the 25 most influential European researchers in IT during the period 1983-2008 (Lyon, November 2008). Professor Valero is an academic member of the Royal Spanish Academy of Engineering, of the Royal Spanish Academy of Doctors, of the Academia Europaea, and of the Academy of Sciences in Mexico, and a Corresponding Academic of the Spanish Royal Academy of Science. He is a Fellow of the IEEE, a Fellow of the ACM and an Intel Distinguished Research Fellow.

Abstract

In the last few years, the traditional ways to keep the increase of hardware performance to the rate predicted by the Moore’s Law have vanished. When uni-cores were the norm, hardware design was decoupled from the software stack thanks to a well defined Instruction Set Architecture (ISA). This simple interface allowed developing applications without worrying too much about the underlying hardware, while hardware designers were able to aggressively exploit instruction-level parallelism (ILP) in superscalar processors. With the eruption of multi-cores and parallel applications, this simple interface started to leak. As a consequence, the role of decoupling again applications from the hardware was moved to the runtime system. Efficiently using the underlying hardware from this runtime without exposing its complexities to the application has been the target of very active and prolific research in the last years. Current multi-cores are designed as simple symmetric multiprocessors (SMP) on a chip. However, we believe that this is not enough to overcome all the problems that multi-cores already have to face. It is our position that the runtime has to drive the design of future multicores to overcome the restrictions in terms of power, memory, programmability and resilience that multi-cores have. In this talk, we introduce a first approach towards a Runtime-Aware Architecture (RAA), a massively parallel architecture designed from the runtime’s perspective.
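
To make the idea of runtime-managed task parallelism concrete, here is a deliberately tiny, illustrative sketch in Python (not BSC's runtime or any real programming model; all names are invented for illustration). The application only declares tasks and their data dependencies; a small runtime layer decides when each task is launched on the available workers.

from concurrent.futures import ThreadPoolExecutor

class TinyTaskRuntime:
    """Toy runtime: the application declares tasks and dependencies,
    the runtime decides when each task actually runs on the workers."""
    def __init__(self, workers=4):
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.futures = {}                     # task name -> Future

    def submit(self, name, fn, deps=()):
        def wrapper():
            # Gather results of dependencies (blocks until they are done).
            args = [self.futures[d].result() for d in deps]
            return fn(*args)
        self.futures[name] = self.pool.submit(wrapper)

    def result(self, name):
        return self.futures[name].result()

# "Application" code: a tiny task graph, two producers feeding one consumer.
rt = TinyTaskRuntime()
rt.submit('a', lambda: sum(range(1000)))
rt.submit('b', lambda: sum(range(2000)))
rt.submit('c', lambda x, y: x + y, deps=('a', 'b'))
print(rt.result('c'))
rt.pool.shutdown()

In a real task-based runtime, tasks are launched only once their inputs are ready rather than parking a worker thread on them; the simplification here just keeps the example short.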

HOT LATTICE QUANTUM CHROMODYNAMICS

Chair: Sándor Katz, Institute for Theoretical Physics, Eötvös Loránd University, Budapest

Sándor Katz is a professor of physics and head of the Institute for Theoretical Physics at Eötvös Loránd University, Budapest. His research is focused on lattice calculations of Quantum Chromodynamics, the theory of strong interactions. Together with colleagues in the same group and in collaboration with other institutions, he tries to understand the properties of hot strongly interacting matter, such as the phase transition of hadrons to the quark-gluon plasma and its equation of state.

SIMULATION OF STRONGLY INTERACTING MATTER FROM LOW TO HIGH TEMPERATURES

Stefan Krieg, Forschungszentrum Jülich, Germany

Stefan Krieg received his PhD in physics from Wuppertal University. He held Post-Doctoral positions at Forschungszentrum Jülich, Wuppertal University, and MIT. He is currently responsible for the Simulation Laboratory Nuclear and Particle Physics at Jülich Supercomputing Centre, Forschungszentrum Jülich. His research is focussed on Lattice Quantum Chromodynamics.

Abstract

The rapid transition from the quark-gluon-plasma ‘phase’ to the hadronic phase in the early universe and the QCD phase diagram are subjects of intense study in present heavy-ion experiments (LHC@CERN, RHIC@BNL, and the upcoming FAIR@GSI). This transition can be studied in a systematic way in Lattice QCD. We report on a continuum-extrapolated result for the equation of state (EoS) of QCD with and without a dynamical charm degree of freedom. With these results, we will be able to close the gap between the low temperature region, which can be described by the hadron resonance gas model, and the high temperature region, which can be described by (hard thermal loop) perturbation theory. For all our final results, the systematics are controlled, quark masses are set to their physical values, and the continuum limit is taken using at least three lattice spacings.
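
As an illustration of the continuum limit mentioned above (assuming, as is common for the improved lattice actions used in such studies, that the leading discretization errors scale with the squared lattice spacing), an observable O measured at lattice spacing a is fitted as

O(a) = O_{\mathrm{cont}} + c_2\, a^2 + \mathcal{O}(a^4)

so that results at three or more lattice spacings allow a controlled extrapolation a \to 0.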

FROM QUARK NUMBER FLUCTUATIONS TO THE QCD PHASE DIAGRAM

Christian Schmidt, University of Bielefeld

Christian Schmidt received his PhD from Bielefeld University in 2003. After research appointments at the University of Wuppertal, Brookhaven National Laboratory and the Frankfurt Institute for Advanced Studies, he is now back in Bielefeld, where he currently holds a senior researcher position. He is an expert in the field of QCD simulations at nonzero temperature and density and has pioneered investigations of the QCD phase diagram through Taylor expansion. He is a member of two large international collaborations, HotQCD and BNL-Bielefeld-CCNU.

Abstract

For the first few microseconds after the Big Bang, the universe was filled with a plasma of strongly interacting quarks and gluons, the QGP. Today, small droplets of QGP are created in heavy ion experiments. Recently, a large experimental effort has been undertaken to explore the phase diagram of QCD through a beam energy scan program of heavy ion collisions. We will review lattice QCD computations of conserved charge fluctuations that are performed in order to make contact with these experiments. We then show that a comparison of fluctuations of conserved hadronic charges from lattice QCD with experimental results allows us to position the so-called freeze-out points on the QCD phase diagram. Computational challenges, which boil down to a tremendous number of inversions of large sparse matrices, will be highlighted. Here, the method of choice is the iterative conjugate gradient solver, which in our case is bandwidth limited. On GPUs, for example, we approach this problem by exposing more parallelism to the accelerator by inverting multiple right-hand sides at the same time.
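
To illustrate the multiple right-hand-side idea in the simplest possible setting, here is a hedged NumPy sketch (not the speakers' production GPU code): a conjugate gradient solver that advances several right-hand sides per sweep over the sparse matrix, so the matrix data is streamed from memory once per iteration instead of once per right-hand side.

import numpy as np
import scipy.sparse as sp

def block_cg(A, B, tol=1e-10, max_iter=5000):
    """Conjugate gradients for A X = B with several right-hand sides (columns of B)."""
    X = np.zeros_like(B)
    R = B - A @ X                                  # residuals, one column per RHS
    P = R.copy()
    rs_old = np.einsum('ij,ij->j', R, R)           # per-column |r|^2
    for _ in range(max_iter):
        AP = A @ P                                 # one sweep over A feeds all columns
        alpha = rs_old / np.einsum('ij,ij->j', P, AP)
        X += P * alpha
        R -= AP * alpha
        rs_new = np.einsum('ij,ij->j', R, R)
        if np.all(np.sqrt(rs_new) < tol):          # a real solver would drop converged columns
            break
        P = R + P * (rs_new / rs_old)
        rs_old = rs_new
    return X

# Toy usage: a 1D Laplacian (symmetric positive definite) with 4 right-hand sides.
n, k = 200, 4
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
B = np.random.rand(n, k)
X = block_cg(A, B)
print(np.max(np.abs(A @ X - B)))                   # residual should be tiny

The point made in the abstract is the same one this sketch exploits: when the solver is bandwidth limited, reusing each pass over the matrix for several solution vectors buys extra arithmetic essentially for free.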

LATTICE SIMULATIONS OF STRONG INTERACTIONS IN BACKGROUND FIELDS

Massimo D’Elia, Chairman, University of Pisa & INFN, Italy

Massimo D’Elia received his degree in Physics from the Scuola Normale Superiore and the University of Pisa, and his PhD in Physics from the University of Pisa, in 1994 and 1998 respectively. He was an Associate Researcher at the University of Cyprus and at ETH Zurich in 1997 and 1998, a Postdoctoral Fellow at the University of Pisa in 1999, and an Assistant Professor in Theoretical Physics at the University of Genoa (2000-2011). Since 2011 he has been Associate Professor in Theoretical Physics at the Physics Department of the University of Pisa. He is the author of 170 publications in refereed journals and conference proceedings. His research is dedicated to the study of fundamental interactions, with a focus on the investigation of the properties of strongly interacting matter in extreme conditions by lattice QCD simulations.

Abstract

The study of strong interactions in the presence of external sources, such as electromagnetic background fields or chemical potentials, offers the possibility to investigate the properties of strongly interacting matter in unusual conditions, which may be relevant to many contexts, going from heavy ion experiments to the physics of the Early Universe. Due to the non-perturbative nature of the problem, numerical lattice simulations are the ideal tool to obtain answers based on the first principles of Quantum Chromodynamics (QCD), which is the theory of strong interactions. The last few years have seen a considerable and steady progress in the field. Because of the extremely high computational needs of the problem, this has been possible also due to a matching development in HPC infrastructures. In this talk I will review such progress, with a focus on results regarding the physics of QCD in strong magnetic fields.

COMPUTATIONAL DYNAMICS

Chair: Petros Koumoutsakos, Computational Science Lab, ETH Zürich, Switzerland

Petros Koumoutsakos holds the Chair of Computational Science at ETH Zurich. He received his Diploma (1986, National Technical University of Athens) and Master’s (1987, University of Michigan, Ann Arbor) in Naval Architecture. He received a Master’s (1988) and PhD in Aeronautics and Applied Mathematics (1992) from the California Institute of Technology. He was an NSF fellow in parallel computing (1992-1994, Center for Research on Parallel Computation) at the California Institute of Technology and a research associate (1994-1997) with the Center for Turbulence Research at NASA Ames/Stanford University. He was an assistant professor of Computational Fluid Dynamics (1997-2000) at ETH Zurich and the founding director of the ETH Zurich Computational Laboratory (2000-2007). He was full professor of Computational Science from 2000 to 2011 in the Department of Computer Science at ETH Zurich and Director of the Institute of Computational Science (2001-2005). Petros Koumoutsakos has published 1 monograph, 3 edited volumes, 8 book chapters and over 170 peer-reviewed articles. He is an elected Fellow of the American Physical Society and Fellow of the American Society of Mechanical Engineers. He is also a recipient of the Advanced Investigator Award of the European Research Council (2013) and of the ACM Gordon Bell Prize in 2013.

COMPUTATIONAL CHALLENGES OF FAST DYNAMICS OF FLOWS WITH PHASE INTERFACES FOR BIOMEDICAL APPLICATIONS

Nikolaus Adams, Lehrstuhl für Aerodynamik und Strömungsmechanik - TU München, Germany

Nikolaus Adams received a Doctorate in Mechanical Engineering from the Technische Universität München, Germany, and has held the Chair of Aerodynamics and Fluid Mechanics at the Technische Universität München since 2004. He has nearly 300 publications and was a recipient of the Gordon Bell Prize 2013. His contributions are in flow physics, modelling and simulation of multi-scale flows and complex fluids, and the dissemination of fundamental research into applications. Flow physics: shock-turbulence interaction, shock-interface interaction, Richtmyer-Meshkov instabilities, cavitating flows, phase transition, real-gas mixing and combustion, fluid-structure interaction, contact and interface phenomena. Modelling and simulation: high-resolution methods, large-eddy simulation models, physically consistent scale-separation models, smoothed-particle hydrodynamics, interface tracking and capturing methods, weakly compressible approaches, large-scale simulations. Applications: cavitation and erosion prediction, nanoparticle production, microfluidic generation of micro-droplets, chemical propulsion, buffeting in propulsion systems, biofluid mechanics, morphing structures for small aircraft and wind energy, automotive aerodynamics.

Abstract

The simulation of two-phase flows with compressibility effects and turbulence is one of the current challenges for modern numerical models in predictive simulations. Different approaches promise the most efficient route to a solution for different application scenarios. The interaction of phase interfaces with shock waves and the generation of shock waves by rapid phase change are essential flow phenomena for biomedical applications. In this talk we present recent developments in the modeling and simulation of compressible flows with interfaces, and address efficient computational approaches for interface tracking, multi-resolution approaches, and new physically motivated approaches for dynamic load balancing.
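
As one common example of the interface-capturing approaches referred to above (a generic level-set formulation, not necessarily the specific method used in this work), the phase boundary is carried as the zero contour of a scalar field \phi advected with the local flow velocity u:

\frac{\partial \phi}{\partial t} + \mathbf{u} \cdot \nabla \phi = 0

with the interface located where \phi = 0; sharp-interface and multi-resolution schemes then concentrate computational effort near that contour.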

HIGH ORDER, SCALE RESOLVING MODELLING FOR HIGH REYNOLDS NUMBER RACING CAR AERODYNAMICS

Spencer Sherwin, Faculty of Engineering, Department of Aeronautics at Imperial College London

Spencer Sherwin is the McLaren Racing/Royal Academy of Engineering Research Chair in the Department of Aeronautics at Imperial College London. He received his MSE and PhD from the Department of Mechanical and Aerospace Engineering at Princeton University. During his time at Imperial he has maintained a successful research programme on the development and application of high order spectral/hp element techniques, with particular application to separated unsteady aerodynamics, biomedical flows and understanding flow physics through instability analysis. Sherwin’s research group also develops and distributes the open-source spectral/hp element package Nektar++, which has been applied to direct numerical simulation and stability analysis for a range of applications, including vortex flows of relevance to offshore engineering and vehicle aerodynamics, and biomedical flows associated with arterial atherosclerosis. He has published over 120 peer-reviewed papers in international journals covering topics from numerical analysis to applied and fundamental fluid mechanics, and has co-authored a highly cited book on the spectral/hp element method. Currently he is an associate director of the EPSRC/Airbus funded Laminar Flow Control Centre and chair of the EPSRC Platform for Research in Simulation Methods (PRISM) at Imperial College London.

Abstract

The use of computational tools in industrial flow simulations is well established. As engineering design continues to evolve and become ever more complex, there is an increasing demand for more accurate transient flow simulations. Using existing methods, it can be extremely costly in computational terms to achieve sufficient accuracy in these simulations. Accordingly, advanced engineering industries, such as the Formula 1 (F1) industry, are looking to academia to develop the next generation of techniques which may provide a mechanism for more accurate simulations without excessive increases in cost. Currently, the most established methods for industrial flow simulations, including F1, are based upon the Reynolds-Averaged Navier-Stokes (RANS) equations which are at the heart of most commercial codes. There is naturally an implicit assumption in this approach of a steady-state solution. In practice, however, many industrial problems involve unsteady or transient flows which the RANS techniques are not well equipped to deal with. In order to address the increasing demand for more physical models in engineering design, commercial codes do include unsteady extensions such as URANS (Unsteady RANS) and Detached Eddy Simulation (DES). Unfortunately, even on high performance computing facilities these types of computational models require significantly more execution time which, to date, has not been matched with a corresponding increase in accuracy of a level sufficient to justify these costs, particularly when considering the computing restrictions the F1 rules impose on race car design. Alternative high order transient simulation techniques using spectral/hp element discretisations have been developed within research and academic communities over the past few decades. These methods have generally been applied to more academic transient flow simulations with a significantly reduced level of turbulence modelling. As the industrial demand for transient simulations becomes greater and the computer “power per $” improves, alternative computational techniques such as high order spectral/hp element discretisations, not yet widely adopted by industry, are likely to provide a more cost effective tool from the perspective of computational time for a high level of accuracy. In this presentation we will outline the demands imposed on computational aerodynamics within the highly competitive F1 race car design environment and discuss the next generation of transient flow modelling that the industry hopes will impact this design cycle.
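
Schematically (generic notation, not tied to the specifics of Nektar++), a spectral/hp element discretisation represents the solution as a piecewise polynomial expansion over the elements of the mesh:

u^{\delta}(\mathbf{x}, t) = \sum_{e=1}^{N_{el}} \sum_{n=0}^{P} \hat{u}^{e}_{n}(t)\, \phi^{e}_{n}(\mathbf{x})

so accuracy can be improved either by refining the mesh (h) or by raising the polynomial order P (p), which is what gives the method its high-order, low-dissipation character for transient flows.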

SALVINIA-INSPIRED SURFACES IN ACTION

Carlo Massimo Casciola, University of Rome “La Sapienza”

Carlo Massimo Casciola is presently full professor of Fluid Dynamics at the Mechanical and Aerospace Engineering Department of Sapienza University of Rome, where he leads a research group working on the fluid dynamics of complex flows. The modus operandi of the group is chiefly theoretical and numerical, oriented to fundamental understanding and numerical modeling. This approach has led the group members to collaborate with scientists belonging to several neighboring disciplines, such as mathematics, physics, material science, chemistry, and biology. The resulting multidisciplinary and multi-scale expertise has already proved successful in dealing with such diverse problems as aerodynamics, turbulence, combustion, drag reduction, particle transport, multiphase flows, and interfacial phenomena like wetting, liquid slippage, and heterogeneous bubble nucleation. Since the 2013 ERC Advanced Grant “BIC: Following Bubbles from Inception to Collapse”, most of his activity has focused on different fundamental aspects of cavitation.

Abstract

Surfaces exhibiting extraordinary features exist in nature. A remarkable example is Salvinia molesta. This water fern, due to the presence of small hydrophilic patches on top of rough hydrophobic surfaces, is able to retain air pockets when submerged, stabilizing the resulting Cassie state against positive pressure fluctuations while, at the same time, preventing bubble detachment. A similar strategy is adopted by certain insects and spiders (e.g. Elmis maugei and Dolomedes triton) to breathe underwater, thanks to a stabilized air layer, the so-called plastron. However, since the Cassie-Wenzel transition (CWT) is a rare event beyond the reach of brute-force computations, the mechanism of wetting remains elusive and it is still difficult, if not impossible, to predict the transition from the Cassie to the fully wet Wenzel state. Using specialized techniques, it has recently been demonstrated that molecular dynamics is indeed capable of describing the Cassie-Wenzel transition in a simple model system. However, going beyond these proof-of-concept simulations, with the goal of reproducing real hydrophobic coatings and the complex morphology of natural surfaces, requires combining a smart theoretical approach with a boost in computational resources. We discuss here the results of massively parallel simulations on top-notch machines, combined with advanced statistical mechanics techniques, aimed at mimicking the Salvinia leaves and revealing their strategies for air trapping. As will be shown, the results, obtained by exploiting the full potential of the Tier-0 computer architectures made available through the WETMD project allocated by PRACE, have the potential to inspire next-generation biomimetic superhydrophobic surfaces, as well as to provide benchmarks for continuum models of wetting and cavitation.
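
For readers unfamiliar with the two wetting states named above, the classical macroscopic relations for the apparent contact angle (standard textbook forms, quoted here for context rather than results of the talk) are

\cos\theta_W = r\,\cos\theta_Y \quad \text{(Wenzel, fully wet)}, \qquad \cos\theta_{CB} = f\,(\cos\theta_Y + 1) - 1 \quad \text{(Cassie-Baxter, air pockets retained)}

where \theta_Y is the Young contact angle on the flat material, r the roughness ratio, and f the fraction of the surface in contact with the liquid.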

MOLECULAR SIMULATIONS

Chair: Ilpo Vattulainen, Department of Physics at the Tampere University of Technology, Finland

Ilpo Vattulainen is working as a professor at the Department of Physics, Tampere University of Technology (Finland). He is the director of the Biological Physics group (comprised of about 45 people), which focuses on molecular-scale simulations of biological systems, with a focus on lipids, proteins, and carbohydrates associated with cell membranes. He is the vice-chair in the Center of Excellence in Biomembrane Research chosen by the Academy of Finland for 2014-2019. He is also the principal investigator in an ERC Advanced Grant project, acts as the chair of the Customer Panel at the CSC – IT Center for Science (Espoo, Finland), and is a member of the Executive Committee of the European Biophysical Societies’ Association. His group has been an active member of PRACE, CECAM, and other computational activities.

EFFICIENT LENNARD-JONES LATTICE SUMMATION TECHNIQUES FOR LIPID BILAYERS

Erik Lindahl, Department of Biochemistry and Biophysics at Stockholm University

Erik Lindahl holds a PhD in Theoretical Biophysics from KTH, Stockholm, Sweden. He is currently Professor of Biophysics in the Department of Biochemistry & Biophysics, Stockholm University, and Professor of Theoretical Biophysics at the KTH Royal Institute of Technology. He was appointed Senior Research Fellow of the Swedish Research Council on Bioinformatics. He was and is principal investigator in numerous national and European research projects. He serves on the board of directors of the Swedish National Infrastructure for Computing. He is vice director of the Swedish e-Science Research Center and a member of the PRACE Scientific Steering Committee. Erik Lindahl has authored over 85 scientific publications.

Abstract

The introduction of particle-mesh Ewald (PME) lattice summation for electrostatics 20 years ago was a revolution for membrane simulations. It got rid of horrible cutoff effects, and removed the electrostatics cutoff’s influence on important properties such as area and volume per lipid. However, over the last decade it has become increasingly obvious that the Lennard-Jones cutoff is also highly problematic. Dispersion corrections are not sufficient for membranes that are neither isotropic nor homogeneous – altering the cutoff will still alter properties. Here I will present a new, highly efficient and parallel technique for Lennard-Jones PME (LJ-PME) that is part of GROMACS version 5. We have solved the historical problem with Lorentz-Berthelot combination rules in lattice summation by introducing a series of approximations, first by using geometric combination properties in reciprocal space, and now also by correcting for this difference in the direct-space terms. Not only does this improve molecular simulation accuracy by almost an order of magnitude, but it also achieves absolute LJ-PME simulation performance that is an order of magnitude faster than alternatives – in many cases it is within 10% of the previous cutoff performance in GROMACS.
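
For context, the combination rules at issue (standard definitions for the Lennard-Jones parameters of unlike atom pairs; the novelty in the talk is how they are handled in the lattice sum, not the rules themselves) are

Lorentz-Berthelot: \sigma_{ij} = \tfrac{1}{2}\,(\sigma_{ii} + \sigma_{jj}), \quad \epsilon_{ij} = \sqrt{\epsilon_{ii}\,\epsilon_{jj}}

Geometric: \sigma_{ij} = \sqrt{\sigma_{ii}\,\sigma_{jj}}, \quad \epsilon_{ij} = \sqrt{\epsilon_{ii}\,\epsilon_{jj}}

The geometric rule makes the pair coefficients separable into per-atom factors, which is what allows the reciprocal-space (lattice) part of the Lennard-Jones sum to be evaluated efficiently; the small difference with respect to Lorentz-Berthelot is then corrected in the direct-space terms, as described above.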

ON THE ACTIVATION AND MODULATION OF VOLTAGE GATED ION CHANNELS

Mounir Tarek, CNRS & Université de Lorraine

Mounir Tarek is a Director of Research (DR2) at CNRS, UMR 7565, CNRS-University of Lorraine, and Vice-Chair of the SRSMC department (Structure and Reactivity of Complex Molecular Systems), which counts over 50 scientists. He is a member of the steering committee of the doctoral school SESAMES, coordinator of the Physics Section of the ERASMUS exchange programme for the University of Lorraine, and a member of the Scientific Management Committee of the International Associated Laboratory EBAM, which brings together eight major labs from France and Slovenia focusing on electroporation-based technologies and treatments. His research involves the use of computational methods to study membranes, proteins, ion channels and membrane transport proteins. The overall aim is to understand the relationship between the (dynamic) structure and the physiological function of membrane proteins. His group uses molecular dynamics approaches to explore the conformational dynamics of proteins and to relate their dynamical properties to biological function, as well as methods of structure prediction for proteins for which high-resolution structural data remain undetermined. The group has expertise in rigorous atomistic MD simulations of potassium channels, and in particular of the voltage-gated ones. Over the last 8 years, they have studied many aspects of these channels' properties, among which the effect of voltage sensor domain mutations, and modulation by lipids, on the function of the channel. They have recently been allocated 140 million core hours by PRACE to study the kinetics of voltage-gated channels.

Abstract

Excitable cells produce electrochemical impulses mediated by the transport of ions across their membrane through protein pores called ion channels. The most important family of channels propagating an electrical signal along the cell surface is the voltage-gated ion channel (VGC) family. VGCs are essential physiological effectors: they control cellular excitability and epithelial transport. A myriad of genetic mutations found in the genes encoding their subunits cause channel malfunction. These so-called channelopathies have been incriminated in a variety of diseases, including, among others, epilepsy, pain syndromes, migraines, periodic paralyses, cardiac arrhythmias, hypertension and hypotension. Contemporary research will benefit from new insights into the minute molecular details in play, which can contribute to a fine understanding of VGC function, as well as its modulation by the environment or its disruption by specific mutations. The working cycle of VGCs involves the complex conformational change of modular protein units called voltage sensor domains (VSDs). For over forty years, these rearrangements have been recorded as “gating” currents, whose intensities and kinetics are unique signatures of VGC function. In this talk we show that the atomistic description of VSD activation obtained by molecular dynamics simulations and free energy calculations is consistent with the phenomenological models adopted so far to account for the macroscopic observables measured by electrophysiology. Most importantly, by providing a connection between microscopic and macroscopic dynamics, our results pave the way for a deeper understanding of the molecular-level factors affecting VSD activation, such as lipid composition, amino acid mutations, and the binding of drug molecules or endogenous ligands.

EFFECT OF HYDROPHOBIC POLLUTANTS ON THE LATERAL ORGANIZATION OF BIOLOGICAL MODEL MEMBRANES

Luca Monticelli, Institut de Biologie et Chimie des Protéines CNRS

Luca Monticelli received a PhD in Chemistry from the University of Padova, Italy. He is at present a Senior Researcher at the Institut de Biologie et Chimie des Protéines (CNRS, UMR 5086), Lyon, France. His main interest is in membrane biophysics and in the interaction between biological membranes and nano-sized particles. In particular, he is interested in understanding how biological macromolecules (peptides, proteins) and man-made materials (carbon nanoparticles, industrial polymers, common pollutants) enter biological membranes and perturb their structure, dynamics, and function. The main tools in his research are molecular simulations at different levels, from ab initio to atomistic and coarse-grained models. His research is coupled to the development of theoretical and computational methodologies for the study of complex biological systems, in the spirit of multi-scale modeling. Since 2005 he has collaborated on the development of the MARTINI coarse-grained force field, which has emerged as one of the most powerful and most widely used models for studying the large-scale behavior of biological macromolecules. Dr. Monticelli is the author of over 50 peer-reviewed papers.

Abstract

Cell membranes have a complex lateral organization featuring domains with distinct composition, also known as rafts, which play an essential role in cellular processes such as signal transduction and protein trafficking. In vivo, perturbation of membrane domains (e.g., by drugs or lipophilic compounds) has major effects on the activity of raft-associated proteins and on signaling pathways. In live cells, membrane domains are difficult to characterize because of their small size and highly dynamic nature, so model membranes are often used to understand the driving forces of membrane lateral organization. Studies in model membranes have shown that some lipophilic compounds can alter membrane domains, but it is not clear which chemical and physical properties determine domain perturbation. The mechanisms of domain stabilization and destabilization are also unknown. Here we describe the effect of six simple hydrophobic compounds on the lateral organization of phase-separated model membranes consisting of saturated and unsaturated phospholipids and cholesterol. Using molecular simulations, we identify two groups of molecules with distinct behavior: aliphatic compounds promote lipid mixing by distributing at the interface between liquid-ordered and liquid-disordered domains; aromatic compounds, instead, stabilize phase separation by partitioning into liquid-disordered domains and excluding cholesterol from the disordered domains. We predict that relatively small concentrations of hydrophobic species can have a broad impact on domain stability in model systems, which suggests possible mechanisms of action for hydrophobic compounds in vivo.

HPC in Industry

Wednesday 27 May, 2015

HPC IN INDUSTRY IN IRELAND

Chair: Leo Clancy, Division Manager ITC, IDA Ireland

Leo heads IDA Ireland’s Technology, Consumer and Business Services division. IDA’s role is to market Ireland to multinational investors and to support established investors in Ireland. Prior to joining IDA, Leo worked in the telecommunications industry, spending 13 years with Ericsson in engineering and management roles. This was followed by more than four years leading the technology function for an Irish fibre communications company. Leo holds a degree in Electronics Engineering from the Dublin Institute of Technology.

SUBSURFACE IMAGING OF THE EARTH FOR EXPLORATION: METHODS AND HPC NEEDS

Sean Delaney, Tullow Oil

Seán Delaney is a computational physicist at Tullow Oil in Dublin, specialising in advanced imaging techniques, code development, and optimisation. Previously, he obtained his PhD in Astrophysics from the Dublin Institute for Advanced Studies, using computational methods to investigate particle acceleration in relativistic shocks. Subsequently, Seán worked for the Irish Centre for High-End Computing on a range of HPC topics, including visualisation, GPU programming, finite difference methods and others. www.linkedin.com/in/seandelaneyphd

Abstract

Imaging the subsurface of the Earth is a challenging task which is fundamental to determining resource location and management. Seismic wave propagation through the Earth is a key tool in geophysics and one of the best available methods for imaging the subsurface and studying physical processes in the Earth. Seismic imaging has moved from ray-based imaging operators to numerical solutions of wave equations, usually termed full wavefield imaging. This methodology requires high performance computing (HPC) resources. The most recently implemented tools for imaging are Reverse Time Migration (RTM) and Full Waveform Inversion (FWI). RTM imaging has been shown to be highly beneficial for imaging in complex geological regions, whilst FWI has been used to develop high resolution velocity models, which lead to better subsurface images. The practical implementation needs expertise in imaging and in running HPC applications to reduce the overheads, not just in CPU costs but in turn-around times for quicker business decisions. As ever more expensive imaging tools are continually being developed, the size of the data being recorded in the field has approached the petabyte scale per survey. Thus, there is a continuing need for HPC, not just for cutting edge imaging algorithms but also for updating traditional imaging codes. This presentation will discuss seismic imaging trends from a non-specialist point of view with a focus on industry applications. The overall message is that the seismic exploration industry is still pushing the upper barrier of computational geophysics on HPC resources for processing and imaging methods. To extract the maximum benefit and throughput and minimise CPU spend, a combination of cutting edge tools, advanced imaging specialists and geophysicists/physicists highly skilled in HPC is required. For more information, see www.tullowoil.com
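
As a rough sketch of what "full wavefield imaging" computes (a generic constant-density acoustic formulation in my notation; production codes differ in many details), RTM solves the wave equation forward in time for a source wavefield S and backward in time for a receiver wavefield R, and then correlates the two to form an image:

\frac{1}{c(\mathbf{x})^{2}} \frac{\partial^{2} p}{\partial t^{2}} - \nabla^{2} p = s(\mathbf{x}, t), \qquad I(\mathbf{x}) = \sum_{t} S(\mathbf{x}, t)\, R(\mathbf{x}, t)

where c is the velocity model and I the migrated image; FWI goes one step further and iteratively updates c itself by minimising the misfit between modelled and recorded data, which is what drives the compute and data volumes mentioned above.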

THE DNA DATA DELUGE – A PARALLEL COMPUTING APPROACH

Brendan Lawlor, NSilico

Brendan Lawlor is NSilico’s team leader in parallel computing and is a graduate of University College Cork and the Cork Institute of Technology. He is a practising software engineer with 25 years of experience in building enterprise systems in a variety of sectors and using a variety of platforms and technologies. His current roles include providing Software Process and Architecture services to commercial software companies. In this capacity he designs and maintains software development infrastructure, architects solutions for bespoke software systems, and implements those solutions using C++ and Java, and more recently Scala. He is also working towards a doctorate in Bioinformatics, with a view to putting the software engineering values he has acquired over his career to the service of this new and exciting field, while at the same time developing new skills in High Performance Computing and Big Data processing.

Abstract

A Formula 1 engine is powerful, highly engineered and capable of tremendous numbers of revolutions per second. To win races with such an engine, it is necessary to house it in a suitable chassis and make sure that it gets fuel quickly. Similarly, a fast low-level algorithm, developed with a deep understanding of the target processor, needs to be correctly housed and fed in order to convert that sheer power into a useful solution. NSilico is an Irish-based SME that develops software for the life sciences sector, providing bioinformatics and medical informatics systems to a range of clients. Processing the exponentially growing amount of genomic sequence data is a major challenge for those clients. The Formula 1 engine in question is a SIMD C-language implementation of the Smith-Waterman algorithm, presented as a PRACE SHAPE project at this conference last year. This talk outlines the technical challenges in harnessing this powerful implementation within a scalable, resilient service cluster at relatively low cost. The presented proof-of-concept solution uses the Scala language and the Akka framework to demonstrate how two primary abstractions – the Actor and the Stream – are suited to this task.
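
For readers unfamiliar with the engine being housed, the sketch below shows the core Smith-Waterman recurrence in plain scalar C; the scoring constants and test sequences are illustrative assumptions, and the implementation referred to in the talk is a SIMD-vectorised C version wrapped in Scala/Akka services, not this naive form.

```c
#include <stdio.h>
#include <string.h>

/* Naive scalar Smith-Waterman local alignment score (linear gap penalty):
 *   H[i][j] = max(0,
 *                 H[i-1][j-1] + (match or mismatch),
 *                 H[i-1][j]   - gap,
 *                 H[i][j-1]   - gap)
 * The best local alignment score is the maximum cell of H. */
#define MATCH     2
#define MISMATCH -1
#define GAP       2
#define MAXLEN  127   /* keep the DP table small for this illustration */

static int max4(int a, int b, int c, int d)
{
    int m = a > b ? a : b;
    m = m > c ? m : c;
    return m > d ? m : d;
}

static int smith_waterman(const char *a, const char *b)
{
    int n = (int)strlen(a), m = (int)strlen(b);
    if (n > MAXLEN || m > MAXLEN) return -1;   /* sketch only handles short sequences */

    int best = 0;
    static int H[MAXLEN + 1][MAXLEN + 1];      /* row 0 and column 0 stay zero */

    for (int i = 1; i <= n; ++i) {
        for (int j = 1; j <= m; ++j) {
            int sub = (a[i - 1] == b[j - 1]) ? MATCH : MISMATCH;
            H[i][j] = max4(0,
                           H[i - 1][j - 1] + sub,
                           H[i - 1][j] - GAP,
                           H[i][j - 1] - GAP);
            if (H[i][j] > best) best = H[i][j];
        }
    }
    return best;
}

int main(void)
{
    printf("score = %d\n", smith_waterman("ACACACTA", "AGCACACA"));
    return 0;
}
```

The dynamic-programming table makes the quadratic cost of every pairwise comparison explicit, which is why both vectorising the inner recurrence and scheduling many concurrent alignments across a cluster matter once genomic data volumes are involved.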

AN IRISH SME’S VIEW OF BIG DATA ANALYTICS

Dave Clarke, Asystec

Dave Clarke joined Asystec in 2014 as Chief Data Scientist. Dave is leading the development of the new Asystec Big Data division, as well as the new multi-technology Executive Briefing Centre in Limerick. Dave has worked in the IT industry for 20 years in software consulting, management consulting, project, programme and engineering team management, solutions architecture, technology evangelism and data science roles. Most recently Dave has been working with the EMEA start-up of Entercoms, a Dallas-based supply chain analytics company. Prior to this, he spent 14 years with EMC. This included working with Pivotal/Greenplum, the cornerstone division of EMC’s Big Data Analytics drive, where he spent his time consulting with senior business executives one-to-one and in large forums across the EMEA region. Before Greenplum, Dave worked in EMC’s Solutions Group leading large multi-disciplinary teams developing EMC Proven Solutions in infrastructure management for Microsoft, Oracle and Greenplum Data Warehouse Appliance systems. Dave holds a Bachelor of Science degree in Applied Mathematics and Computing from the University of Limerick and a Master of Science degree in Technology Management from University College Cork.

Abstract

The presentation will give a perspective on the relevance of Big Data Analytics to an Irish SME. With various assessments of where big data sits on Gartner’s Innovation Hype Cycle, Dave’s talk will review some of the key development tracks that big data has taken over the last five years. He will note some of the key challenges that he sees for Irish SMEs, notably the changing big data technology landscape and the need to collaborate with new people both internal and external to the organisation. These new stakeholders have diverse backgrounds and needs, ranging from DNA sequencing to supply chain administration to call centre service optimisation.

HPC IN INDUSTRY

Chair: Lee Margetts, Research Computing Services, The University of Manchester

Lee Margetts is an expert in large-scale computational engineering. He has more than 15 years’ experience in HPC and started his career as a consultant in the UK National HPC Service, CSAR (1998-2008). Lee currently holds various posts at the University of Manchester, is a Visiting Research Fellow at the Oxford e-Research Centre, University of Oxford, and an Affiliate Research Fellow at the Colorado School of Mines, USA. He leads the open-source parallel finite element analysis project ParaFEM and is an author of the accompanying textbook, “Programming the Finite Element Method”. He is an investigator on the EU FP7 European Exascale Software Initiative, and his ambition is for ParaFEM to be one of the first engineering applications with Exascale capability. Lee has a particular interest in HPC technology transfer between academia and industry, holding an MBA with distinction in International Engineering Business Management. He contributes to international activities through his roles as Chairman of the NAFEMS HPC Technical Working Group, elected member of the PRACE Industrial Advisory Committee, and academic lead on EPSRC’s UK-USA HPC Network.

ON THE IMPACT OF AUTOMATIC PARALLELIZATION IN TECHNICAL COMPUTING FOR SCIENCE AND INDUSTRY

Manuel Arenaz, University of A Coruña & CEO of Appentra Solutions

Manuel Arenaz is CEO at Appentra Solutions and professor at the University of A Coruña (Spain). He holds a PhD in Computer Science from the University of A Coruña (2003) on advanced compiler techniques for automatic parallelization of scientific codes. His specialty is compiler techniques for the automatic extraction of parallelism and the automatic generation of parallel-equivalent code for a variety of multi-/many-core computer systems. He has experience in the parallelization of a wide range of numerical methods using the main parallel programming standards (e.g., MPI, OpenMP, OpenACC, vectorization/simdization). Recently, he co-founded Appentra Solutions to commercialize products and services that take advantage of the new Parallware technology. Parallware is a new source-to-source parallelizing compiler that automates the tedious, error-prone and time-consuming process of parallelizing full-scale scientific codes.

Abstract

High Performance Computing is a key enabling technology for solving the big challenges of modern society and industry. The development of HPC programs is a complex, error-prone, tedious undertaking that requires a highly skilled workforce well trained in HPC methods, techniques and tools. In the years to come, a large number of HPC experts are expected to retire. Thus, there is growing urgency in closing the HPC talent gap, especially in market segments such as Oil & Gas and R&D/Government where HPC is a competitive advantage. Parallelism is the primary source of performance gain in modern computing systems, and compiler technology is at the heart of many developer tools available in the HPC marketplace. Automatic parallelization is therefore a key approach to addressing the HPC talent gap, as it decouples the development of HPC programs from the features and complexity of the underlying parallel hardware. Overall, parallelizing compilers enable experts to focus on computational science and engineering methods, techniques and tools, freeing them from having to learn HPC-specific ones. Automatic parallelization is a long-standing challenge for the HPC community. There have been many efforts worldwide in academia and industry to build parallelizing compilers that effectively convert sequential scientific programs into parallel equivalents; well-known examples are ICC, PGI, GCC, Polaris, SUIF and Pluto, among others. Recent advances in compiler technology for automatically extracting parallelism from sequential scientific codes have overcome limitations of classical dependence analysis. A new hierarchical dependence analysis technology has been transferred from academia to industry by Appentra Solutions. The resulting product is Parallware, a new source-to-source parallelizing compiler for C programs that supports the OpenMP parallel programming standard. This talk will analyze the state of the art in parallelizing compilers as well as their impact on modern science and industry from the point of view of performance, portability and productivity.
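
To make this concrete, the hand-written sketch below contrasts a sequential C loop with a parallel-equivalent version annotated with an OpenMP directive, the kind of transformation a parallelizing compiler automates; it is a generic illustration and is not actual Parallware output.

```c
#include <stdio.h>

#define N 1000000

/* Sequential version: every iteration is independent except for the
 * accumulation into 'sum', which carries a dependence across iterations. */
static double dot_sequential(const double *x, const double *y)
{
    double sum = 0.0;
    for (int i = 0; i < N; ++i)
        sum += x[i] * y[i];
    return sum;
}

/* Parallel-equivalent version: the accumulation is expressed as an OpenMP
 * reduction, so each thread keeps a private partial sum and the partial
 * sums are combined safely at the end of the loop. */
static double dot_parallel(const double *x, const double *y)
{
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; ++i)
        sum += x[i] * y[i];
    return sum;
}

int main(void)
{
    static double x[N], y[N];               /* static so the arrays live off the stack */
    for (int i = 0; i < N; ++i) { x[i] = 1.0; y[i] = 2.0; }
    printf("sequential: %f\n", dot_sequential(x, y));
    printf("parallel:   %f\n", dot_parallel(x, y));   /* build with -fopenmp (GCC) */
    return 0;
}
```

The interesting part for a compiler is recognising that the accumulation into sum is a reduction rather than a general loop-carried dependence; relying on classical dependence analysis alone, such a loop would conservatively be left sequential.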

Manuel Arenaz could not attend PRACEdays15 in person; his presentation was delivered as a video.

A SELF-MANAGED HPC CLOUD ECOSYSTEM FOR SIMULATION AND COLLABORATION

Nicolas Tonello, Director, Constelcom

Nicolas Tonello obtained a PhD in Aerospace Engineering from the University of Michigan. He then led multiphase flow and combustion projects for R&D organisations and commercial software vendors in the USA and the UK, developing physical models and software for Computational Fluid Dynamics (CFD). In 2007 he founded Renuda UK in London, becoming its Director, to provide CFD consulting and software development services in Europe. In 2013 he founded Constelcom Ltd to realise a larger, global vision covering all simulation applications and activities, different software delivery models, remote collaboration, and High Performance Computing (HPC). As Director of Constelcom Ltd, he is leading the development and delivery of Constellation™, a user-centric, web-enabled, highly scalable platform with exceptional access and user experience, opening up supercomputing and collaboration to all engineering, science and data processing communities. The company’s vision is to provide an all-encompassing, application-agnostic environment for members to carry out all virtual engineering tasks, collaboratively and with seamless access to simulation software and HPC resources.

Abstract

Simulations, whether for virtual engineering, life sciences, or data processing and analysis in general, are becoming essential to the creation of innovative products and scientific discoveries. Simulating ever larger and more complex problems to replace experiments or prototyping requires the kind of High Performance Computing (HPC) power traditionally only available in national research centres. However, whilst HPC capability is growing very fast, true supercomputing remains a specialist area often delegated to computing specialists rather than to the engineers and scientists making the discoveries, which restricts its uptake. In this talk, we will present some of the ideas and concepts we have evolved over the last five years to address the challenges and common requirements of all end-users and simulation applications, in order to foster Highly Collaborative Computing (HCC) supported by HPC and to encourage utilisation by a much wider community of non-specialists through ease of access, ease of use and self-management. These ideas have led to the deployment of the first instance of Constellation™ on the Hartree Centre’s systems in the UK. The key elements, and the need for close collaboration and interaction with supercomputing centres in order to develop this HPC Cloud ecosystem, will be discussed, as well as plans for future developments and the possibilities and challenges of expanding towards a pan-European connected community of users and resources.

ESTABLISHING HPC COMPUTATIONAL ENVIRONMENTS IN INDUSTRY: A VIEW FROM THE INSIDE

Stefano Cozzini, CEO eXact Lab

Stefano Cozzini has over 15 years’ experience in the area of scientific computing and HPC computational e-infrastructures. He is presently a development scientist at CNR/IOM c/o SISSA in Trieste and CEO of eXact lab srl, a company which he co-founded in 2011 as a spin-off of his CNR/IOM institute. The company provides advanced computation services in the HPC arena and operates a wide range of services for several customers. He has considerable experience in leading HPC infrastructure projects at national and international level. He served as a scientific consultant for UNESCO from 2003 to 2012, and for UNDP/UNOPS during 2011 and 2012. Since 2014 he has also been coordinator of the International Master in High Performance Computing promoted by SISSA and ICTP.

Abstract

Stefano Cozzini will present the experience of an innovative start-up providing HPC computational environments in industry and beyond. eXact lab srl, founded just three years ago with the aim of providing High Performance Computing services, is still in its start-up phase but is gaining experience and learning how to promote and establish what it defines as an HPC computational environment outside a purely research and academic world. This idea of a computational environment will be discussed and some successful case studies illustrated. He will then close by discussing the challenges ahead.

Keynotes & Panels

Thursday 28 May, 2015

INTERNATIONAL TECHNOLOGY INVESTMENT & HPC

Leo Clancy, Division Manager ITC, IDA Ireland

Leo heads IDA Ireland’s Technology, Consumer and Business Services division. IDA’s role is to market Ireland to multinational investors and to support established investors in Ireland. Prior to joining IDA, Leo worked in the telecommunications industry, spending 13 years with Ericsson in engineering and management roles. This was followed by more than four years leading the technology function for an Irish fibre communications company. Leo holds a degree in Electronics Engineering from Dublin Institute of Technology.

Abstract

In his presentation, Leo Clancy will focus on global trends in international technology investment. His work at IDA (Industrial Development Authority) Ireland means that he has significant experience in this area. He will outline some of the areas where he sees HPC becoming more and more important in relation to these trends.

ULTIMATE RAYLEIGH-BENARD AND TAYLOR-COUETTE TURBULENCE

Detlef Lohse, Faculty of Science and Technology University of Twente

Detlef Lohse obtained his PhD on the theory of turbulence in Marburg, Germany, in 1992. As a postdoc in Chicago and later in Marburg and München he worked on single-bubble sonoluminescence. In 1998 he was appointed Chair of Physics of Fluids at the University of Twente, The Netherlands, where he remains today. Lohse’s present research subjects are turbulence and multiphase flow, granular matter, and micro- and nanofluidics. Experimental, theoretical, and numerical methods are all used in his group. Lohse is Associate Editor of the Journal of Fluid Mechanics and several other journals. He is a Fellow of the American Physical Society, Division of Fluid Dynamics, and of the IoP. He is also an elected member of the German Academy of Sciences (Leopoldina) and the Royal Dutch Academy of Science (KNAW). He has received various prizes such as the Spinoza Prize (2005), the Simon Stevin Prize (2009), the Physica Prize (2011), the George K. Batchelor Prize for Fluid Dynamics (2012), and the AkzoNobel Prize (2012).

Abstract

Rayleigh-Benard flow – the flow in a box heated from below and cooled from above – and Taylor-Couette flow – the flow between two coaxial co- or counter-rotating cylinders – are the two paradigmatic systems in the physics of fluids, and many new concepts have been tested with them. They are mathematically well described, namely by the Navier-Stokes equations and the respective boundary conditions. While the low Reynolds number regime (i.e., weakly driven systems) was very well explored in the ‘80s and ‘90s of the last century, major research activity in the fully turbulent regime developed only in the last decade. This has been possible partly thanks to the advancement of computational power and improved algorithms; nowadays numerical simulations of such systems can even be performed in the so-called ultimate regime of turbulence, in which even the boundary layers become turbulent. In this talk we review this recent progress in our understanding of fully developed Rayleigh-Benard and Taylor-Couette turbulence from the experimental, theoretical, and numerical points of view, focusing on the latter. We will explain the parameter dependences of the global transport properties of the flow and the local flow organisation, including velocity profiles and boundary layers, which are closely connected to the global properties. Next, we will discuss transitions between different (turbulent) flow states. This is joint work with many colleagues over the years; in particular I would like to name Siegfried Grossmann, Roberto Verzicco, Richard Stevens, Erwin van der Poel, and Rodolfo Ostilla-Monico.

PANEL: SCIENCE AND INDUSTRY: PARTNERS FOR INNOVATION

Moderator: Tom Wilkie, Scientific Computing World

Tom Wilkie is Editor-In-Chief of Scientific Computing World. He is Chairman, and one of the founder shareholders, of its publishing company, Europa Science Ltd, responsible for commercial and strategic oversight of the six publications the company produces. With a background in mathematical physics and a PhD in the theory of elementary particle physics, he is a senior science writer and editor as well as company director. In the course of his career, he has been Features Editor of New Scientist and was Science Editor of The Independent newspaper for ten years, following the newspaper’s launch in 1986. Non-journalistic work has included a spell as an international civil servant for one of the specialised agencies of the UN system, and also time as Head of Bio-Medical Ethics at the Wellcome Trust. He is the author of three books on science and society.

Panelist: Sylvie Joussaume, INSU/CNRS

Sylvie Joussaume is a researcher within CNRS and an expert in climate modelling. She has been involved in IPCC assessment reports since the third report. Previously she was director of the Institut National des Sciences de l’Univers (INSU) of CNRS. She chairs the scientific board of the European Network for Earth System modelling (ENES, http://enes.org) and coordinates the FP7 infrastructure project IS-ENES (2009-2017, http://is.enes.org), which integrates the European climate models in a common research infrastructure dealing with models, model data and high-performance computing for climate, and which has published its infrastructure strategy for 2012-2022. She chairs the PRACE Scientific Steering Committee (SSC) in 2015 and, since 2010, the scientific committee of ORAP, which promotes high performance computing in France.

Panelist: Anders Rhod Gregersen, Vestas, and Vice-Chair of the PRACE Industrial Advisory Committee

Anders Rhod Gregersen is responsible for the High Performance Computing & Big Data efforts at Vestas Wind Systems A/S. He designed and operates the Firestorm supercomputer, the third largest commercially used supercomputer in the world at the time of installation. Before Vestas, Anders enabled the university supercomputers in the Nordic countries to analyse the vast data streams from the largest machine in the world, the Large Hadron Collider at CERN in Geneva. Besides his role at Vestas, Anders is Vice-Chairman of the Industrial Advisory Committee at PRACE.

Panelist: Mateo Valero, Director Barcelona Supercomputing Center

Mateo Valero is a professor in the Computer Architecture Department at UPC in Barcelona. His research interests focus on high performance architectures. He has published approximately 600 papers, has served in the organisation of more than 300 international conferences and has given more than 400 invited talks. He is the director of the Barcelona Supercomputing Center, the National Centre of Supercomputing in Spain. Dr. Valero has been honoured with several awards, among them the Eckert-Mauchly Award, the Harry Goode Award, the ACM Distinguished Service Award, the “King Jaime I” Award in research and two Spanish National Awards on Informatics and on Engineering. He has been named Honorary Doctor by the Universities of Chalmers, Belgrade and Veracruz in Mexico and by the Spanish Universities of Las Palmas de Gran Canaria, Zaragoza and Complutense in Madrid. He is a “Hall of the Fame” member of the IST European Program, selected as one of the 25 most influential European researchers in IT during the period 1983-2008 (Lyon, November 2008). Professor Valero is an academic member of the Royal Spanish Academy of Engineering, of the Royal Spanish Academy of Doctors, of the Academia Europaea, and of the Academy of Sciences in Mexico, and a corresponding academic of the Spanish Royal Academy of Science. He is a Fellow of the IEEE, a Fellow of the ACM and an Intel Distinguished Research Fellow.

Panelist: Augusto Burgueño Arjona, European Commission

Augusto Burgueño Arjona is currently Head of Unit “eInfrastructure” at the European Commission Directorate General for Communications Networks, Content and Technology. His unit coordinates the implementation of the European HPC strategy as well as the deployment of European research eInfrastructures such as Géant, PRACE, EUDAT, OpenAIRE and the European Grid Initiative (EGI). Previously he served as Head of Unit “Finance” in the Directorate General for Communications Networks, Content and Technology, and as Head of the inter-Directorate General Task Force “IT Planning Office” at the European Commission.

Abstract

During the last three days, speakers from industry and academia presented results that would not have been achievable without HPC. Will continuing on this proven and successful track be sufficient to meet the goals Europe has set itself to be a leader in innovation and scientific excellence? The panelists will discuss the respective expectations of industry, science, infrastructure providers, and funding agencies regarding future HPC technologies and services. They will explore opportunities for synergies and the mutual transfer of know-how between the stakeholders. Questions from the audience are welcome and should be submitted to the panel chair prior to the session or via the open microphone during the discussion.

Exascale Workshop

Tuesday 26 May, 2015

KEYNOTE: Exascale Needs & Challenges for Aeronautics Industry

Eric Chaput, Airbus, Flight-Physics Capability Strategy

Eric Chaput joined Airbus in 1992 after a PhD in Energetics and Optimisation, postdoctoral positions in experimental and numerical simulation at the University of Poitiers, and six years’ experience at Airbus Defence & Space working on the ARIANE and HERMES programmes. He subsequently became CFD Research Manager, before managing Aerodynamics Methods and, in 2004, becoming Senior Manager of Flight-Physics Methods. He is currently the leader of Airbus Flight-Physics capability strategy and a Senior Expert in Aerodynamics Flow Simulation Methods. He has long experience of and interest in HPC, driving the needs and investment for Airbus Engineering within the HPC Steering Board, and has been a member of the Management Board of the research organisation CERFACS for more than 15 years.

Abstract

Exascale computing is seen as a key enabling technology for future aircraft design, allowing aircraft to be developed and optimised in a fully multidisciplinary way and making wide use of design systems that provide integrated analysis and optimisation capabilities for a real-time, interactive way of working. The move from RANS to unsteady Navier-Stokes simulation (ranging from current RANS-LES to full LES) and/or Lattice Boltzmann methods will significantly improve predictions of complex flow phenomena around full aircraft configurations with advanced physical modelling. For instance, moving LES capability from Petascale to Exascale computing will accelerate the understanding of noise generation mechanisms and will enable the elaboration of flow control strategies for noise reduction. Multi-disciplinary analysis and design, and real-time simulation of aircraft manoeuvres, supported by affordable CFD-based aerodynamic and aeroelastic data prediction, will be a significant change of paradigm in the aeronautics industry. The challenges faced by our industry on the horizon of 2025 will be presented together with the expectations of Exascale computing likely to bring operational benefits at that time.

DEEP & DEEP-ER: Innovative Exascale Architectures in the Light of User Requirements

Estela Suarez, Jülich Supercomputing Centre; Marc Tchiboukdjian, CGG; Gabriel Staffelbach, CERFACS

Estela Suarez, Jülich Supercomputing Centre
Estela Suarez is the project manager for DEEP & DEEP-ER, two European-funded Exascale research projects. She works at the Jülich Supercomputing Centre in Germany and holds a PhD in Physics from the University of Geneva. From early on, Estela has been intensively engaged in scientific simulation and computing, and her passion for this research area led her to a career in HPC and into the Exascale world.

Marc Tchiboukdjian, CGG
Marc Tchiboukdjian currently works as IT Architect for CGG, a fully integrated geoscience company providing leading geological, geophysical and reservoir capabilities to the oil and gas industry. He holds a PhD from the University of Grenoble and has been active in the field of Exascale research for the last four years. Within the DEEP project, Marc is working on mapping seismic imaging algorithms on the DEEP architecture and evaluating their performance.

Gabriel Staffelbach, CERFACS
Gabriel Staffelbach is a senior researcher at Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS). He has been working on numerical simulation of combustion and high performance computing since 2002 and is an active user of most HPC systems available to the scientific community via both the PRACE and INCITE programs.

Abstract

When developing new architectures for the Exascale era, the chicken-or-egg question arises of what to work on first: new hardware or new codes actually able to fully exploit Exascale systems. In the DEEP and DEEP-ER projects we tackle this challenge by adopting a comprehensive, holistic approach. We have come up with an innovative hardware concept, called the Cluster-Booster architecture. At the same time we develop the software stack and work very closely with our application partners to thoroughly integrate all three aspects. For our pilot applications, on the one hand we optimise their codes for our system, and on the other hand we are developing the system design based on the Exascale requirements that our users have. In this session we will explain our basic concept and share two of our use cases: Our industry partner CGG will talk about seismic imaging in the oil and gas industry, and our partner CERFACS on computational fluid dynamics. These two use cases will clearly demonstrate the potential the DEEP architecture offers at Exascale, not least for industrial users.

Mont-Blanc: High Performance Computing from Commodity Embedded Technology

Filippo Mantovani, Barcelona Supercomputing Center

Filippo Mantovani is a postdoctoral research associate in the Heterogeneous Architectures group at the Barcelona Supercomputing Center. He graduated in Mathematics and holds a PhD in Computer Science from the University of Ferrara in Italy. He has been a scientific associate at the DESY laboratory in Zeuthen, Germany, and at the University of Regensburg, Germany. He has spent most of his scientific career in computational physics, computer architecture and high performance computing, contributing to the Janus, QPACE and QPACE2 projects. He joined BSC’s Mont-Blanc project in 2013, recently becoming technical coordinator of the project.

Abstract

In this session, the coordinator of the Mont-Blanc project will present an overview and status of this European project, together with the Rolls-Royce HPC Tech Lead Specialist for Aerothermal Methods, a member of the project’s Industrial End-User Group, who will present observations from the process of testing the low-energy HPC prototypes produced by the project.

CRESTA: Developing Software and Applications for Exascale Systems

Mark Parsons, EPCC, the University of Edinburgh

Mark Parsons joined EPCC, the supercomputing centre at The University of Edinburgh, in 1994 as a software developer working on several industrial contracts following a PhD in Particle Physics undertaken on the LEP accelerator at CERN in Geneva. In 1997 he became the Centre’s Commercial Manager and subsequently its Commercial Director. Today he is EPCC’s Executive Director (Research and Commercialisation) and also the Associate Dean for e-Research at Edinburgh. He has many interests in distributed computing ranging from its industrial use to the provision of pan-European HPC services through the PRACE Research Infrastructure. His research interests include highly distributed data intensive computing and novel hardware design.

Abstract

The CRESTA project was one of three complementary Exascale software projects funded by the European Commission. The recently completed project employed a novel approach to Exascale system co-design, which focused on the use of a small set of representative applications to inform and guide software and systemware developments. The methodology was designed to identify where problem areas exist in applications and to use that knowledge to consider different solutions to those problems, which in turn inform software and hardware advances. Using this approach, CRESTA has delivered on all of its outputs, producing a set of Exascale-focused systemware and applications.

EPiGRAM: Software in Support of Current and Future Space Missions

Stefano Markidis, KTH Royal Institute of Technology

Stefano Markidis is Assistant Professor in High Performance Computing at the KTH Royal Institute of Technology. He is a recipient of the 2005 R&D 100 Award and author of more than 50 peer-reviewed articles. His research interests include large-scale simulations for space physics applications.

Abstract

During the preparation of NASA and ESA space missions, several simulations of different scenarios in space are carried out on HPC systems. These large scale simulations allow scientists to plan the space missions and to investigate possible phenomena of interest. In this talk, we present the new software developed by the EPiGRAM project to increase the scalability of these codes, the performance of the I/O activities and the amount of useful data for analysis. The impact of the EPiGRAM software on the current NASA Magnetospheric Multiscale Mission (MMS) and on the proposed ESA THOR mission (http://thor.irfu.se/) is discussed.

NUMEXAS: Embedded Methods for Industrial CFD Applications 

Riccardo Rossi, CIMNE - International Centre for Numerical Methods in Engineering

Riccardo Rossi holds a PhD in Civil Engineering from the Technical University of Catalonia (UPC) and is a Senior Researcher at CIMNE and a tenure-track lecturer at UPC BarcelonaTech. He has extensive experience in the field of Computational Solid and Fluid Dynamics and in the solution of Fluid-Structure Interaction problems, using both body-fitted and embedded approaches. He is one of the authors of the multiphysics code KRATOS and the author of 36 JCR papers and some 50 conference presentations in the field, including a plenary lecture. Dr. Rossi is also a member of the executive committee of SEMNI and has contributed to the organisation of ECCOMAS conferences.

Abstract

A problem of paramount importance in the simulation of real engineering problems is the construction of a suitable discretization. It is widely acknowledged that the meshing step required to obtain a suitable geometry may take 90% of the time needed to obtain an engineering result. The objective of our work is to develop a technology to embed “dirty” geometries within a background mesh, which is then adapted to fit the requirements of the simulation. The technique employed results in a methodology that is both robust and scalable on modern HPC hardware.

EXA2CT: Mining Chemical Space Annotation to tackle the Phenotypic Challenge of Pharma Industry

Hugo Ceulemans, Janssen

Hugo holds an M.D., an M.Sc. in Bioinformatics and a Ph.D. in Molecular Biology from the University of Leuven, and did postdoctoral fellowships in molecular and computational phosphatase biology at the University of Leuven and in structural bioinformatics at the EMBL in Heidelberg. He joined Janssen in 2008 as a computational biologist supporting the Infectious Diseases and Vaccines franchise with models that predict the clinical efficacy of multi-drug regimens in HIV patients given viral sequences. Over the years, his responsibilities extended to cover additional computational approaches and all disease franchises in Janssen. Three years ago, these activities were consolidated in a new Computational Systems Biology unit, which now offers the analysis and integration of sets of chemical, biochemical, omics, phenotypic and clinical data and the formalization of drug discovery knowledge in predictive quantitative models. Mining the extensive, but heterogeneous annotation of the various biological effects of millions of chemicals is one of the major activities of the unit.

Abstract

The trajectory from a biological concept to a drug available to patients is expensive and typically spans over a decade. Drug discovery starts by mapping a disease to a scalable experiment in a test tube. This enables the screening of libraries of chemicals for hits, or active compounds, from which chemical starting points, or leads, are selected. These leads are then taken through a cascade of follow-up assays and animal models to optimize their potency on the intended protein targets implicated in disease, while controlling their activity on undesired targets associated with side effects. Finally, the compound is transferred to drug development, where the candidate drugs are tested in human subjects in three subsequent clinical phases. Still, the vast majority of candidates that enter drug development do not make it through to approval. One current trend to mitigate the high attrition rate is to do the initial screening in more complex, so-called phenotypic assays, which are believed to emulate the disease much better than biochemical assays, and which do not rely on limiting knowledge of which targets are critical for effect. The phenotypic approach, however, presents challenges of its own: throughput is lower, implying a need for more compact libraries, and many of the existing compound optimization processes require knowledge of the target. Both of these challenges can be addressed by improving the industry’s capability to predict the activities of chemicals not just on the intended protein target, but on as many proteins and processes as possible. To this end, we propose scaled-up machine learning approaches that can mine the extensive but heterogeneous information on biological activities of chemicals that is accessible to the industry, in order to learn to predict it comprehensively. Moreover, we believe computational approaches enable us to extract much more relevant primary information for these exercises from industry-standard screens, for instance through more extensive image analysis, feature selection and machine learning on microscopy-based screens. Finally, progress is being made not only in formulating predictions, but also in quantifying the reliability of predictions, not just for each model for a certain target, but even for an individual prediction of a given chemical at a given concentration on a given target.

PANEL Discussion led by HPC Advisory Council Chairman

Gilad Shainer, HPC Advisory Council Chairman

Gilad Shainer is an HPC evangelist who focuses on high-performance computing, high-speed interconnects, leading-edge technologies and performance characterizations. Mr. Shainer holds an M.Sc. degree (2001, Cum Laude) and a B.Sc. degree (1998, Cum Laude) in Electrical Engineering from the Technion Institute of Technology in Israel. He also holds patents in the field of high-speed networking.

Women in HPC Workshop

Monday 25 May – Tuesday 26 May, 2015

WOMEN IN HPC: A HANDS-ON INTRODUCTION TO HPC

Toni Collis, EPCC

Toni Collis joined EPCC in 2011 as an Applications Developer and later became an Applications Consultant after completing a PhD in Molecular Simulation at the University of Edinburgh. Toni fell in love with HPC during her PhD as she studied EPCC’s MSc in High Performance Computing part time. She realised that the best thing about her PhD was coding and helping her fellow scientists write better software, so a job at EPCC where she spends her time writing HPC code for scientists was perfect! Her work includes teaching Parallel Numerical Algorithms to postgraduate students, as well as involvement in the ARCHER HPC training programme. Her project work has focused on a variety of topics, from optimising Molecular Dynamics software and using solvers to improve current HPC codes, to introducing new techniques to port software to new architectures such as GPUs, and helping scientists simulate anything from future nuclear fusion reactors to the antimicrobial behaviour of designer molecules in cell membranes. In addition, Toni is the Equality and Diversity Coordinator for the School of Physics and Astronomy at the University of Edinburgh, and in 2013 she set up the Women in HPC initiative, which aims to address the underrepresentation of women in the HPC community.

Weronika Fillinger, EPCC

Weronika Fillinger joined EPCC in 2013 as an Applications Developer right after finishing EPCC’s MSc in High Performance Computing. Weronika became interested in HPC when she was working on her MPhys master’s project, which simulated bootstrap percolation on complex networks. The code Weronika wrote at that time required days to run, and she realised that writing the code is only the first step in doing science via computer simulations. After that she completed the EPCC MSc in HPC, and this time the subject of her dissertation was optimising PLINK (a whole-genome analysis toolset), which involved both parallelisation and serial optimisation of the code. Working at EPCC, she has been involved in a variety of collaborative projects centred on high performance computing, including the European projects CRESTA and APES. Weronika is also involved in HPC training, developing EPCC’s online distance-learning courses.

Abstract

In collaboration with the PRACE Advanced Training Centres (PATC), the UK National Supercomputing Facility, ARCHER, and the PRACE Scientific and Industrial Conference 2015 (PRACEdays15), we will be running a 1.5-day ‘Hands-on Introduction to HPC’ training session. This course provides a general introduction to High Performance Computing (HPC) using the UK national HPC service, ARCHER, as the platform for exercises. Familiarity with desktop computers is presumed but no programming or HPC experience is required. Programmers can, however, gain extra benefit from the course, as source code for all the practicals will be provided. This event is open to everyone interested in using HPC, but all our training staff will be women, and we hope that this provides an opportunity for women to network and build collaborations as well as learn new skills for a challenging and rewarding career in HPC.

EESI2 Final Conference

Thursday 28 May, 2015

The presentations of the EESI2 Final Conference can be found here.