• PRACE Training Centres (PTCs)
  • PRACE operates ten PRACE Training Centres (PTCs), which have established a state-of-the-art curriculum for training in HPC and scientific computing. The PTCs carry out and coordinate training and education activities that enable both European academic researchers and European industry to utilise the computational infrastructure available through PRACE, and they provide top-class education and training opportunities for computational scientists in Europe.
    With approximately 100 training events each year, the ten PTCs are based at:

    PTC training events are advertised on the following pages. Registration is free and open to all (pending availability):
    https://events.prace-ri.eu/category/2/

    The following figure depicts the locations of the PTCs throughout Europe.
    [Figure: PATC/PTC locations]

    PATC events this month (October 2018):
    Description

    The course introduces the basics of parallel programming with the Message Passing Interface (MPI) and OpenMP paradigms. MPI is the dominant parallelization paradigm in high-performance computing; it enables one to write programs that run on distributed-memory machines, such as Sisu and Taito. OpenMP is a threading-based approach for parallelizing a program within a single shared-memory machine, such as a single node of Taito. The course consists of lectures and hands-on exercises on parallel programming.
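
    For illustration only (this sketch is not part of the course material; file and program names are placeholders), the following C program combines the two paradigms: an MPI "hello world" with an OpenMP parallel region inside each process.

    /* Minimal sketch: MPI "hello world" with OpenMP threads inside each rank.
     * Compile e.g. with: mpicc -fopenmp hello.c -o hello
     * Run e.g. with:     mpirun -np 2 ./hello
     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                  /* start the MPI runtime            */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* id of this process (rank)        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes        */

        #pragma omp parallel                     /* OpenMP: fork threads in the rank */
        {
            printf("Hello from thread %d of %d on rank %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads(), rank, size);
        }

        MPI_Finalize();                          /* shut down the MPI runtime        */
        return 0;
    }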

    Learning outcome

    After the course the participants should be able to write simple parallel programs and parallelize existing programs with basic features of MPI or OpenMP. This course is also a prerequisite for the PTC course "Advanced Parallel Programming" in 2019.

    Prerequisites

    Participants are assumed to have a working knowledge of the Fortran and/or C programming languages. In addition, fluency in working in a Linux/Unix environment is assumed.

    Agenda

    Day 1, Wednesday 3.10

       09.00 – 10.30    What is parallel computing?
       10.30 – 10.45    Coffee break
       10.45 – 11.30    OpenMP basic concepts
       11.30 – 12.00    Exercises
       12.00 – 13.00    Lunch
       13.00 – 13.30    Work-sharing constructs
       13.30 – 14.00    Exercises
       14.00 – 14.30    Execution control, library functions
       14.30 – 14.45    Coffee break
       14.45 – 15.30    Exercises
       15.30 – 15.45    Wrap-up and further topics
       15.45 – 16.00    Q&A, exercises walkthrough
    Day 2, Thursday 4.10

       09.00 – 09.40    Introduction to MPI
       09.40 – 10.00    Exercises
       10.00 – 10.30    Point-to-point communication
       10.30 – 10.45    Coffee break
       10.45 – 12.00    Exercises
       12.00 – 13.00    Lunch
       13.00 – 13.45    Collective operations
       13.45 – 14.30    Exercises
       14.30 – 14.45    Coffee break
       14.45 – 15.45    Exercises
       15.45 – 16.00    Q&A, exercises walkthrough
    Day 3, Friday 5.10

       09.00 – 09.30    User-defined communicators
       09.30 – 10.30    Exercises
       10.30 – 10.45    Coffee break
       10.45 – 11.30    Non-blocking communication
       11.30 – 12.00    Exercises
       12.00 – 13.00    Lunch
       13.00 – 13.45    Exercises
       13.45 – 14.30    User-defined datatypes
       14.30 – 14.45    Coffee break
       14.45 – 15.45    Exercises
       15.45 – 16.00    Q&A, exercises walkthrough
    Lecturers: 

    Juhani Kataja (CSC), Martti Louhivuori (CSC)

    Language: English
    Price: Free of charge

    events.prace-ri.eu/event/775/
    Oct 3 8:00 to Oct 5 15:00
    The aim of this course is to give users best practices for improving their use of the newly installed PRACE Irene Joliot-Curie system and to give hints on preparing their codes for future architectures.

    Topics

    Introduction: CEA/TGCC, Irene Joliot-Curie supercomputer [CEA]
    Technology: architectures, KNL/Skylake, IB/BXI [ATOS/Bull]
    MPI software: OpenMPI, Portals, InfiniBand, WI4MPI [EOLEN/AS+]
    User environment: modules, collections, flavours/features, toolchains, hands-on [EOLEN/AS+]
    Vectorisation: OpenMP 4, SIMD directives, tools, optimisation (see the sketch after this list) [EOLEN/AS+]
    Virtualisation: pcocc, checkpointing, templates, hands-on [CEA/EOLEN]
    I/O: POSIX, standard C I/O, MPI-IO, HDF5, hands-on [EOLEN/AS+]
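
    As a rough illustration of the vectorisation topic above (a sketch, not official course material; the compiler flags shown are assumptions), the following C loop uses the OpenMP 4 SIMD directive to request vectorisation:

    /* Minimal sketch: OpenMP 4 SIMD directive on a saxpy-style loop.
     * Compile e.g. with: gcc -O2 -fopenmp saxpy.c -o saxpy
     */
    #include <stdio.h>

    #define N 1024

    int main(void)
    {
        float x[N], y[N], a = 2.0f;

        for (int i = 0; i < N; ++i) {    /* initialise the input data */
            x[i] = (float)i;
            y[i] = 1.0f;
        }

        #pragma omp simd                 /* ask the compiler to vectorise this loop */
        for (int i = 0; i < N; ++i)
            y[i] = a * x[i] + y[i];

        printf("y[%d] = %f\n", N - 1, y[N - 1]);
        return 0;
    }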
    Prerequisites

    Experience with code development; knowledge of C or Fortran 90, MPI and OpenMP.

     

    events.prace-ri.eu/event/740/
    Oct 10 9:00 to Oct 12 17:00
    This workshop is an on-demand training event of PRACE-5IP on energy efficiency tools.

    In the context of the PRACE-3IP Pre-Commercial Procurement (PCP) project on “Whole System Design for Energy Efficient HPC”, Atos/Bull (along with E4 in Italy and Maxeler in the UK) developed a set of tools for energy monitoring and management, ported a set of scientific computing applications to its KNL architecture, and deployed a pilot system at the GENCI/CINES computing centre in Montpellier.

    The workshop will take place at CINES in Montpellier, from Thursday 11 October at noon to Friday 12 October. There will be technical presentations on Thursday afternoon and a hands-on workshop/hackathon on Friday. Participants can choose to attend part or all of the workshop.

    The following benchmark codes were ported by Atos/Bull:

    BQCD;
    NEMO;
    Quantum Espresso;
    SPECFEM3D.
    These were ported along with the full accelerated benchmark suite (UEABS) from PRACE-4IP Work Package 7 on Code Enabling.

    Developers of numerical applications with a general interest in energy efficiency tools are invited to participate in this training event. No previous background in energy efficiency tools is required.

    Lecturers will be leading experts from Atos/Bull and CINES, and PRACE users of these tools. All participants will be given access to the Atos/Bull KNL pilot system and may run energy profiling of their own codes (codes should be provided before the workshop for analysis and security review).

    The training will start on Thursday 11 October at 1 pm and end on Friday 12 October 2018 at 4 pm.

    events.prace-ri.eu/event/782/
    Oct 11 13:00 to Oct 12 16:00
    The Train the Trainer Program is provided in conjunction with the regular courses Parallel Programming with MPI and OpenMP and Advanced Parallel Programming. Whereas the regular course teaches parallel programming, this program educates future trainers in parallel programming.
    Too few people can provide parallel programming courses at the level needed when scientists and PhD students want to learn how to parallelize a sequential application or enhance parallel applications. Within Europe, currently only six PATC centres and several other national centres provide such courses at a European or national level. We would like to assist further trainers and centres so that they, too, can provide such courses across Europe, or at least within their own countries.

    Prerequisites

    You are familiar with parallel programming with MPI and OpenMP at an advanced level and are skilled in both C and Fortran.

    Your goal: You want to provide MPI and OpenMP courses for other scientists and PhD students in your country, i.e., you would like to provide at least the first three days of the regular course as a training block-course to PhD students.

    Background: (a) Your centre supports you in providing such PhD courses in a course room at your centre. The course room is equipped with at least one computer/laptop per two (or three) students and has access to an HPC resource that allows MPI and OpenMP programming in C and Fortran. Or (b) you, as a future trainer, would like to co-operate with a centre that has the necessary course infrastructure.

    What does this Train the Trainer Program provide?

    We provide you with all necessary teaching material on a personal basis, i.e., with the copyright to use it and to provide pdf or paper copies to the students in your PhD courses.
    We provide all exercise material.
    You will attend the lectures so that you become familiar with the training material.
    During the exercises, you will help the regular students to correct their errors. The regular students are advised to ask for help if they are stuck for more than a minute. You will be trained to detect their problems as quickly as possible (typically in less than a minute) and to provide the students with the help they need.
     
    The Train the Trainer Program includes the curriculum from Monday until Friday according to the course agenda. It starts on Monday with a short introductory meeting at 8:15 am. On Thursday evening there will be an additional meeting and dinner for all participants of this TtT program.

    For further information and registration please visit the HLRS course page.

    events.prace-ri.eu/event/753/
    Oct 15 8:15 to Oct 19 16:30
    Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners – non-PRACE part):
    On clusters and distributed-memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model. The course gives an introduction to MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI).
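
    As a minimal illustration of these basic constructs (a sketch, not HLRS course material), the following C program sends one integer from rank 0 to rank 1 with blocking point-to-point calls:

    /* Minimal sketch: MPI-1 point-to-point messaging.
     * Compile e.g. with: mpicc ping.c -o ping
     * Run with at least two processes, e.g.: mpirun -np 2 ./ping
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            int msg = 42;
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);     /* to rank 1, tag 0 */
        } else if (rank == 1) {
            int msg;
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);                          /* from rank 0      */
            printf("rank 1 received %d\n", msg);
        }

        MPI_Finalize();
        return 0;
    }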

    Shared memory parallelization with OpenMP (Tue, for beginners – non-PRACE part):
    The focus is on shared-memory parallelization with OpenMP, the key concept for hyper-threading, dual-core, multi-core, shared-memory, and ccNUMA platforms. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.
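
    As an example of the kind of bug such tools help to find (a sketch, not HLRS course material), the following C fragment shows a classic OpenMP race condition on a shared sum and the reduction clause that removes it:

    /* Minimal sketch: data race on a shared variable vs. an OpenMP reduction.
     * Compile e.g. with: gcc -fopenmp race.c -o race
     */
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        double sum = 0.0;

        /* Buggy version (commented out): all threads update 'sum' at once,
         * which is a data race and gives nondeterministic results.
         *
         *   #pragma omp parallel for
         *   for (int i = 0; i < N; ++i)
         *       sum += 1.0;
         */

        /* Correct version: each thread accumulates a private partial sum,
         * and the partial sums are combined at the end of the loop. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; ++i)
            sum += 1.0;

        printf("sum = %.1f (expected %d)\n", sum, N);
        return 0;
    }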

    Intermediate and advanced topics in parallel programming (Wed-Fri – PRACE course):
    Topics include advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle-based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared-memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbour accesses similar to OpenMP, or for direct halo copies, and it enables new hybrid programming models. These models are compared in the hybrid MPI+OpenMP session with various hybrid MPI+OpenMP approaches and with pure MPI. Further aspects are domain decomposition, load balancing, and debugging.
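
    As a rough sketch of the MPI-3.0 shared-memory interface mentioned above (illustrative only, not course material), the following C program lets ranks on the same node read each other's data directly through a shared window:

    /* Minimal sketch: MPI-3.0 shared-memory windows on a per-node communicator.
     * Compile e.g. with: mpicc shm.c -o shm
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Group together the ranks that can share memory (i.e. one node). */
        MPI_Comm node_comm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);

        int node_rank;
        MPI_Comm_rank(node_comm, &node_rank);

        /* Each rank contributes one int to a node-local shared-memory window. */
        int *my_ptr;
        MPI_Win win;
        MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                                node_comm, &my_ptr, &win);

        *my_ptr = 100 + node_rank;     /* store into my own slot              */
        MPI_Win_fence(0, win);         /* synchronise before reading remotely */

        if (node_rank > 0) {
            /* Look up the base address of rank 0's segment and load from it. */
            MPI_Aint size;
            int disp_unit;
            int *rank0_ptr;
            MPI_Win_shared_query(win, 0, &size, &disp_unit, &rank0_ptr);
            printf("node rank %d sees rank 0's value %d\n", node_rank, *rank0_ptr);
        }

        MPI_Win_free(&win);
        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }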

    Hands-on sessions are included on all days. The course provides scientific training in computational science and, in addition, fosters scientific exchange among the participants.

    For further information and registration please visit the HLRS course page.

    events.prace-ri.eu/event/754/
    Oct 15 8:30 to Oct 19 16:30
    This workshop is organized by VI-HPS for the French PRACE Advanced Training Centre and will be hosted by the ROMEO Regional Computing Center in Reims. Its aims are to give an overview of the VI-HPS programming tools suite, to explain the functionality of individual tools and how to use them effectively, and to offer hands-on experience and expert assistance in using the tools.

    Presentations and hands-on sessions are on the following topics:

    Setting up, welcome and introduction
    Score-P instrumentation and measurement
    Scalasca performance analysis toolset
    Vampir trace analysis toolset
    TAU performance system
    MAQAO performance analysis and optimization toolset
    Measurement & analysis of heterogeneous HPC systems using accelerators
    The workshop will be held in English and will run from 09:00 to no later than 18:00 each day, with breaks for lunch and refreshments.

    There is no fee for participation; however, participants are responsible for their own travel and accommodation.

    Classroom capacity is limited, therefore priority will be given to applicants with MPI, OpenMP and hybrid OpenMP+MPI parallel codes already running on the workshop computer systems, and those bringing codes from similar systems to work on.

    See agenda and further details at http://www.vi-hps.org/training/tws/tw29.html or https://romeo.univ-reims.fr/VIHPS

    events.prace-ri.eu/event/779/
    Oct 15 9:00 to Oct 19 17:00
    Registration for this course is now open. Please bring your own laptop. All PATC courses at BSC are free of charge.

    Course Convener: Xavier Martorell

    LOCATION: UPC Campus Nord premises, Vertex Building, Room VS208

    Level: 

    Intermediate: For trainees with some theoretical and practical knowledge, some programming experience.

    Advanced: For trainees able to work independently and requiring guidance for solving complex problems.

    Attendants can bring their own applications and work with them during the course for parallelization and analysis.

    Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C.

    Objectives: 

    The objectives of this course are to understand the fundamental concepts supporting message-passing and shared-memory programming models. The course covers the two most widely used programming models: MPI for distributed-memory environments and OpenMP for shared-memory architectures. It also presents the main tools developed at BSC for obtaining information about and analyzing the execution of parallel applications, Paraver and Extrae, as well as the Parallware Assistant tool, which is able to automatically parallelize a large number of program structures and provides hints to the programmer on how to change the code to improve parallelization. The course also covers debugging alternatives, including the use of GDB and Totalview.

    The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of current compute nodes in clustered architectures is also considered. Paraver will be used throughout the course as the tool for understanding the behaviour and performance of parallelized codes. The course is taught using formal lectures and practical/programming sessions to reinforce the key concepts and to set up the compilation/execution environment.
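
    As a minimal sketch of the hybrid MPI+OpenMP combination discussed above (illustrative only, not BSC course material), the following C program requests a threading level from MPI and nests an OpenMP reduction inside each rank before combining the results with an MPI reduction:

    /* Minimal sketch: hybrid MPI+OpenMP start-up and a two-level reduction.
     * Compile e.g. with: mpicc -fopenmp hybrid.c -o hybrid
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;

        /* MPI_THREAD_FUNNELED: only the master thread will make MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        if (provided < MPI_THREAD_FUNNELED) {
            fprintf(stderr, "Requested thread support level not available\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local = 0.0;
        #pragma omp parallel for reduction(+:local)   /* threads share the rank's work */
        for (int i = 0; i < 1000; ++i)
            local += 1.0;

        double total = 0.0;                           /* ranks combine their results   */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("total = %.1f\n", total);

        MPI_Finalize();
        return 0;
    }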


    Learning Outcomes:

    Students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behaviour on parallel architectures.

    Agenda:

    Day 1 (Monday)

    Session 1 / 9:30 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to parallel architectures, algorithms design and performance parameters

    2. Introduction to the MPI programming model

    3. Practical: How to compile and run MPI applications

     

    Session 2 / 2:00 pm – 5:30 pm (2 h lectures, 1 h practical)

    1. MPI: Point-to-point communication, collective communication

    2. Practical: Simple matrix computations

    3. MPI: Blocking and non-blocking communications

     

     Day 2 (Tuesday)

    Session 1 / 9:30 am - 1:00 pm (1.5 h lectures, 1.5 h practical)

    1. MPI: Collectives, Communicators, Topologies

    2. Practical: Heat equation example

     

    Session 2 / 2:00 pm - 5:30 pm (1.5 h lectures, 1.5 h practical)

    1. Introduction to Paraver: tool to analyze and understand performance

    2. Practical: Trace generation and trace analysis

     

    Day 3 (Wednesday)

    Session 1 / 9:30 am - 1:00 pm (1 h lecture, 2 h practical)

    1. Parallel debugging in MareNostrum III, options from print to Totalview

    2. Practical: GDB and IDB

    3. Practical: Totalview

    4. Practical: Valgrind for memory leaks

     

    Session 2 / 2:00 pm - 5:30 pm (2 h lectures, 1 h practical)

    1. Shared-memory programming models, OpenMP fundamentals

    2. Parallel regions and work sharing constructs

    3. Synchronization mechanisms in OpenMP

    4. Practical: heat diffusion in OpenMP

     

    Day 4 (Thursday)

    Session 1 / 9:30 am – 1:00 pm (2 h practical, 1 h lectures)

    1. Tasking in OpenMP 3.0/4.0/4.5

    2. Programming using a hybrid MPI/OpenMP approach

    3. Practical: multisort in OpenMP and hybrid MPI/OpenMP

     

    Session 2 / 2:00 pm – 5:30 pm (1.5 h lectures, 1.5 h practical)

    1. Parallware: guided parallelization

    2. Practical session with Parallware examples

     

    Day 5 (Friday)

    Session 1 / 9:30 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to the OmpSs programming model

    2. Practical: heat equation example and divide-and-conquer

     

    Session 2 / 2:00 pm – 5:30 pm (1 h lectures, 2 h practical)

    1. Programming using a hybrid MPI/OmpSs approach

    2. Practical: heat equation example and divide-and-conquer

     

    END of COURSE

    events.prace-ri.eu/event/756/
    Oct 15 10:00 to Oct 19 17:00
    This workshop is organized by VI-HPS for the French PRACE Advanced Training Centre, and will be hosted by the ROMEO Regional Computing Center in Reims : its aim is to give an overview of the VI-HPS programming tools suite explain the functionality of individual tools, and how to use them effectively offer hands-on experience and expert assistance using the tools

    Presentations and hands-on sessions are on the following topics:

    Setting up, welcome and introduction
    Score-P instrumentation and measurement
    Scalasca performance analysis toolset
    Vampir trace analysis toolset
    TAU performance system
    MAQAO performance analysis and optimization toolset Measurement & analysis of heterogeneous
    HPC systems using accelerators
    The workshop will be held in English and run from 09:00 to not later than 18:00 each day, with breaks for lunch and refreshments.

    There is no fee for participation, however, participants are responsible for their own travel and accommodation.

    Classroom capacity is limited, therefore priority will be given to applicants with MPI, OpenMP and hybrid OpenMP+MPI parallel codes already running on the workshop computer systems, and those bringing codes from similar systems to work on.

    See agenda and further details at http://www.vi-hps.org/training/tws/tw29.html or https://romeo.univ-reims.fr/VIHPS

    events.prace-ri.eu/event/779/
    Oct 15 9:00 to Oct 19 17:00
    Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners – non-PRACE part):
    On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominating programming model. The course gives an introduction into MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI).

    Shared memory parallelization with OpenMP (Tue, for beginners – non-PRACE part):
    The focus is on shared memory parallelization with OpenMP, the key concept on hyper-threading, dual-core, multi-core, shared memory, and ccNUMA platforms. This course teaches shared memory OpenMP parallelization. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.

    Intermediate and advanced topics in parallel programming (Wed-Fri – PRACE course):
    Topics are advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and enables new hybrid programming models. These models are compared in the hybrid mixed model MPI+OpenMP parallelization session with various hybrid MPI+OpenMP approaches and pure MPI. Further aspects are domain decomposition, load balancing, and debugging. Hands-on sessions are included on all days. This course provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves.

    Hands-on sessions are included on all days. This course provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves.

    For further information and registration please visit the HLRS course page.

    events.prace-ri.eu/event/754/
    Oct 15 8:30 to Oct 19 16:30
    The Train the Trainer Program is provided in conjunction with the regular course Parallel Programming with MPI and OpenMP and Advanced Parallel Programming. Whereas the regular course teaches parallel programming, this program is an education for future trainers in parallel programming.
    Too few people can provide parallel programming courses on the level that is needed if scientists and PhD students want to learn how to parallelize a sequential application or to enhance parallel applications. Within Europe, currently only six PATC centres and several other national centres provide such courses on an European or national level. We would like to assist further trainers and centres to also provide such courses for whole Europe or at least within their countries.

    Prerequisites

    You are familiar with parallel programming with MPI and OpenMP on an advanced level and skilled in both programming languages C and Fortran.

    Your goal: You want to provide MPI and OpenMP courses for other scientists and PhD students in your country, i.e., you would like to provide at least the first three days of the regular course as a training block-course to PhD students.

    Background: (a) Your centre supports you to provide such PhD courses in a course room at your centre. The course room is equipped at least with one computer/laptop per two (or three) students and has access to a HPC resource that allows MPI and OpenMP programming in C and Fortran. Or (b), you as a future trainer would like to co-operate with a centre with the necessary course infrastructure.

    What does this Train the Trainer Program provide?

    We provide you with all necessary teaching material on a personal basis, i.e., with the copyright to use it and to provide pdf or paper copies to the students in your PhD courses.
    We provide all exercise material.
    You will listen the lectures that you get familiar with the training material.
    During the exercises, you will help the regular students to correct their errors. The regular students are advised to request help if they were stuck for more than a minute. You will be trained to detect their problems as fast as possible (typically in less than a minute) and to provide the students with the needed help.
     
    The Train the Trainer Program includes the curriculum from Monday until Friday according the course agenda. The Train the Trainer Program starts on Monday with a short introductory meeting at 8:15 am. On Thursday evening we will have an additional meeting and dinner for all participants of this TtT program.

    For further information and registration please visit the HLRS course page.

    events.prace-ri.eu/event/753/
    Oct 15 8:15 to Oct 19 16:30
    The registration to this course is now open. Please, bring your own laptop. All the PATC courses at BSC are free of charge.

    Course Convener: Xavier Martorell

    LOCATION: UPC Campus Nord premises.Vertex Building, Room VS208

    Level: 

    Intermediate: For trainees with some theoretical and practical knowledge, some programming experience.

    Advanced: For trainees able to work independently and requiring guidance for solving complex problems.

    Attendants can bring their own applications and work with them during the course for parallelization and analysis.

    Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C

    Objectives: 

    The objectives of this course are to understand the fundamental concepts supporting message-passing and shared memory programming models. The course covers the two widely used programming models: MPI for the distributed-memory environments, and OpenMP for the shared-memory architectures. The course also presents the main tools developed at BSC to get information and analyze the execution of parallel applications, Paraver and Extrae. It also presents the Parallware Assistant tool, which is able to automatically parallelize a large number of program structures, and provide hints to the programmer with respect to how to change the code to improve parallelization. It deals with debugging alternatives, including the use of GDB and Totalview.

    The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of current compute nodes in clustered architectures is also considered. Paraver will be used along the course as the tool to understand the behavior and performance of parallelized codes. The course is taught using formal lectures and practical/programming sessions to reinforce the key concepts and set up the compilation/execution environment.

    Attendants can bring their own applications and work with them during the course for parallelization and analysis.

    Learning Outcomes:

    The students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behaviour in parallel architectures.

    Agenda:

    Day 1 (Monday)

    Session 1 / 9:30 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to parallel architectures, algorithms design and performance parameters

    2. Introduction to the MPI programming model

    3. Practical: How to compile and run MPI applications

     

    Session 2 / 2:00pm – 5:30 pm (2h lectures, 1h practical)

    1. MPI: Point-to-point communication, collective communication

    2. Practical: Simple matrix computations

    3. MPI: Blocking and non-blocking communications

     

     Day 2 (Tuesday)

    Session 1 / 9:30 am - 1:00 pm (1.5 h lectures, 1.5 h practical)

    1. MPI: Collectives, Communicators, Topologies

    2. Practical: Heat equation example

     

    Session 2 / 2:00 pm - 5:30 pm (1.5 h lectures, 1.5 h practical)

    1. Introduction to Paraver: tool to analyze and understand performance

    2. Practical: Trace generation and trace analysis

     

    Day 3 (Wednesday)

    Session 1 / 9:30 am - 1:00 pm (1 h lecture, 2h practical)

    1. Parallel debugging in MareNostrumIII, options from print to Totalview

    2. Practical: GDB and IDB

    3. Practical: Totalview

    4. Practical: Valgrind for memory leaks

     

    Session 2 / 2:00 pm - 5:30 pm (2 h lectures, 1 h practical)

    1. Shared-memory programming models, OpenMP fundamentals

    2. Parallel regions and work sharing constructs

    3. Synchronization mechanisms in OpenMP

    4. Practical: heat diffusion in OpenMP

     

    Day 4 (Thursday)

    Session 1 / 9:30am – 1:00 pm  (2 h practical, 1 h lectures)

    1. Tasking in OpenMP 3.0/4.0/4.5

    2. Programming using a hybrid MPI/OpenMP approach

    3. Practical: multisort in OpenMP and hybrid MPI/OpenMP

     

    Session 2 / 2:00pm – 5:30 pm (1.5 h lectures, 1.5 h practical)

    1. Parallware: guided parallelization

    2. Practical session with Parallware examples

     

    Day 5 (Friday)

    Session 1 / 9:30 am – 1:00 pm (2 hour lectures, 1 h practical)

    1. Introduction to the OmpSs programming model

    2. Practical: heat equation example and divide-and-conquer

     

    Session 2 / 2:00pm – 5:30 pm (1 h lectures, 2 h practical)

    1. Programming using a hybrid MPI/OmpSs approach

    2. Practical: heat equation example and divide-and-conquer

     

    END of COURSE

    events.prace-ri.eu/event/756/
    Oct 15 10:00 to Oct 19 17:00
    This workshop is organized by VI-HPS for the French PRACE Advanced Training Centre, and will be hosted by the ROMEO Regional Computing Center in Reims : its aim is to give an overview of the VI-HPS programming tools suite explain the functionality of individual tools, and how to use them effectively offer hands-on experience and expert assistance using the tools

    Presentations and hands-on sessions are on the following topics:

    Setting up, welcome and introduction
    Score-P instrumentation and measurement
    Scalasca performance analysis toolset
    Vampir trace analysis toolset
    TAU performance system
    MAQAO performance analysis and optimization toolset Measurement & analysis of heterogeneous
    HPC systems using accelerators
    The workshop will be held in English and run from 09:00 to not later than 18:00 each day, with breaks for lunch and refreshments.

    There is no fee for participation, however, participants are responsible for their own travel and accommodation.

    Classroom capacity is limited, therefore priority will be given to applicants with MPI, OpenMP and hybrid OpenMP+MPI parallel codes already running on the workshop computer systems, and those bringing codes from similar systems to work on.

    See agenda and further details at http://www.vi-hps.org/training/tws/tw29.html or https://romeo.univ-reims.fr/VIHPS

    events.prace-ri.eu/event/779/
    Oct 15 9:00 to Oct 19 17:00
    Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners – non-PRACE part):
    On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominating programming model. The course gives an introduction into MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI).

    Shared memory parallelization with OpenMP (Tue, for beginners – non-PRACE part):
    The focus is on shared memory parallelization with OpenMP, the key concept on hyper-threading, dual-core, multi-core, shared memory, and ccNUMA platforms. This course teaches shared memory OpenMP parallelization. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.

    Intermediate and advanced topics in parallel programming (Wed-Fri – PRACE course):
    Topics are advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and enables new hybrid programming models. These models are compared in the hybrid mixed model MPI+OpenMP parallelization session with various hybrid MPI+OpenMP approaches and pure MPI. Further aspects are domain decomposition, load balancing, and debugging. Hands-on sessions are included on all days. This course provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves.

    Hands-on sessions are included on all days. This course provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves.

    For further information and registration please visit the HLRS course page.

    events.prace-ri.eu/event/754/
    Oct 15 8:30 to Oct 19 16:30
    The Train the Trainer Program is provided in conjunction with the regular course Parallel Programming with MPI and OpenMP and Advanced Parallel Programming. Whereas the regular course teaches parallel programming, this program is an education for future trainers in parallel programming.
    Too few people can provide parallel programming courses on the level that is needed if scientists and PhD students want to learn how to parallelize a sequential application or to enhance parallel applications. Within Europe, currently only six PATC centres and several other national centres provide such courses on an European or national level. We would like to assist further trainers and centres to also provide such courses for whole Europe or at least within their countries.

    Prerequisites

    You are familiar with parallel programming with MPI and OpenMP on an advanced level and skilled in both programming languages C and Fortran.

    Your goal: You want to provide MPI and OpenMP courses for other scientists and PhD students in your country, i.e., you would like to provide at least the first three days of the regular course as a training block-course to PhD students.

    Background: (a) Your centre supports you to provide such PhD courses in a course room at your centre. The course room is equipped at least with one computer/laptop per two (or three) students and has access to a HPC resource that allows MPI and OpenMP programming in C and Fortran. Or (b), you as a future trainer would like to co-operate with a centre with the necessary course infrastructure.

    What does this Train the Trainer Program provide?

    We provide you with all necessary teaching material on a personal basis, i.e., with the copyright to use it and to provide pdf or paper copies to the students in your PhD courses.
    We provide all exercise material.
    You will listen the lectures that you get familiar with the training material.
    During the exercises, you will help the regular students to correct their errors. The regular students are advised to request help if they were stuck for more than a minute. You will be trained to detect their problems as fast as possible (typically in less than a minute) and to provide the students with the needed help.
     
    The Train the Trainer Program includes the curriculum from Monday until Friday according the course agenda. The Train the Trainer Program starts on Monday with a short introductory meeting at 8:15 am. On Thursday evening we will have an additional meeting and dinner for all participants of this TtT program.

    For further information and registration please visit the HLRS course page.

    events.prace-ri.eu/event/753/
    Oct 15 8:15 to Oct 19 16:30
    The registration to this course is now open. Please, bring your own laptop. All the PATC courses at BSC are free of charge.

    Course Convener: Xavier Martorell

    LOCATION: UPC Campus Nord premises.Vertex Building, Room VS208

    Level: 

    Intermediate: For trainees with some theoretical and practical knowledge, some programming experience.

    Advanced: For trainees able to work independently and requiring guidance for solving complex problems.

    Attendants can bring their own applications and work with them during the course for parallelization and analysis.

    Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C

    Objectives: 

    The objectives of this course are to understand the fundamental concepts supporting message-passing and shared memory programming models. The course covers the two widely used programming models: MPI for the distributed-memory environments, and OpenMP for the shared-memory architectures. The course also presents the main tools developed at BSC to get information and analyze the execution of parallel applications, Paraver and Extrae. It also presents the Parallware Assistant tool, which is able to automatically parallelize a large number of program structures, and provide hints to the programmer with respect to how to change the code to improve parallelization. It deals with debugging alternatives, including the use of GDB and Totalview.

    The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of current compute nodes in clustered architectures is also considered. Paraver will be used along the course as the tool to understand the behavior and performance of parallelized codes. The course is taught using formal lectures and practical/programming sessions to reinforce the key concepts and set up the compilation/execution environment.

    Attendants can bring their own applications and work with them during the course for parallelization and analysis.

    Learning Outcomes:

    The students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behaviour in parallel architectures.

    Agenda:

    Day 1 (Monday)

    Session 1 / 9:30 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to parallel architectures, algorithms design and performance parameters

    2. Introduction to the MPI programming model

    3. Practical: How to compile and run MPI applications

     

    Session 2 / 2:00pm – 5:30 pm (2h lectures, 1h practical)

    1. MPI: Point-to-point communication, collective communication

    2. Practical: Simple matrix computations

    3. MPI: Blocking and non-blocking communications

     

     Day 2 (Tuesday)

    Session 1 / 9:30 am - 1:00 pm (1.5 h lectures, 1.5 h practical)

    1. MPI: Collectives, Communicators, Topologies

    2. Practical: Heat equation example

     

    Session 2 / 2:00 pm - 5:30 pm (1.5 h lectures, 1.5 h practical)

    1. Introduction to Paraver: tool to analyze and understand performance

    2. Practical: Trace generation and trace analysis

     

    Day 3 (Wednesday)

    Session 1 / 9:30 am - 1:00 pm (1 h lecture, 2h practical)

    1. Parallel debugging in MareNostrumIII, options from print to Totalview

    2. Practical: GDB and IDB

    3. Practical: Totalview

    4. Practical: Valgrind for memory leaks

     

    Session 2 / 2:00 pm - 5:30 pm (2 h lectures, 1 h practical)

    1. Shared-memory programming models, OpenMP fundamentals

    2. Parallel regions and work sharing constructs

    3. Synchronization mechanisms in OpenMP

    4. Practical: heat diffusion in OpenMP

     

    Day 4 (Thursday)

    Session 1 / 9:30am – 1:00 pm  (2 h practical, 1 h lectures)

    1. Tasking in OpenMP 3.0/4.0/4.5

    2. Programming using a hybrid MPI/OpenMP approach

    3. Practical: multisort in OpenMP and hybrid MPI/OpenMP

     

    Session 2 / 2:00pm – 5:30 pm (1.5 h lectures, 1.5 h practical)

    1. Parallware: guided parallelization

    2. Practical session with Parallware examples

     

    Day 5 (Friday)

    Session 1 / 9:30 am – 1:00 pm (2 hour lectures, 1 h practical)

    1. Introduction to the OmpSs programming model

    2. Practical: heat equation example and divide-and-conquer

     

    Session 2 / 2:00pm – 5:30 pm (1 h lectures, 2 h practical)

    1. Programming using a hybrid MPI/OmpSs approach

    2. Practical: heat equation example and divide-and-conquer

     

    END of COURSE

    events.prace-ri.eu/event/756/
    Oct 15 10:00 to Oct 19 17:00
    This workshop is organized by VI-HPS for the French PRACE Advanced Training Centre, and will be hosted by the ROMEO Regional Computing Center in Reims : its aim is to give an overview of the VI-HPS programming tools suite explain the functionality of individual tools, and how to use them effectively offer hands-on experience and expert assistance using the tools

    Presentations and hands-on sessions are on the following topics:

    Setting up, welcome and introduction
    Score-P instrumentation and measurement
    Scalasca performance analysis toolset
    Vampir trace analysis toolset
    TAU performance system
    MAQAO performance analysis and optimization toolset Measurement & analysis of heterogeneous
    HPC systems using accelerators
    The workshop will be held in English and run from 09:00 to not later than 18:00 each day, with breaks for lunch and refreshments.

    There is no fee for participation, however, participants are responsible for their own travel and accommodation.

    Classroom capacity is limited, therefore priority will be given to applicants with MPI, OpenMP and hybrid OpenMP+MPI parallel codes already running on the workshop computer systems, and those bringing codes from similar systems to work on.

    See agenda and further details at http://www.vi-hps.org/training/tws/tw29.html or https://romeo.univ-reims.fr/VIHPS

    events.prace-ri.eu/event/779/
    Oct 15 9:00 to Oct 19 17:00
    Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners – non-PRACE part):
    On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominating programming model. The course gives an introduction into MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI).

    Shared memory parallelization with OpenMP (Tue, for beginners – non-PRACE part):
    The focus is on shared memory parallelization with OpenMP, the key concept on hyper-threading, dual-core, multi-core, shared memory, and ccNUMA platforms. This course teaches shared memory OpenMP parallelization. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.

    Intermediate and advanced topics in parallel programming (Wed-Fri – PRACE course):
    Topics are advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and enables new hybrid programming models. These models are compared in the hybrid mixed model MPI+OpenMP parallelization session with various hybrid MPI+OpenMP approaches and pure MPI. Further aspects are domain decomposition, load balancing, and debugging. Hands-on sessions are included on all days. This course provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves.

    Hands-on sessions are included on all days. This course provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves.

    For further information and registration please visit the HLRS course page.

    events.prace-ri.eu/event/754/
    Oct 15 8:30 to Oct 19 16:30
    The Train the Trainer Program is provided in conjunction with the regular course Parallel Programming with MPI and OpenMP and Advanced Parallel Programming. Whereas the regular course teaches parallel programming, this program is an education for future trainers in parallel programming.
    Too few people can provide parallel programming courses on the level that is needed if scientists and PhD students want to learn how to parallelize a sequential application or to enhance parallel applications. Within Europe, currently only six PATC centres and several other national centres provide such courses on an European or national level. We would like to assist further trainers and centres to also provide such courses for whole Europe or at least within their countries.

    Prerequisites

    You are familiar with parallel programming with MPI and OpenMP on an advanced level and skilled in both programming languages C and Fortran.

    Your goal: You want to provide MPI and OpenMP courses for other scientists and PhD students in your country, i.e., you would like to provide at least the first three days of the regular course as a training block-course to PhD students.

    Background: (a) Your centre supports you to provide such PhD courses in a course room at your centre. The course room is equipped at least with one computer/laptop per two (or three) students and has access to a HPC resource that allows MPI and OpenMP programming in C and Fortran. Or (b), you as a future trainer would like to co-operate with a centre with the necessary course infrastructure.

    What does this Train the Trainer Program provide?

    We provide you with all necessary teaching material on a personal basis, i.e., with the copyright to use it and to provide pdf or paper copies to the students in your PhD courses.
    We provide all exercise material.
    You will listen the lectures that you get familiar with the training material.
    During the exercises, you will help the regular students to correct their errors. The regular students are advised to request help if they were stuck for more than a minute. You will be trained to detect their problems as fast as possible (typically in less than a minute) and to provide the students with the needed help.
     
    The Train the Trainer Program includes the curriculum from Monday until Friday according the course agenda. The Train the Trainer Program starts on Monday with a short introductory meeting at 8:15 am. On Thursday evening we will have an additional meeting and dinner for all participants of this TtT program.

    For further information and registration please visit the HLRS course page.

    events.prace-ri.eu/event/753/
    Oct 15 8:15 to Oct 19 16:30
    The registration to this course is now open. Please, bring your own laptop. All the PATC courses at BSC are free of charge.

    Course Convener: Xavier Martorell

    LOCATION: UPC Campus Nord premises.Vertex Building, Room VS208

    Level: 

    Intermediate: For trainees with some theoretical and practical knowledge, some programming experience.

    Advanced: For trainees able to work independently and requiring guidance for solving complex problems.

    Attendants can bring their own applications and work with them during the course for parallelization and analysis.

    Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C

    Objectives: 

    The objectives of this course are to understand the fundamental concepts supporting message-passing and shared memory programming models. The course covers the two widely used programming models: MPI for the distributed-memory environments, and OpenMP for the shared-memory architectures. The course also presents the main tools developed at BSC to get information and analyze the execution of parallel applications, Paraver and Extrae. It also presents the Parallware Assistant tool, which is able to automatically parallelize a large number of program structures, and provide hints to the programmer with respect to how to change the code to improve parallelization. It deals with debugging alternatives, including the use of GDB and Totalview.

    The use of OpenMP in conjunction with MPI, to better exploit the shared-memory capabilities of current compute nodes in clustered architectures, is also considered. Paraver will be used throughout the course as the tool for understanding the behaviour and performance of parallelized codes. The course combines formal lectures with practical/programming sessions that reinforce the key concepts and set up the compilation/execution environment.
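
    To give a flavour of this hybrid model, the sketch below combines MPI ranks with OpenMP threads in C. It is an illustrative example, not part of the course material; the file name and launch advice given afterwards are assumptions. MPI_Init_thread requests MPI_THREAD_FUNNELED support, meaning only the master thread of each rank makes MPI calls.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* Request thread support: only the master thread will call MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Each MPI rank spawns an OpenMP thread team on its node. */
        #pragma omp parallel
        {
            printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
                   rank, nranks, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

    Such a code would typically be compiled with an MPI wrapper compiler plus the OpenMP flag (e.g., mpicc -fopenmp hybrid.c) and launched with one rank per node or socket and several threads per rank; the exact launch syntax depends on the local batch system.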


    Learning Outcomes:

    Students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP, and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behaviour on parallel architectures.

    Agenda:

    Day 1 (Monday)

    Session 1 / 9:30 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to parallel architectures, algorithms design and performance parameters

    2. Introduction to the MPI programming model

    3. Practical: How to compile and run MPI applications

     

    Session 2 / 2:00 pm – 5:30 pm (2 h lectures, 1 h practical)

    1. MPI: Point-to-point communication, collective communication

    2. Practical: Simple matrix computations

    3. MPI: Blocking and non-blocking communications

     

     Day 2 (Tuesday)

    Session 1 / 9:30 am - 1:00 pm (1.5 h lectures, 1.5 h practical)

    1. MPI: Collectives, Communicators, Topologies

    2. Practical: Heat equation example

     

    Session 2 / 2:00 pm - 5:30 pm (1.5 h lectures, 1.5 h practical)

    1. Introduction to Paraver: tool to analyze and understand performance

    2. Practical: Trace generation and trace analysis

     

    Day 3 (Wednesday)

    Session 1 / 9:30 am - 1:00 pm (1 h lecture, 2 h practical)

    1. Parallel debugging on MareNostrum III, options from print to Totalview

    2. Practical: GDB and IDB

    3. Practical: Totalview

    4. Practical: Valgrind for memory leaks

     

    Session 2 / 2:00 pm - 5:30 pm (2 h lectures, 1 h practical)

    1. Shared-memory programming models, OpenMP fundamentals

    2. Parallel regions and work sharing constructs

    3. Synchronization mechanisms in OpenMP

    4. Practical: heat diffusion in OpenMP

     

    Day 4 (Thursday)

    Session 1 / 9:30 am – 1:00 pm (2 h practical, 1 h lectures)

    1. Tasking in OpenMP 3.0/4.0/4.5

    2. Programming using a hybrid MPI/OpenMP approach

    3. Practical: multisort in OpenMP and hybrid MPI/OpenMP

     

    Session 2 / 2:00 pm – 5:30 pm (1.5 h lectures, 1.5 h practical)

    1. Parallware: guided parallelization

    2. Practical session with Parallware examples

     

    Day 5 (Friday)

    Session 1 / 9:30 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to the OmpSs programming model

    2. Practical: heat equation example and divide-and-conquer

     

    Session 2 / 2:00 pm – 5:30 pm (1 h lectures, 2 h practical)

    1. Programming using a hybrid MPI/OmpSs approach

    2. Practical: heat equation example and divide-and-conquer

     

    END of COURSE

    events.prace-ri.eu/event/756/
    Oct 15 10:00 to Oct 19 17:00
    This workshop is organized by VI-HPS for the French PRACE Advanced Training Centre and will be hosted by the ROMEO Regional Computing Center in Reims. Its aims are to give an overview of the VI-HPS programming tools suite, to explain the functionality of individual tools and how to use them effectively, and to offer hands-on experience and expert assistance in using the tools.

    Presentations and hands-on sessions are on the following topics:

    Setting up, welcome and introduction
    Score-P instrumentation and measurement
    Scalasca performance analysis toolset
    Vampir trace analysis toolset
    TAU performance system
    MAQAO performance analysis and optimization toolset
    Measurement & analysis of heterogeneous HPC systems using accelerators
    The workshop will be held in English and run from 09:00 to not later than 18:00 each day, with breaks for lunch and refreshments.

    There is no fee for participation; however, participants are responsible for their own travel and accommodation.

    Classroom capacity is limited; therefore, priority will be given to applicants with MPI, OpenMP and hybrid OpenMP+MPI parallel codes already running on the workshop computer systems, and to those bringing codes from similar systems to work on.

    See agenda and further details at http://www.vi-hps.org/training/tws/tw29.html or https://romeo.univ-reims.fr/VIHPS

    events.prace-ri.eu/event/779/
    Oct 15 9:00 to Oct 19 17:00
    Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners – non-PRACE part):
    On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model. The course gives an introduction to MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of MPI.
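
    As an indication of the level at which this beginners' day starts, a minimal MPI point-to-point example in C might look as follows (an illustrative sketch, not taken from the course material; the launch command given afterwards is an assumption):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, msg;

        MPI_Init(&argc, &argv);                  /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* my rank (process id)       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks      */

        if (rank == 0) {
            msg = 42;
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0 (of %d ranks)\n", msg, size);
        }

        MPI_Finalize();
        return 0;
    }

    Such a program is compiled with the MPI wrapper compiler (e.g., mpicc) and needs at least two processes, e.g., mpirun -np 2 ./a.out.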

    Shared memory parallelization with OpenMP (Tue, for beginners – non-PRACE part):
    The focus is on shared memory parallelization with OpenMP, the key programming concept for hyper-threaded, dual-core, multi-core, shared-memory, and ccNUMA platforms. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.
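
    For orientation, a minimal OpenMP work-sharing loop in C is sketched below (illustrative only, not part of the course material); the reduction clause avoids exactly the kind of race condition on the accumulator that the debugging tools mentioned above help to detect:

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N];
        double sum = 0.0;

        /* Loop iterations are distributed over the threads of the team;  */
        /* the reduction clause gives each thread a private partial sum.  */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = 0.5 * i;
            sum += a[i];
        }

        printf("sum = %.1f (max threads: %d)\n", sum, omp_get_max_threads());
        return 0;
    }

    The program would be compiled with the compiler's OpenMP flag (e.g., gcc -fopenmp) and the number of threads controlled at run time via OMP_NUM_THREADS.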

    Intermediate and advanced topics in parallel programming (Wed-Fri – PRACE course):
    Topics are advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed-model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle-based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbour accesses similar to OpenMP, or for direct halo copies, and it enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and with pure MPI in the session on hybrid mixed-model MPI+OpenMP parallelization. Further aspects are domain decomposition, load balancing, and debugging. Hands-on sessions are included on all days. This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants.
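
    The MPI-3.0 shared-memory interface mentioned above can be illustrated with the following C sketch (not taken from the course material): ranks on the same node allocate a shared window and then read each other's data with ordinary loads and stores.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Sub-communicator of all ranks that can share memory (one node). */
        MPI_Comm nodecomm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &nodecomm);

        int nrank, nsize;
        MPI_Comm_rank(nodecomm, &nrank);
        MPI_Comm_size(nodecomm, &nsize);

        /* Each rank contributes one double to a node-wide shared window. */
        double *base;
        MPI_Win win;
        MPI_Win_allocate_shared(sizeof(double), sizeof(double),
                                MPI_INFO_NULL, nodecomm, &base, &win);

        base[0] = (double)nrank;     /* store into my own segment          */
        MPI_Win_fence(0, win);       /* synchronize before reading others  */

        if (nrank == 0 && nsize > 1) {
            MPI_Aint segsize; int dispunit; double *nbr;
            MPI_Win_shared_query(win, 1, &segsize, &dispunit, &nbr);
            printf("rank 0 reads %.1f directly from rank 1's segment\n", nbr[0]);
        }

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

    Combined with MPI message passing between nodes, this is the mechanism underlying the new hybrid programming models compared in the course.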


    For further information and registration please visit the HLRS course page.

    events.prace-ri.eu/event/754/
    Oct 15 8:30 to Oct 19 16:30
    Annotation

    This HPC training continues to expand your skills and knowledge of productivity tools and technologies so that you can quickly build up an efficient HPC user environment from scratch, without admin rights. We shall demonstrate tools and procedures tailored to the Salomon and Anselm clusters; however, they are easily replicable on any HPC system. The topics include:


    GIT - a version control system for coordinating work among multiple developers

    GIT (the stupid content tracker) is the world's most widely used version control system. Originally designed for the development of the Linux kernel, it has evolved into a universal tool for managing changes in code, configuration and documents, especially when those changes are not made by a single person. We will help you understand how GIT works internally and introduce you to the basic GIT commands that should cover 99% of daily GIT usage.


    KVM - a virtualization infrastructure for the Linux kernel that converts it into a hypervisor

    KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It provides the ability to run complex software stacks, including MS Windows, on powerful, Linux-based supercomputer nodes with very low overhead. Using KVM, one can run multiple virtual machines with unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc. In this lesson we will introduce the QEMU machine emulator using KVM virtualization, learn how to set up QEMU/KVM on the Salomon supercomputer, create a QEMU system image, install the operating system into the image, set up networking and data access, and execute the calculations via QEMU/KVM. We will also discuss virtualized Ethernet networks in the context of QEMU and VDE2.


    Docker & Singularity - technologies for paravirtualization

    Docker for HPC? Yes, Singularity! Docker provides the ability to package and run an application in a loosely isolated environment called a container. The application, with all its settings and libraries, can be packed together and then run on any other computer. It is perfect for developers, hosting providers, server farms, you name it! But is it equally good for supercomputers? Let's look at why there are problems with using Docker in a shared environment and why Singularity is different. Singularity is developed with HPC in mind and is directly intended for use on HPC clusters, with direct support for technologies such as OpenMPI. We will show you how to use Singularity, convert Docker images, create new containers, and everything else you need to know about running a Singularity container in an HPC environment.


    EasyBuild - install and build custom programs

    In the supercomputer environment, pre-configured modules of various programs are available. But how do you get your own program installed when you are a developer, or when you need a program that is not installed? What if you need a different version of a program than the available modules provide? You can send an installation request to support, but what if you need the program immediately? It is also possible to build a program manually, but not everyone can do that: users cannot install into the system, only into their own storage, and you need to know the compiler switches and system settings. Alternatively, you can build a program using tools that do the compilation for you and create the corresponding environment modules. In the lecture, we shall demonstrate how to compile a program manually and how to modify the environment so that the program can be used. We shall show you the EasyBuild tool for building programs and introduce its new releases, with features such as creating Docker or Singularity recipes and installing programs with EasyBuild directly into a Singularity image.

    Purpose of the course

    The participants will broaden their range of techniques for efficient use of HPC by mastering modern technologies for code management and execution.

    About the tutors

    The tutors are core members of the Supercomputing Services division of IT4Innovations.

    Branislav Jansík obtained his PhD in computational chemistry at the Royal Institute of Technology, Sweden, in 2004. He took a postdoctoral position at IPCF, Consiglio Nazionale delle Ricerche, Italy, to carry out development and applications of high performance computational methods for molecular optical properties. From 2006 he worked on the development of highly parallel optimization methods in the domain of electronic structure theory at Aarhus University, Denmark. In 2012 he joined IT4Innovations, the Czech national supercomputing center, as the head of Supercomputing Services. He has published over 35 papers and co-authored the DALTON electronic structure theory code.

    Josef Hrabal obtained his Master's Degree in Computer Science and Technology at VŠB - Technical University of Ostrava in 2014. Since then he has contributed to projects within the University, and in 2017 he joined IT4Innovations as an HPC application specialist.

    David Hrbáč obtained his Master's Degree in Measurement and Control Engineering at VŠB - Technical University of Ostrava in 1997. Since 1994 he has worked for many IT companies as a system architect and CIO. In 2013 he joined IT4Innovations.

    Lukáš Krupčík obtained his Master's Degree in Computer Science and Technology at VŠB - Technical University of Ostrava in 2017. In 2016 he joined IT4Innovations as an HPC application specialist.

    Lubomír Prda obtained his Master's Degree in Information and Communication Technologies at VŠB - Technical University of Ostrava in 2010. Before joining the IT4Innovations team as an HPC specialist in 2016, he worked at the Tieto Corporation as a network engineer and later moved to identity and access management for the company's Nordic and international customers. Lubomír's focus is to manage and maintain the centre's back-end IT infrastructure and services.

    Roman Slíva obtained his Master's Degree in Computer Science at VŠB - Technical University of Ostrava in 1998. He worked as an IT system specialist and project leader in the areas of servers, high performance computing, and storage and backup. From 2007 to 2009 he led the IT Server Infrastructure group at VŠB - Technical University of Ostrava. In 2011 he joined IT4Innovations as an HPC system specialist and architect.

    events.prace-ri.eu/event/780/
    Oct 24 9:30 to Oct 25 17:00

    PTC events this month:

    October 2018
    The only event on the PTC calendar for October 2018 is the IT4Innovations productivity-tools training described above (events.prace-ri.eu/event/780, Oct 24 9:30 to Oct 25 17:00).