• PRACE Training Centres (PTCs)

  • PRACE operates ten PRACE Training Centres (PTCs), which have established a state-of-the-art curriculum for training in HPC and scientific computing. The PTCs carry out and coordinate training and education activities that enable both European academic researchers and European industry to use the computational infrastructure available through PRACE, and they provide top-class education and training opportunities for computational scientists in Europe.
    With approximately 100 training events each year, the ten PRACE Training Centres (PTCs) are based at:

    PTC training events are advertised on the following pages. Registration is free and open to all (pending availability):
    https://events.prace-ri.eu/category/2/

    The following figure depicts the locations of the PTCs throughout Europe.
    [Figure: PATC/PTC locations]

    PATC events this month: October 2019

    The aim of this course is to give users the best practices to improve their use of the newly installed PRACE Irene Joliot-Curie system and to give hints on preparing their codes for future architectures.

    Topics


    Introduction: CEA/TGCC, Irene Joliot-Curie supercomputer [CEA]
    Technology: architectures, KNL/Skylake, AMD Rome, IB/BXI [ATOS/Bull]
    MPI software: OpenMPI, Portals, InfiniBand, WI4MPI [EOLEN/AS+]
    User environment: modules, collections, flavors/features, toolchains, hands-on [EOLEN/AS+]
    Vectorisation: OpenMP 4, SIMD directives, tools, optimisation (see the sketch after this list) [EOLEN/AS+]
    Virtualisation: pcocc, checkpointing, templates, hands-on [CEA/EOLEN]
    I/O: POSIX, StdC, MPI-IO, HDF5, hands-on [EOLEN/AS+]
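
    For the vectorisation topic referenced above, here is a minimal sketch of an OpenMP 4 SIMD directive in C. It is illustrative only and not course material; the saxpy function is a made-up example.

        #include <stddef.h>

        /* Ask the compiler to vectorise this loop explicitly with an OpenMP 4 SIMD directive. */
        void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
        {
            #pragma omp simd
            for (size_t i = 0; i < n; ++i)
                y[i] = a * x[i] + y[i];
        }

    Compiled with, for example, gcc -O2 -fopenmp-simd, the directive enables vectorisation of the loop without spawning any threads.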


    Prerequisites

    Experience with code development; knowledge of C or Fortran 90, MPI, and OpenMP

     
    events.prace-ri.eu/event/890/
    Sep 30 9:00 to Oct 2 17:00
    This two-day workshop will provide an introduction to, and hands on experience working with, the Arm HPC architecture and the accompanying ecosystem.

    Starting with an introduction to current generation Arm HPC architectures, this workshop will provide an opportunity to build and run your own codes on a 64-node (4,000-core) Arm-based supercomputer.

    During the second half of the workshop we will cover more advanced, architecture specific, optimisations including a focus on Arm’s next generation of vectorisation instructions, SVE.

    Training will be provided on the use of the system and the software ecosystem which surrounds it, such as compilers and libraries. Additionally, for the SVE evaluation, we will cover the use of instruction emulators and simulators to execute code on existing hardware.
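
    For a flavour of SVE, here is a minimal, vector-length-agnostic sketch in C using the ACLE intrinsics. It is illustrative only and not part of the workshop material.

        #include <arm_sve.h>

        /* y[i] += a * x[i], written so it works for any SVE vector length. */
        void daxpy_sve(double *y, const double *x, double a, long n)
        {
            for (long i = 0; i < n; i += svcntd()) {       /* svcntd(): doubles per vector */
                svbool_t pg = svwhilelt_b64(i, n);          /* predicate masks the loop tail */
                svfloat64_t vx = svld1(pg, &x[i]);
                svfloat64_t vy = svld1(pg, &y[i]);
                vy = svmla_x(pg, vy, vx, svdup_f64(a));     /* vy + vx * a */
                svst1(pg, &y[i], vy);
            }
        }

    On hardware without SVE, code like this can be run through the instruction emulators and simulators mentioned above.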

    Note: if you wish to attend only one of the two days, please let us know this when completing the registration form, under "Reason for participation".

    For this workshop we will be making use of the Catalyst Fulhame system at EPCC, based on the Marvell ThunderX2 processors.

    Although test codes will be provided for the hands-on exercises, all attendees are encouraged to bring their own applications to work on during the practical sessions.

    Outline timetable

    Day 1


    09:00 - Registration
    09:30 - Welcome and Introduction
    Arm architecture
    Software ecosystem
    Porting and optimisation
    Access + logging in
    12:30 - Lunch
    13:30 - Hands-on

    Worked examples or own code


    17:00 - Finish

    End of day summary




    Day 2


    09:00 - Start
    Introduction to SVE (Scalable Vector Extensions)
    Using SVE
    Advanced SVE
    12:30 - Lunch
    13:30 - Hands-on

    Worked examples or own code


    17:00 - Finish

    End of event summary




     


    events.prace-ri.eu/event/900/
    Sep 30 10:00 to Oct 1 18:00
    Description

    This course gives a thorough introduction to programming GPUs using the directive based OpenACC paradigm. The course consists of lectures and hands-on exercises. Topics of this course include the basic usage of OpenACC, as well as some more advanced issues related to profiling, performance and interoperability with CUDA and MPI.
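
    As a hedged illustration of the directive-based approach described above (a minimal sketch, not taken from the course material), a C loop can be offloaded with a single OpenACC directive; the saxpy kernel below is illustrative only.

        #include <stdio.h>
        #define N 1000000

        int main(void)
        {
            static float x[N], y[N];
            for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

            /* Offload the loop to the accelerator; copyin/copy manage data movement. */
            #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
            for (int i = 0; i < N; ++i)
                y[i] += 2.0f * x[i];

            printf("y[0] = %f\n", y[0]);
            return 0;
        }

    Compilers with OpenACC support typically enable the directives with a flag such as -acc (NVIDIA HPC SDK) or -fopenacc (GCC); without such a flag the pragma is ignored and the loop simply runs on the host.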

    Learning outcome

    After the course the participants should have the basic skills needed for utilizing OpenACC in new or existing programs.

    Prerequisites

    The participants are assumed to have working knowledge of Fortran and/or C programming languages. In addition, fluent operation in a Linux/Unix environment will be assumed.


    Agenda

    Day 1, Monday 14.10

    09:00 - 12:00 SESSION 1 & Coffee break (10:00-10:15)


    Introduction to accelerators
    Introduction to OpenACC
    Exercises


    12:00 - 13:00 Lunch

    13:00 - 16:00 SESSION 2 & Coffee break (14:15-14:30)


    Data movement
    Exercises


    Day 2, Tuesday 15.10

    09:00 - 12:00 SESSION 3 & Coffee break (10:15-10:30)


    Profiling
    Performance considerations
    Exercises


    12:00 - 13:00 Lunch

    13:00 - 16:00 SESSION 4 & Coffee break (14:00-14:15)


    Asynchronous operations and pipelining
    Interoperability with CUDA and GPU-Accelerated libraries
    Exercises



    Lecturers: 

    Sebastian von Alfthan (CSC), Fredrick Robertsén (CSC)

    Language: English
    Price: Free of charge
    events.prace-ri.eu/event/887/
    Oct 14 8:00 to Oct 15 16:00
    The Train the Trainer Program is provided in conjunction with the regular courses Parallel Programming with MPI and OpenMP and Advanced Parallel Programming. Whereas the regular courses teach parallel programming, this program is an education for future trainers in parallel programming.
    Too few people can provide parallel programming courses at the level that is needed if scientists and PhD students want to learn how to parallelize a sequential application or to enhance parallel applications. Within Europe, currently only six PATC centres and several other national centres provide such courses at a European or national level. We would like to assist further trainers and centres so that they can also provide such courses for the whole of Europe, or at least within their own countries.

    Prerequisites

    You are familiar with parallel programming with MPI and OpenMP at an advanced level and are skilled in both C and Fortran.

    Your goal: You want to provide MPI and OpenMP courses for other scientists and PhD students in your country, i.e., you would like to provide at least the first three days of the regular course as a training block-course to PhD students.

    Background: (a) Your centre supports you in providing such PhD courses in a course room at your centre. The course room is equipped with at least one computer/laptop per two (or three) students and has access to an HPC resource that allows MPI and OpenMP programming in C and Fortran. Or (b) you, as a future trainer, would like to cooperate with a centre that has the necessary course infrastructure.

    What does this Train the Trainer Program provide?


    We provide you with all necessary teaching material on a personal basis, i.e., with the copyright to use it and to provide PDF or paper copies to the students in your PhD courses.
    We provide all exercise material.
    You will attend the lectures so that you become familiar with the training material.
    During the exercises, you will help the regular students to correct their errors. The regular students are advised to request help if they are stuck for more than a minute. You will be trained to detect their problems as quickly as possible (typically in less than a minute) and to provide the students with the help they need.
     


    The Train the Trainer Program includes the curriculum from Monday until Friday according to the course agenda. The Train the Trainer Program starts on Monday with a short introductory meeting at 8:15 am. On Thursday evening we will have an additional meeting and dinner for all participants of this TtT program.

    For further information and registration please visit the HLRS course page.
    events.prace-ri.eu/event/895/
    Oct 14 8:15 to Oct 18 17:00
    Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners – non-PRACE part):
    On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model. The course gives an introduction to MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI).
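
    As a minimal sketch of the kind of basic constructs taught (illustrative only, not HLRS course material), a simple point-to-point exchange in C looks like this:

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, size, token = 42;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            if (rank == 0 && size > 1)
                MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);    /* point-to-point send */
            else if (rank == 1)
                MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            printf("rank %d of %d, token = %d\n", rank, size, token);
            MPI_Finalize();
            return 0;
        }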

    Shared memory parallelization with OpenMP (Tue, for beginners – non-PRACE part):
    The focus is on shared memory parallelization with OpenMP, the key concept for hyper-threading, dual-core, multi-core, shared-memory, and ccNUMA platforms. This course teaches shared memory OpenMP parallelization. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.
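
    A minimal OpenMP sketch in C of the kind of directive-based work sharing covered on this day (illustrative only, not course material):

        #include <omp.h>
        #include <stdio.h>

        int main(void)
        {
            const int n = 1000000;
            double sum = 0.0;

            /* Work-sharing loop with a reduction over the partial sums of each thread. */
            #pragma omp parallel for reduction(+:sum)
            for (int i = 0; i < n; ++i)
                sum += 1.0 / (i + 1);

            printf("harmonic sum = %f (threads available: %d)\n", sum, omp_get_max_threads());
            return 0;
        }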

    Intermediate and advanced topics in parallel programming (Wed-Fri – PRACE course):
    Topics are advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed-model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle-based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbour accesses similar to OpenMP or for direct halo copies, and enables new hybrid programming models. These models are compared in the hybrid mixed-model MPI+OpenMP parallelization session with various hybrid MPI+OpenMP approaches and pure MPI. Further aspects are domain decomposition, load balancing, and debugging.
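
    As a hedged sketch of the MPI-3.0 shared-memory interface mentioned above (illustrative only, not course material), ranks on the same node can allocate a shared window and read each other's memory directly:

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            MPI_Comm nodecomm;
            MPI_Win win;
            int noderank, *mem;

            MPI_Init(&argc, &argv);
            /* Group the ranks that share a node, then allocate one int per rank in a shared window. */
            MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                                MPI_INFO_NULL, &nodecomm);
            MPI_Comm_rank(nodecomm, &noderank);
            MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                                    nodecomm, &mem, &win);
            mem[0] = 100 + noderank;            /* write to my own slot */
            MPI_Win_fence(0, win);              /* simple synchronization */

            if (noderank > 0) {
                MPI_Aint size; int disp; int *left;
                MPI_Win_shared_query(win, noderank - 1, &size, &disp, &left);
                printf("rank %d sees neighbour's value %d\n", noderank, left[0]);
            }
            MPI_Win_free(&win);
            MPI_Finalize();
            return 0;
        }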

    Hands-on sessions are included on all days. This course provides scientific training in computational science and, in addition, fosters scientific exchange among the participants.

    For further information and registration please visit the HLRS course page.
    events.prace-ri.eu/event/894/
    Oct 14 8:30 to Oct 18 16:30
    Registration for this course is now open. Please bring your own laptop. All PATC courses at BSC are free of charge.

    Course Convener: Xavier Martorell

    LOCATION: UPC Campus Nord premises, Vertex Building, Room VS208

    Level: 

    Intermediate: For trainees with some theoretical and practical knowledge, some programming experience.

    Advanced: For trainees able to work independently and requiring guidance for solving complex problems.

    Attendants can bring their own applications and work with them during the course for parallelization and analysis.


    Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C.

    Objectives: 

    The objectives of this course are to understand the fundamental concepts supporting message-passing and shared-memory programming models. The course covers the two widely used programming models: MPI for distributed-memory environments, and OpenMP for shared-memory architectures. The course also presents the main tools developed at BSC to obtain information about and analyse the execution of parallel applications, Paraver and Extrae. It also presents the Parallware Assistant tool, which is able to automatically parallelize a large number of program structures and to provide hints to the programmer on how to change the code to improve parallelization. It deals with debugging alternatives, including the use of GDB and TotalView.

    The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of current compute nodes in clustered architectures is also considered. Paraver will be used throughout the course as the tool to understand the behaviour and performance of parallelized codes. The course is taught using formal lectures and practical/programming sessions to reinforce the key concepts and to set up the compilation/execution environment.
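
    As a minimal sketch of the hybrid MPI+OpenMP style referred to above (illustrative only, not BSC course material):

        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int provided, rank;
            /* Request a threading level suitable for OpenMP regions inside an MPI rank. */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            #pragma omp parallel
            printf("rank %d, thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());

            MPI_Finalize();
            return 0;
        }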


    Learning Outcomes:

    The students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behaviour in parallel architectures.

    Agenda:

    Day 1 (Monday)

    Session 1 / 9:30 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to parallel architectures, algorithms design and performance parameters

    2. Introduction to the MPI programming model

    3. Practical: How to compile and run MPI applications

     

    Session 2 / 2:00pm – 5:30 pm (2h lectures, 1h practical)

    1. MPI: Point-to-point communication, collective communication

    2. Practical: Simple matrix computations

    3. MPI: Blocking and non-blocking communications

     

     Day 2 (Tuesday)

    Session 1 / 9:30 am - 1:00 pm (1.5 h lectures, 1.5 h practical)

    1. MPI: Collectives, Communicators, Topologies

    2. Practical: Heat equation example

     

    Session 2 / 2:00 pm - 5:30 pm (1.5 h lectures, 1.5 h practical)

    1. Introduction to Paraver: tool to analyze and understand performance

    2. Practical: Trace generation and trace analysis

     

    Day 3 (Wednesday)

    Session 1 / 9:30 am - 1:00 pm (1 h lecture, 2h practical)

    1. Parallel debugging on MareNostrum III, options from print to TotalView

    2. Practical: GDB and IDB

    3. Practical: TotalView

    4. Practical: Valgrind for memory leaks

     

    Session 2 / 2:00 pm - 5:30 pm (2 h lectures, 1 h practical)

    1. Shared-memory programming models, OpenMP fundamentals

    2. Parallel regions and work sharing constructs

    3. Synchronization mechanisms in OpenMP

    4. Practical: heat diffusion in OpenMP

     

    Day 4 (Thursday)

    Session 1 / 9:30am – 1:00 pm  (2 h practical, 1 h lectures)

    1. Tasking in OpenMP 3.0/4.0/4.5

    2. Programming using a hybrid MPI/OpenMP approach

    3. Practical: multisort in OpenMP and hybrid MPI/OpenMP

     

    Session 2 / 2:00pm – 5:30 pm (1.5 h lectures, 1.5 h practical)

    1. Parallware: guided parallelization

    2. Practical session with Parallware examples

     

    Day 5 (Friday)

    Session 1 / 9:30 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to the OmpSs programming model

    2. Practical: heat equation example and divide-and-conquer

     

    Session 2 / 2:00pm – 5:30 pm (1 h lectures, 2 h practical)

    1. Programming using a hybrid MPI/OmpSs approach

    2. Practical: heat equation example and divide-and-conquer

     

    END of COURSE
    events.prace-ri.eu/event/904/
    Oct 14 9:30 to Oct 18 17:30
    Annotation

    This is the third edition of our popular HPC training focused on expanding your skills and knowledge of productivity tools and technologies, so that you can quickly build up an efficient HPC user environment from scratch, without admin rights. We shall demonstrate several very useful tools and methods tailored to the current supercomputing facilities at IT4Innovations, but which are also easily replicable on any HPC system.

    The current topics include:



    IT4Innovations supercomputing ecosystem - state of the art



    In this section, the new IT4Innovations computing systems, the DGX-2 and the newly upgraded Anselm (newAnselm), as well as the upcoming PROJECT storage, will be presented. New technologies available with the DGX-2 and newAnselm machines will be introduced from a practical standpoint. This includes the new generation of processors, the Smart Burst Buffer acceleration of newAnselm's SCRATCH storage, the NVMe storage, the tensor processing capability of the V100 graphics cards, NVLink, and unified memory. An outlook for the new supercomputers to become available within EuroHPC will be given, along with updates to the user lifecycle policies that accompany the deployment of all the new hardware at IT4Innovations.



    Git technologies - coordinating work among multiple developers



    Git is the world's most widely used version control system. Originally designed for development of the Linux kernel, it has evolved into a universal tool for managing changes in code, configuration, and documents, especially when those changes are not made by a single person. We will help you understand how Git works internally and introduce you to the basic Git commands that may cover up to 99 % of daily Git usage.

    Another section will demonstrate handling of project web pages with Git. You will learn how to create a website with Git and GitLab Pages. To publish a website with Pages, you can use any static site generator (SSG), such as MkDocs, Jekyll, Hugo, or Middleman, to name just a few. You can also publish any website written directly in plain HTML, CSS, and JavaScript, and you can enable HTTPS on the site.



    HPC containers - paravirtualization technology



    Nvidia recently chose containers as the software distribution platform for their supercomputer, the DGX-2. We will show you how to run containers with Singularity, convert Docker images, create new containers, and everything else you need to know about Singularity containers in an HPC environment.

    Level

    beginner (40 %) - intermediate (50 %) - advanced (10 %)

    Language

    English

    Purpose of the course

    The participants will broaden their range of techniques for efficient use of HPC by mastering modern technologies for code management and execution.

    About the tutors

    The tutors are core members of the Supercomputing Services division of IT4Innovations.

    Branislav Jansík obtained his PhD in computational chemistry at the Royal Institute of Technology, Sweden, in 2004. He took a postdoctoral position at IPCF, Consiglio Nazionale delle Ricerche, Italy, to carry out development and applications of high-performance computational methods for molecular optical properties. From 2006 he worked on the development of highly parallel optimization methods in the domain of electronic structure theory at Aarhus University, Denmark. In 2012 he joined IT4Innovations, the Czech national supercomputing centre, as the head of Supercomputing Services. He has published over 35 papers and co-authored the DALTON electronic structure theory code.

    Josef Hrabal obtained his Master's Degree in Computer Science and Technology at VŠB - Technical University of Ostrava in 2014. Since then he has contributed to projects within the University, and in 2017 he joined IT4Innovations as an HPC application specialist.

    David Hrbáč obtained his Master's Degree in Measurement and Control Engineering at VŠB - Technical University of Ostrava in 1997. Since 1994 he has worked for many IT companies as a system architect and CIO. In 2013 he joined IT4Innovations.

    Lukáš Krupčík obtained his Master's Degree in Computer Science and Technology at VŠB - Technical University of Ostrava in 2017. In 2016 he joined IT4Innovations as an HPC application specialist.

    Lubomír Prda obtained his Master's Degree in Information and Communication Technologies at VŠB - Technical University of Ostrava in 2010. Before joining the IT4Innovations team as an HPC specialist in 2016, he worked at the Tieto Corporation as a network engineer, and later moved to identity and access management for the company's Nordic and international customers. Lubomír's focus is to manage and maintain the centre's back-end IT infrastructure and services.
    events.prace-ri.eu/event/918/
    Oct 16 9:30 to Oct 17 17:00
    Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners – non-PRACE part):
    On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominating programming model. The course gives an introduction into MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI).

    Shared memory parallelization with OpenMP (Tue, for beginners – non-PRACE part):
    The focus is on shared memory parallelization with OpenMP, the key concept on hyper-threading, dual-core, multi-core, shared memory, and ccNUMA platforms. This course teaches shared memory OpenMP parallelization. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.

    Intermediate and advanced topics in parallel programming (Wed-Fri – PRACE course):
    Topics are advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and enables new hybrid programming models. These models are compared in the hybrid mixed model MPI+OpenMP parallelization session with various hybrid MPI+OpenMP approaches and pure MPI. Further aspects are domain decomposition, load balancing, and debugging. Hands-on sessions are included on all days. This course provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves.

    Hands-on sessions are included on all days. This course provides scientific training in Computational Science, and in addition, the scientific exchange of the participants among themselves.

    For further information and registration please visit the HLRS course page.
    events.prace-ri.eu/event/894/
    Oct 14 8:30 to Oct 18 16:30
    The Train the Trainer Program is provided in conjunction with the regular course Parallel Programming with MPI and OpenMP and Advanced Parallel Programming. Whereas the regular course teaches parallel programming, this program is an education for future trainers in parallel programming.
    Too few people can provide parallel programming courses on the level that is needed if scientists and PhD students want to learn how to parallelize a sequential application or to enhance parallel applications. Within Europe, currently only six PATC centres and several other national centres provide such courses on an European or national level. We would like to assist further trainers and centres to also provide such courses for whole Europe or at least within their countries.

    Prerequisites

    You are familiar with parallel programming with MPI and OpenMP on an advanced level and skilled in both programming languages C and Fortran.

    Your goal: You want to provide MPI and OpenMP courses for other scientists and PhD students in your country, i.e., you would like to provide at least the first three days of the regular course as a training block-course to PhD students.

    Background: (a) Your centre supports you to provide such PhD courses in a course room at your centre. The course room is equipped at least with one computer/laptop per two (or three) students and has access to a HPC resource that allows MPI and OpenMP programming in C and Fortran. Or (b), you as a future trainer would like to co-operate with a centre with the necessary course infrastructure.

    What does this Train the Trainer Program provide?


    We provide you with all necessary teaching material on a personal basis, i.e., with the copyright to use it and to provide pdf or paper copies to the students in your PhD courses.
    We provide all exercise material.
    You will listen the lectures that you get familiar with the training material.
    During the exercises, you will help the regular students to correct their errors. The regular students are advised to request help if they were stuck for more than a minute. You will be trained to detect their problems as fast as possible (typically in less than a minute) and to provide the students with the needed help.
     


    The Train the Trainer Program includes the curriculum from Monday until Friday according the course agenda. The Train the Trainer Program starts on Monday with a short introductory meeting at 8:15 am. On Thursday evening we will have an additional meeting and dinner for all participants of this TtT program.

    For further information and registration please visit the HLRS course page.
    events.prace-ri.eu/event/895/
    Oct 14 8:15 to Oct 18 17:00
    The registration to this course is now open. Please, bring your own laptop. All the PATC courses at BSC are free of charge.

    Course Convener: Xavier Martorell

    LOCATION: UPC Campus Nord premises.Vertex Building, Room VS208

    Level: 

    Intermediate: For trainees with some theoretical and practical knowledge, some programming experience.

    Advanced: For trainees able to work independently and requiring guidance for solving complex problems.

    Attendants can bring their own applications and work with them during the course for parallelization and analysis.


    Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C

    Objectives: 

    The objectives of this course are to understand the fundamental concepts supporting message-passing and shared memory programming models. The course covers the two widely used programming models: MPI for the distributed-memory environments, and OpenMP for the shared-memory architectures. The course also presents the main tools developed at BSC to get information and analyze the execution of parallel applications, Paraver and Extrae. It also presents the Parallware Assistant tool, which is able to automatically parallelize a large number of program structures, and provide hints to the programmer with respect to how to change the code to improve parallelization. It deals with debugging alternatives, including the use of GDB and Totalview.

    The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of current compute nodes in clustered architectures is also considered. Paraver will be used along the course as the tool to understand the behavior and performance of parallelized codes. The course is taught using formal lectures and practical/programming sessions to reinforce the key concepts and set up the compilation/execution environment.

    Attendants can bring their own applications and work with them during the course for parallelization and analysis.

    Learning Outcomes:

    The students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behaviour in parallel architectures.

    Agenda:

    Day 1 (Monday)

    Session 1 / 9:30 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to parallel architectures, algorithms design and performance parameters

    2. Introduction to the MPI programming model

    3. Practical: How to compile and run MPI applications

     

    Session 2 / 2:00pm – 5:30 pm (2h lectures, 1h practical)

    1. MPI: Point-to-point communication, collective communication

    2. Practical: Simple matrix computations

    3. MPI: Blocking and non-blocking communications

     

     Day 2 (Tuesday)

    Session 1 / 9:30 am - 1:00 pm (1.5 h lectures, 1.5 h practical)

    1. MPI: Collectives, Communicators, Topologies

    2. Practical: Heat equation example

     

    Session 2 / 2:00 pm - 5:30 pm (1.5 h lectures, 1.5 h practical)

    1. Introduction to Paraver: tool to analyze and understand performance

    2. Practical: Trace generation and trace analysis

     

    Day 3 (Wednesday)

    Session 1 / 9:30 am - 1:00 pm (1 h lecture, 2h practical)

    1. Parallel debugging in MareNostrumIII, options from print to Totalview

    2. Practical: GDB and IDB

    3. Practical: Totalview

    4. Practical: Valgrind for memory leaks

     

    Session 2 / 2:00 pm - 5:30 pm (2 h lectures, 1 h practical)

    1. Shared-memory programming models, OpenMP fundamentals

    2. Parallel regions and work sharing constructs

    3. Synchronization mechanisms in OpenMP

    4. Practical: heat diffusion in OpenMP

     

    Day 4 (Thursday)

    Session 1 / 9:30am – 1:00 pm  (2 h practical, 1 h lectures)

    1. Tasking in OpenMP 3.0/4.0/4.5

    2. Programming using a hybrid MPI/OpenMP approach

    3. Practical: multisort in OpenMP and hybrid MPI/OpenMP

     

    Session 2 / 2:00pm – 5:30 pm (1.5 h lectures, 1.5 h practical)

    1. Parallware: guided parallelization

    2. Practical session with Parallware examples

     

    Day 5 (Friday)

    Session 1 / 9:30 am – 1:00 pm (2 hour lectures, 1 h practical)

    1. Introduction to the OmpSs programming model

    2. Practical: heat equation example and divide-and-conquer

     

    Session 2 / 2:00pm – 5:30 pm (1 h lectures, 2 h practical)

    1. Programming using a hybrid MPI/OmpSs approach

    2. Practical: heat equation example and divide-and-conquer

     

    END of COURSE
    events.prace-ri.eu/event/904/
    Oct 14 9:30 to Oct 18 17:30
    Annotation

    This is the third edition of our popular HPC training focused on expanding your skills and knowledge in using productivity tools and technologies, to quickly build up an efficient HPC user environment, from scratch, without admin rights. We shall demonstrate several very useful tools and methods tailored to the current supercomputing facilities at IT4Innovations, but which are also easily replicable on any HPC system.

    The current topics include:



    IT4Innovations supercomputing ecosystem - state of the art



    In this section, the new IT4Innovations computing systems DGX-2 and the newly upgraded Anselm (newAnselm) as well as the upcoming PROJECT storage will be presented. New technologies available with the DGX-2 and the newAnselm machines will be introduced from a practical standpoint. This includes the new generation processors, the Smart Burst Buffer acceleration to newAnselm's SCRATCH storage, the NVMe storage, the tensor processing capability of the V100 graphics cards, the NVlink and the  Unified memory.  The outlook for the new supercomputers available within the EuroHPC will be given along with updates to user lifecycle policies that accompany the deployment of all the new hardware at IT4Innovations.



     GIT technologies - coordinating work among multiple developers



    GIT is the world's most used Version Control System. Originally designed for development of the Linux kernel, it has evolved into a universal tool for managing changes in code, configuration and documents, especially if those changes are not done by a single person. We will help you understand how GIT works internally and introduce you to basic GIT commands that may cover up to 99 % of daily GIT usage.

    Another section will demonstrate handling of project web pages with GIT. You will learn how to create a web site with GIT and Gitlab Pages. To publish a web site with Pages, you can use any Static Site Generator (SSG), such as MKDocs, Jekyll, Hugo, Middleman, just to name a few. You can also publish any website written directly in plain HTML, CSS, and JavaScript. You can also enable HTTPS on the site.



    HPC containers - paravirtualization technology



    Recently Nvidia chose containers as a software distribution platform for their supercomputer, the DGX-2. We will show you how to use containers using Singularity, convert Docker images, create new containers, and everything else you need to know about a Singularity container in an HPC environment.

    Level

    beginner (40%) - intermediate (50 %) - advanced (10 %)

    Language

    English

    Purpose of the course

    The participants will broaden their range of techniques for efficient use of HPC by mastering modern technologies for code management and execution.

    About the tutors

    The tutors are core members of the Supercomputing Services division of IT4Innovations.

    Branislav Jansík obtained his PhD in computational chemistry at the Royal Institute of Technology, Sweden in 2004. He took a postdoctoral position at IPCF, Consiglio Niazionale delle Ricerche, Italy, to carry on development and applications of high performance computational methods for molecular optical properties. From 2006 he worked on the development of highly parallel optimization methods in the domain of electronic structure theory at Aarhus University, Denmark. In 2012 he joined IT4Innovations, the Czech national supercomputing centre, as the head of Supercomputing Services. He has published over 35 papers and co-authored the DALTON electronic structure theory code.

    Josef Hrabal obtained his Master's Degree in Computer Science and Technology at VŠB - Technical University of Ostrava in 2014. Since then he has contributed to projects within the University, and in 2017 he joined IT4Innovations as an HPC application specialist.

    David Hrbáč obtained his Master's Degree in Measurement and Control Engineering at VŠB - Technical University of Ostrava in 1997. Since 1994 he has worked for many IT companies as a system architect and CIO. In 2013 he joined IT4Innovations.

    Lukáš Krupčík obtained his Master's Degree in Computer Science and Technology at VŠB - Technical University of Ostrava in 2017. In 2016 he joined IT4Innovations as an HPC application specialist.

    Lubomír Prda obtained his Master's Degree in Information and Communication Technologies at VŠB - Technical University of Ostrava in 2010. Before joining the IT4Innovations team as an HPC specialist in 2016, he worked at the Tieto Corporation as a network engineer, and later moved to identity and access management for the company's Nordic and international customers. Lubomír's focus is to manage and maintain the centre's back-end IT infrastructure and services.
    events.prace-ri.eu/event/918/
    Oct 16 9:30 to Oct 17 17:00
    Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners – non-PRACE part):
    On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model. The course gives an introduction to MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI).
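    For orientation, below is a minimal point-to-point example in C of the kind such hands-on sessions typically start from; the payload value and message tag are purely illustrative and are not taken from the course material.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, value;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            if (rank == 0) {
                value = 42;   /* illustrative payload */
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("Rank 1 received %d from rank 0\n", value);
            }

            MPI_Finalize();
            return 0;
        }

    Compiled with an MPI wrapper compiler (e.g. mpicc) and run with at least two processes (e.g. mpirun -np 2 ./a.out), rank 0 sends a single integer to rank 1.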

    Shared memory parallelization with OpenMP (Tue, for beginners – non-PRACE part):
    The focus is on shared memory parallelization with OpenMP, the key concept on hyper-threading, dual-core, multi-core, shared-memory, and ccNUMA platforms. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.
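    As a flavour of the directive-based style taught here, the following minimal C loop uses an OpenMP reduction; the computed quantity is an arbitrary illustration and not part of the course material.

        #include <stdio.h>

        int main(void)
        {
            const int n = 1000000;
            double sum = 0.0;
            int i;

            /* The parallel for directive splits the iterations across threads;
               the reduction clause combines the private partial sums safely. */
            #pragma omp parallel for reduction(+:sum)
            for (i = 0; i < n; i++) {
                sum += 1.0 / (i + 1);
            }

            printf("Partial harmonic sum over %d terms: %f\n", n, sum);
            return 0;
        }

    Built with OpenMP enabled (e.g. gcc -fopenmp), the number of threads can be controlled with the OMP_NUM_THREADS environment variable.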

    Intermediate and advanced topics in parallel programming (Wed-Fri – PRACE course):
    Topics are advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed-model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle-based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbour accesses similar to OpenMP or for direct halo copies, and enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and with pure MPI in the hybrid mixed-model MPI+OpenMP parallelization session. Further aspects are domain decomposition, load balancing, and debugging.
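    As a rough illustration of the hybrid model mentioned above, the following minimal C sketch combines MPI with an OpenMP parallel region; it is not taken from the course material, and MPI_THREAD_FUNNELED is just one common choice of thread support level.

        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int provided, rank;

            /* Request a thread support level: with FUNNELED, only the thread
               that called MPI_Init_thread may make MPI calls. */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            #pragma omp parallel
            {
                printf("MPI rank %d, OpenMP thread %d of %d\n",
                       rank, omp_get_thread_num(), omp_get_num_threads());
            }

            MPI_Finalize();
            return 0;
        }

    Compiled with an MPI wrapper compiler and OpenMP enabled (e.g. mpicc -fopenmp), each MPI rank reports the OpenMP threads it spawns.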

    Hands-on sessions are included on all days. This course provides scientific training in computational science and, in addition, fosters scientific exchange among the participants.

    For further information and registration please visit the HLRS course page.
    events.prace-ri.eu/event/894/
    Oct 14 8:30 to Oct 18 16:30
    The Train the Trainer Program is provided in conjunction with the regular course Parallel Programming with MPI and OpenMP and Advanced Parallel Programming. Whereas the regular course teaches parallel programming, this program is an education for future trainers in parallel programming.
    Too few people can provide parallel programming courses at the level that is needed if scientists and PhD students want to learn how to parallelize a sequential application or to enhance parallel applications. Within Europe, currently only six PATC centres and several other national centres provide such courses at a European or national level. We would like to assist further trainers and centres so that they can also provide such courses for the whole of Europe, or at least within their own countries.

    Prerequisites

    You are familiar with parallel programming with MPI and OpenMP at an advanced level and are skilled in both C and Fortran.

    Your goal: You want to provide MPI and OpenMP courses for other scientists and PhD students in your country, i.e., you would like to provide at least the first three days of the regular course as a training block-course to PhD students.

    Background: (a) Your centre supports you in providing such PhD courses in a course room at your centre. The course room is equipped with at least one computer/laptop per two (or three) students and has access to an HPC resource that allows MPI and OpenMP programming in C and Fortran. Or (b), you as a future trainer would like to co-operate with a centre that has the necessary course infrastructure.

    What does this Train the Trainer Program provide?


    We provide you with all necessary teaching material on a personal basis, i.e., with the copyright to use it and to provide PDF or paper copies to the students in your PhD courses.
    We provide all exercise material.
    You will attend the lectures so that you become familiar with the training material.
    During the exercises, you will help the regular students to correct their errors. The regular students are advised to request help if they are stuck for more than a minute. You will be trained to detect their problems as quickly as possible (typically in less than a minute) and to provide the students with the help they need.
     


    The Train the Trainer Program includes the curriculum from Monday until Friday according to the course agenda. The Train the Trainer Program starts on Monday with a short introductory meeting at 8:15 am. On Thursday evening there will be an additional meeting and dinner for all participants of this TtT program.

    For further information and registration please visit the HLRS course page.
    events.prace-ri.eu/event/895/
    Oct 14 8:15 to Oct 18 17:00
    Registration for this course is now open. Please bring your own laptop. All PATC courses at BSC are free of charge.

    Course Convener: Xavier Martorell

    LOCATION: UPC Campus Nord premises, Vertex Building, Room VS208

    Level: 

    Intermediate: For trainees with some theoretical and practical knowledge and some programming experience.

    Advanced: For trainees able to work independently and requiring guidance for solving complex problems.



    Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C.

    Objectives: 

    The objectives of this course are to understand the fundamental concepts supporting message-passing and shared memory programming models. The course covers the two widely used programming models: MPI for distributed-memory environments and OpenMP for shared-memory architectures. It also presents the main tools developed at BSC for obtaining information about and analyzing the execution of parallel applications, Paraver and Extrae, as well as the Parallware Assistant tool, which can automatically parallelize a large number of program structures and provides hints to the programmer on how to change the code to improve parallelization. The course also deals with debugging alternatives, including the use of GDB and Totalview.

    The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of current compute nodes in clustered architectures is also considered. Paraver will be used along the course as the tool to understand the behavior and performance of parallelized codes. The course is taught using formal lectures and practical/programming sessions to reinforce the key concepts and set up the compilation/execution environment.

    Attendees can bring their own applications and work with them during the course for parallelization and analysis.

    Learning Outcomes:

    The students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behaviour in parallel architectures.

    Agenda:

    Day 1 (Monday)

    Session 1 / 9:30 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to parallel architectures, algorithms design and performance parameters

    2. Introduction to the MPI programming model

    3. Practical: How to compile and run MPI applications

     

    Session 2 / 2:00 pm – 5:30 pm (2 h lectures, 1 h practical)

    1. MPI: Point-to-point communication, collective communication

    2. Practical: Simple matrix computations

    3. MPI: Blocking and non-blocking communications

     

     Day 2 (Tuesday)

    Session 1 / 9:30 am - 1:00 pm (1.5 h lectures, 1.5 h practical)

    1. MPI: Collectives, Communicators, Topologies

    2. Practical: Heat equation example

     

    Session 2 / 2:00 pm - 5:30 pm (1.5 h lectures, 1.5 h practical)

    1. Introduction to Paraver: tool to analyze and understand performance

    2. Practical: Trace generation and trace analysis

     

    Day 3 (Wednesday)

    Session 1 / 9:30 am - 1:00 pm (1 h lecture, 2 h practical)

    1. Parallel debugging in MareNostrum III, options from print to Totalview

    2. Practical: GDB and IDB

    3. Practical: Totalview

    4. Practical: Valgrind for memory leaks

     

    Session 2 / 2:00 pm - 5:30 pm (2 h lectures, 1 h practical)

    1. Shared-memory programming models, OpenMP fundamentals

    2. Parallel regions and work sharing constructs

    3. Synchronization mechanisms in OpenMP

    4. Practical: heat diffusion in OpenMP

     

    Day 4 (Thursday)

    Session 1 / 9:30 am – 1:00 pm (2 h practical, 1 h lectures)

    1. Tasking in OpenMP 3.0/4.0/4.5

    2. Programming using a hybrid MPI/OpenMP approach

    3. Practical: multisort in OpenMP and hybrid MPI/OpenMP

     

    Session 2 / 2:00 pm – 5:30 pm (1.5 h lectures, 1.5 h practical)

    1. Parallware: guided parallelization

    2. Practical session with Parallware examples

     

    Day 5 (Friday)

    Session 1 / 9:30 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to the OmpSs programming model

    2. Practical: heat equation example and divide-and-conquer

     

    Session 2 / 2:00 pm – 5:30 pm (1 h lectures, 2 h practical)

    1. Programming using a hybrid MPI/OmpSs approach

    2. Practical: heat equation example and divide-and-conquer

     

    END of COURSE
    events.prace-ri.eu/event/904/
    Oct 14 9:30 to Oct 18 17:30
    Description

    The course introduces the basics of parallel programming with the message-passing interface (MPI) and OpenMP paradigms. MPI is the dominant parallelization paradigm in high-performance computing and enables one to write programs that run on distributed-memory machines, such as Puhti and Taito. OpenMP is a threading-based approach which enables one to parallelize a program over a single shared-memory machine, such as a single node in Puhti. The course consists of lectures and hands-on exercises on parallel programming.
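    To give a flavour of the MPI part, below is a minimal collective-communication example in C; it is illustrative only and not taken from the course exercises.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, size, global_sum;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* Every rank contributes its own rank number; MPI_Allreduce sums
               the contributions and returns the result to all ranks. */
            MPI_Allreduce(&rank, &global_sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

            printf("Rank %d of %d sees global sum %d\n", rank, size, global_sum);

            MPI_Finalize();
            return 0;
        }

    Such a program would typically be launched through the system's batch scheduler with an MPI launcher such as srun or mpirun.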

    Learning outcome

    After the course the participants should be able to write simple parallel programs and parallelize existing programs with basic features of MPI or OpenMP. This course is also a prerequisite for the PTC course "Advanced Parallel Programming" in 2020.

    Prerequisites

    The participants are assumed to have working knowledge of Fortran and/or C programming languages. In addition, fluent operation in a Linux/Unix environment will be assumed.

    Program

    Day 1, Wednesday 23.10


    09:00-10:30 What is parallel computing
    10:30-10:45 Coffee break
    10:45-11:30 Introduction to MPI
    11:30-12:00 Exercises
    12:00-13:00 Lunch
    13:00-14:00 Point-to-point communication
    14:00-16:00 Exercises

    Day 2, Thursday 24.10


    09:00-09:45 Collective communication
    09:45-10:30 Exercises
    10:30-10:45 Coffee break
    10:45-11:15 User-defined communicators
    11:15-12:00 Exercises
    12:00-13:00 Lunch
    13:00-13:45 Non-blocking communication
    13:45-14:15 Exercises
    14:15-14:30 Coffee break
    14:30-15:15 User-defined data types
    15:15-16:00 Exercises

    Day 3, Friday 25.10


    09:00-09:45 Introduction to OpenMP
    09:45-10:30 Exercises
    10:30-10:45 Coffee break
    10:45-11:15 Work-sharing constructs and reductions
    11:15-12:00 Exercises
    12:00-13:00 Lunch
    13:00-13:45 Synchronization
    13:45-14:30 Exercises
    14:30-14:45 Coffee break
    14:45-15:15 Tasks
    15:15-16:00 Exercises

    Lecturers: 

    Jussi Enkovaara (CSC), Sami Ilvonen (CSC)

    Language:   English
    Price:          Free of charge
    events.prace-ri.eu/event/915/
    Oct 23 8:00 to Oct 25 15:00

    General Information

    Data Carpentry develops and teaches workshops on the fundamental data skills needed to conduct research. Its target audience is researchers who have little to no prior computational experience, and its lessons are domain specific, building on learners' existing knowledge to enable them to quickly apply skills learned to their own research. Participants will be encouraged to help one another and to apply what they have learned to their own research problems.

    For more information on what we teach and why, please see our paper "Good Enough Practices for Scientific Computing".

    Organisers: This workshop is provided by EPCC, Edinburgh Parallel Computing Centre, and organised in collaboration with PRACE and the Software Sustainability Institute.

    PRACE Advanced Training Centres (PATCs) carry out and coordinate training and education activities that enable both European academic researchers and European industry to utilise the computational infrastructure available through PRACE.
    The long-term vision is that PATCs will become the hubs and key drivers of European high-performance computing education.

    The Software Sustainability Institute's mission is to cultivate better, more sustainable research software to enable world-class research (better software, better research). Software is fundamental to research: seven out of ten UK researchers report that their work would be impossible without it.

    Who: The course is aimed at graduate students and other researchers. You don't need to have any previous knowledge of the tools that will be presented at the workshop.

    Where: Julian Hodge Building, Training Room 2, Colum Drive, Cardiff, CF10 3EU
    www.cardiff.ac.uk/visi.....lding

    When: 29-30 October, 2019.

    Requirements: Participants must bring a laptop with a Mac, Linux, or Windows operating system (not a tablet, Chromebook, etc.) that they have administrative privileges on. They should have a few specific software packages installed (listed below). They are also required to abide by Data Carpentry's Code of Conduct.

    Accessibility: We are committed to making this workshop accessible to everybody. The workshop organizers have checked that:


    The room is wheelchair / scooter accessible.
    Accessible restrooms are available.


    Materials will be provided in advance of the workshop, and large-print handouts are available if needed by notifying the organizers in advance. If we can help make learning easier for you (e.g. sign-language interpreters, lactation facilities), please get in touch (using the contact details below) and we will attempt to provide them.

    Trainer


    Juan Rodriguez Herrera

    Juan is involved in teaching ARCHER courses related to HPC, MPI, and Python, among others. He's a certified instructor of Software and Data Carpentry workshops. He also supervises EPCC's MSc dissertation projects.

    Contact: Please email training@epcc.ed.ac.uk for more information.

    Registration: Registration has been closed as the course is full with a long waiting list.


    Surveys

    Please be sure to complete these surveys before and after the workshop.

    Pre-workshop Survey

    Post-workshop Survey

    Further details, including the timetable, syllabus, and setup instructions, are available at hpcarcher.github.io/2019-10-29-cardiff/

     

    events.prace-ri.eu/event/923/
    Oct 29 10:00 to Oct 30 18:30

    With the advance of new technologies, data volumes and the number of files are constantly increasing. Additionally, new regulations (e.g. GDPR) set strict requirements on the storage and use of privacy-sensitive data. Data management has therefore become an essential part of data-driven research.

     

    In this course we will introduce how to manage data efficiently with the data management framework iRODS (integrated Rule-Oriented Data System) and how to build computational pipelines on an HPC infrastructure that make use of this data. Topics in this course will include:

    - Data Life Cycle and FAIR principles: how to make data Findable, Accessible, Interoperable and Reusable

    - iRODS: basic concepts and graphical user interface

    - how to label and search for data in iRODS

    - how to build a computational pipeline that operates on data managed in iRODS.
    events.prace-ri.eu/event/899/
    Oct 30 9:00 to 17:30
    This course provides an introduction to High-Performance Computing (HPC) for researchers in the life sciences, using ARCHER as a platform for hands-on training exercises.

    The course is organised and funded by BioExcel - the Centre of Excellence for Computational Biomolecular Research (bioexcel.eu) and PRACE, and delivered in collaboration with ARCHER - the UK national supercomputing service (archer.ac.uk).

    Overview

    High-performance computing (HPC) is a fundamental technology used to solve a wide range of scientific research problems. Many important challenges in science, such as protein folding, the search for the Higgs boson, drug discovery, and the development of nuclear fusion, all depend on simulations, models, and analyses run on HPC facilities to make progress.
     
    This course introduces HPC to life science researchers, focusing on the aspects that are most important for those new to this technology to understand. It will help you judge how HPC can best benefit your research, and equip you to go on to successfully and efficiently make use of HPC facilities in future. The course will cover basic concepts in HPC hardware, software, user environments, filesystems, and programming models. It also provides an opportunity to gain hands-on practical experience and assistance using an HPC system (ARCHER, the UK national supercomputing service) through examples drawn from the life sciences, such as biomolecular simulation.

    Registration

    Registration is on a first-come, first-served basis via events.prace-ri.eu/eve.....ions/

    Learning outcomes

    On completion of the course, we expect that attendees will understand and be able to explain:
     
        • Why HPC? - What are the drivers and motivation? Who uses it and why?
        • The UK & EU HPC landscape - HPC facilities available to researchers
        • HPC hardware - Building blocks and architectures
        • Parallel computing - Programming models and implementations
        • Using HPC systems
            • Access
            • Batch schedulers & resource allocation
            • Running jobs
            • Dealing with errors
            • Compiling code
            • Using libraries
            • Performance
        • The Future of HPC

    Pre-requisites

    This course follows on naturally from the BioExcel Summer School on Foundation skills for HPC in computational biomolecular research (bioexcel.eu/events/bioexcel-summer-school/)

    Familiarity with basic Linux commands (at the level of being able to navigate a file system) is recommended. You may find a Linux 'cheat sheet' such as www.archer.ac.uk/docume.....ckref useful if you are less familiar with Linux.

    No programming skills or previous HPC experience is required.

    Laptop computers will be available; however, you are encouraged to bring your own laptop (running Windows, Linux, or macOS), as you will find it useful to learn how to set this up to connect to ARCHER (with assistance from course helpers if needed) and perform the hands-on practicals.

    Timetable

    Day 1

    10:00 - Welcome, introduction, and course overview
    What can BioExcel do for me?
    Familiarisation with fellow attendees

    11:00 - LECTURE - What is HPC?
    11:25 - PRACTICAL - Connecting to ARCHER
    11:30 - BREAK - Coffee & Tea
    12:00 - PRACTICAL - Sequence Alignment using HMMER
    13:00 - BREAK - Lunch
    14:00 - LECTURE - Parallel Computing Patterns
    14:30 - LECTURE - Measuring Parallel Performance
    15:00 - PRACTICAL - Sequence Alignment using HMMER
    15:30 - BREAK - Coffee & Tea
    16:00 - PRACTICAL - Sequence alignment using HMMER
    16:15 - LECTURE - Building Blocks - Software (Operating System, Processes and Threads)
    16:45 - LECTURE - Building Blocks - Hardware (Processors/CPUs/cores, Memory, Accelerators)
    17:15 - Review of the day
    17:30 - Finish

    Day 2

    9:30 - Summary of day 1
    9:45 - LECTURE - Parallel Models
    10:30 - PRACTICAL - Molecular Dynamics using GROMACS
    11:00 - BREAK - Coffee & Tea
    11:30 - PRACTICAL - Molecular Dynamics using GROMACS
    12:00 - LECTURE - HPC Architectures
    12:30 - LECTURE - Batch Systems & Parallel Application Launchers
    13:00 - BREAK - Lunch
    14:00 - PRACTICAL - Molecular Dynamics using GROMACS
    14:30 - LECTURE - Compilers and Building Software
    15:00 - BREAK - Coffee & Tea
    15:30 - PRACTICAL - QM/MM simulation using CP2K
    16:30 - LECTURE - Parallel libraries
    17:00 - Review of the day
    17:15 - Finish

    Day 3

    9:30 - Summary of day 2
    9:45 - LECTURE - Pipelines and workflows
    10:15 - PRACTICAL - QM/MM simulation using CP2K
    11:00 - LECTURE - The UK & EU HPC Landscape
    11:30 - BREAK - Coffee & Tea
    12:00 - LECTURE - The Future of HPC
    12:30 - LECTURE - "Where next?" and things to remember
    13:00 - Lunch
    14:00 - Individual consultations, course review and feedback survey
    15:00 - Finish

    Course Materials

    www.archer.ac.uk/train.....x.php

    Course Dinner

    A course dinner will be scheduled for the Wednesday or Thursday evening and advertised at a later date.

    Travel Grant

    BioExcel will be providing a limited number of fixed amount travel bursaries for this event. If you would like to be considered for a travel bursary, application instructions as well as eligibility criteria and conditions for the travel grants are available through the link below.

    events.prace-ri.eu/eve.....C.pdf

    Deadline for Travel Grant applications is 30th September 2019.

    If you have any questions about the travel grants, please email Michelle Mendonca (info@bioexcel.eu).

    Accommodation

    Participants are responsible for booking their own accommodation and travel.

    On-campus accommodation can be found at conferences.bham.ac.uk.....mpus/. Hotels in the centre of Birmingham are also an option as the University's Edgbaston campus (closest train station "University") is only a short (~10 minute) train ride from Birmingham New Street station.
    events.prace-ri.eu/event/840/
    Oct 30 11:00 to Nov 1 16:00
    Apache Spark is an open-source framework for cluster computing that is designed for performance and ease of use, and it is ideal for large-scale parallel data processing. It is faster and simpler to use than Hadoop MapReduce, and it provides a rich set of APIs in Python, Java, and Scala.

    This hands-on course will cover the following topics:


    Introduction to Spark
    Map, Filter and Reduce
    Running on a Spark Cluster
    Key-value pairs
    Correlations, logistic regression
    Decision trees, K-means


    Sessions

    10:00 - 17:30 (Thu)
    10:00 - 15:30 (Fri)

    Attendees will be provided with access to EPCC's Tier2 Cirrus system for all practical exercises.

    The practicals will be done using Jupyter notebooks so a basic knowledge of Python would be extremely useful.

    Registration: Registration has been closed as the course is full with a long waiting list.

    Timetable

    Full timetable and course materials to follow

    Course materials from a previous run of this course. 

     
    events.prace-ri.eu/event/922/
    Oct 31 11:00 to Nov 1 18:30
     

     

