• PATCs and PTCs

  • PRACE has operated six PRACE Advanced Training Centres (PATCs) since 2012, and together they have established a state-of-the-art curriculum for training in HPC and scientific computing. The PATCs carry out and coordinate training and education activities that enable both European academic researchers and European industry to utilise the computational infrastructure available through PRACE, and they provide top-class education and training opportunities for computational scientists in Europe.

    The six PRACE Advanced Training Centres (PATCs) are based at:

    • Barcelona Supercomputing Center (Spain)
    • Consorzio Interuniversitario, CINECA (Italy)
    • CSC – IT Center for Science Ltd (Finland)
    • EPCC at the University of Edinburgh (UK)
    • Gauss Centre for Supercomputing (Germany)
    • Maison de la Simulation (France)

    In addition to operating the PATCs, four PRACE Training Centres (PTCs) will be piloted. The PTCs will expand the geographical reach of the PATCs by sourcing PATC courses locally, by collaborating with PATCs to deliver courses locally, or by complementing the PATC programme with local courses.

    The four selected PRACE Training Centres (PTCs) are based at:

    • GRNET – Greek Research and Technology Network (Greece)
    • ICHEC – Irish Centre for High-End Computing (Ireland)
    • IT4I – National Supercomputing Center VSB Technical University of Ostrava (Czech Republic)
    • SURFsara (The Netherlands)

    The following figure depicts the locations of the PATC and PTC centres throughout Europe.

    PATC events this month:

    October 2017
     
    Description

    The course introduces the basics of parallel programming with the message-passing interface (MPI) and OpenMP paradigms. MPI is the dominant parallelization paradigm in high-performance computing and enables one to write programs that run on distributed-memory machines, such as Sisu and Taito. OpenMP is a threading-based approach that enables one to parallelize a program over a single shared-memory machine, such as a single node of Taito. The course consists of lectures and hands-on exercises on parallel programming.
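
    As a quick taste of the two paradigms described above, the following minimal C sketch combines an MPI rank query with an OpenMP parallel region. It is an illustrative example only, not part of the course material; the build command in the comment assumes a typical MPI installation with compiler wrappers.

        /* Minimal MPI + OpenMP example: MPI processes across nodes,
         * OpenMP threads within a node.
         * Typical build: mpicc -fopenmp hello.c -o hello
         * Typical run:   mpirun -np 4 ./hello                     */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);                /* start the MPI runtime     */
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process        */
            MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

            #pragma omp parallel                   /* spawn a team of threads   */
            printf("Hello from thread %d of %d on rank %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads(), rank, size);

            MPI_Finalize();                        /* shut down MPI             */
            return 0;
        }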

    Learning outcome

    After the course the participants should be able to write simple parallel programs and parallelize existing programs with the basic features of MPI or OpenMP. This course is also a prerequisite for the PATC course "Advanced Parallel Programming" (February 2018).

    Prerequisites

    The participants are assumed to have a working knowledge of the Fortran and/or C programming languages. In addition, fluency in working in a Linux/Unix environment is assumed.

    Agenda

    Day 1, Monday 9.10

       09.00 – 10.30    What is parallel computing?
       10.30 – 10.45    Coffee break
       10.45 – 11.30    OpenMP basic concepts
       11.30 – 12.00    Exercises
       12.00 – 13.00    Lunch
       13.00 – 13.30    Work-sharing constructs
       13.30 – 14.00    Exercises
       14.00 – 14.30    Execution control, library functions
       14.30 – 14.45    Coffee break
       14.45 – 15.30    Exercises
       15.30 – 15.45    Wrap-up and further topics
       15.45 – 16.00    Q&A, exercises walkthrough
    Day 2, Tuesday 10.10

       09.00 – 09.40    Introduction to MPI
       09.40 – 10.00    Exercises
       10.00 – 10.30    Point-to-point communication
       10.30 – 10.45    Coffee break
       10.45 – 12.00    Exercises
       12.00 – 13.00    Lunch
       13.00 – 13.45    Collective operations
       13.45 – 14.30    Exercises
       14.30 – 14.45    Coffee break
       14.45 – 15.45    Exercises
       15.45 – 16.00    Q&A, exercises walkthrough
    Day 3, Wednesday 11.10

       09.00 – 09.30    User-defined communicators
       09.30 – 10.30    Exercises
       10.30 – 10.45    Coffee break
       10.45 – 11.30    Non-blocking communication
       11.30 – 12.00    Exercises
       12.00 – 13.00    Lunch
       13.00 – 13.45    Exercises
       13.45 – 14.30    User-defined datatypes
       14.30 – 14.45    Coffee break
       14.45 – 15.45    Exercises
       15.45 – 16.00    Q&A, exercises walkthrough
    Lecturers: 

    Sebastian von Alfthan (CSC), Pekka Manninen (CSC)

    Language: English
    Price: Free of charge

    https://events.prace-ri.eu/event/657/
    Oct 9 8:00 to Oct 11 15:00
     
    The Train the Trainer Program is provided in conjunction with the regular course Parallel Programming with MPI and OpenMP and Advanced Parallel Programming. Whereas the regular course teaches parallel programming, this program is an education for future trainers in parallel programming.
    Too few people can provide parallel programming courses at the level that is needed if scientists and PhD students want to learn how to parallelize a sequential application or to enhance parallel applications. Within Europe, currently only the six PATC centres and several other national centres provide such courses at a European or national level. We would like to assist further trainers and centres so that they can also provide such courses for the whole of Europe, or at least within their own countries.

    Prerequisites

    You are familiar with parallel programming with MPI and OpenMP at an advanced level and are skilled in both the C and Fortran programming languages.

    Your goal: You want to provide MPI and OpenMP courses for other scientists and PhD students in your country, i.e., you would like to provide at least the first three days of the regular course as a block course for PhD students.

    Background: (a) Your centre supports you in providing such PhD courses in a course room at your centre. The course room is equipped with at least one computer/laptop per two (or three) students and has access to an HPC resource that allows MPI and OpenMP programming in C and Fortran. Or (b) you, as a future trainer, would like to co-operate with a centre that has the necessary course infrastructure.

    What does this Train the Trainer Program provide?

    • We provide you with all necessary teaching material on a personal basis, i.e., with the copyright to use it and to provide PDF or paper copies to the students in your PhD courses.
    • We provide all exercise material.
    • You will attend the lectures so that you become familiar with the training material.
    • During the exercises, you will help the regular students correct their errors. The regular students are advised to request help if they are stuck for more than a minute. You will be trained to detect their problems as quickly as possible (typically in less than a minute) and to provide the students with the help they need.
     
    The Train the Trainer Program includes the curriculum from Monday until Friday according to the course agenda. It starts on Monday with a short introductory meeting at 8:15 am. On Thursday evening there will be an additional meeting and dinner for all participants of this TtT program.

    For further information and registration please visit the HLRS course page.

    https://events.prace-ri.eu/event/630/
    Oct 16 8:15 to Oct 20 16:30
    Distributed memory parallelization with the Message Passing Interface MPI (Mon, for beginners – non-PATC part):
    On clusters and distributed-memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model. The course gives an introduction to MPI-1. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI).
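
    To give a flavour of the basic constructs covered in this introduction, here is a minimal, illustrative sketch (not taken from the course material) of blocking point-to-point communication in C, where rank 0 sends one integer to rank 1:

        /* Minimal MPI point-to-point example: rank 0 sends an integer to rank 1. */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, value = 0;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            if (rank == 0) {
                value = 42;
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);     /* to rank 1, tag 0 */
            } else if (rank == 1) {
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);                            /* from rank 0      */
                printf("rank 1 received %d\n", value);
            }

            MPI_Finalize();
            return 0;
        }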

    Shared memory parallelization with OpenMP (Tue, for beginners – non-PATC part):
    The focus is on shared-memory parallelization with OpenMP, the key concept on hyper-threading, dual-core, multi-core, shared-memory, and ccNUMA platforms. This course teaches shared-memory OpenMP parallelization. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the directives and other interfaces of OpenMP. Race-condition debugging tools are also presented.
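
    The directive style taught here, and the kind of race condition those debugging tools target, can be pictured with a minimal, illustrative C loop (not a course exercise); without the reduction clause, the concurrent updates of sum would be a data race.

        /* OpenMP work-sharing with a reduction: each thread sums a chunk of the
         * loop into a private partial sum, and OpenMP combines the partial sums.
         * Omitting reduction(+:sum) would turn the update into a data race.    */
        #include <omp.h>
        #include <stdio.h>

        #define N 1000000

        int main(void)
        {
            static double a[N];
            double sum = 0.0;

            for (int i = 0; i < N; i++) a[i] = 1.0;   /* test data */

            #pragma omp parallel for reduction(+:sum)
            for (int i = 0; i < N; i++)
                sum += a[i];

            printf("sum = %.1f (expected %d)\n", sum, N);
            return 0;
        }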

    Intermediate and advanced topics in parallel programming (Wed-Fri – PATC course):
    Topics are advanced usage of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed-model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle-based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared-memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbour accesses similar to OpenMP or for direct halo copies, and it enables new hybrid programming models. These models are compared in the hybrid mixed-model MPI+OpenMP parallelization session with various hybrid MPI+OpenMP approaches and pure MPI. Further aspects are domain decomposition, load balancing, and debugging.
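
    The hybrid mixed-model idea compared in this part can be sketched as follows; this is a minimal, assumed-typical funneled example (only the master thread calls MPI), not code from the course.

        /* Hybrid MPI+OpenMP skeleton (funneled model): each MPI process runs an
         * OpenMP thread team for the node-local work, then the master thread
         * participates in an MPI reduction across processes.                   */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int provided, rank;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            double local = 0.0, global = 0.0;

            #pragma omp parallel for reduction(+:local)  /* threaded local work */
            for (int i = 0; i < 100000; i++)
                local += 1.0;

            /* Outside the parallel region only the master thread is active,
             * as required by MPI_THREAD_FUNNELED. */
            MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

            if (rank == 0)
                printf("global sum = %.1f\n", global);

            MPI_Finalize();
            return 0;
        }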

    Hands-on sessions are included on all days. This course provides scientific training in Computational Science and, in addition, fosters the scientific exchange of the participants among themselves.

    For further information and registration please visit the HLRS course page.

    https://events.prace-ri.eu/event/629/
    Oct 16 8:30 to Oct 20 16:30
     
    Description:

    Marconi is the new CINECA Tier-0 class supercomputer, based on the next generation of the Intel® Xeon Phi™ product family ("Knights Landing") alongside the Intel® Xeon® processor E5-2600 v4 product family. It has been co-designed by Cineca on the Lenovo NeXtScale architecture.

    Its full deployment will take place in three steps:

    The first step started in mid-2016 and involved an architecture based on the Intel Xeon E5-2697 v4 (Broadwell). The second added Knights Landing to the mix and was carried out between the end of 2016 and the beginning of 2017. The third and final step, based on Broadwell's successor Intel Skylake, will be completed by mid-2017 and will let Marconi reach an estimated peak performance of about 20 Pflop/s, with a storage capacity of 20 petabytes, while maintaining low energy consumption.

    This course intends to support the scientific community in efficiently exploiting the Marconi system. More precisely, the course aims to provide a full description of the Marconi configuration at Cineca, with special emphasis on the aspects most crucial for users and application developers. For instance, details about compilation, debugging and optimization procedures will be provided, together with an overview of the libraries, tools and applications currently available on the system. Examples of submission jobs will be discussed, together with scheduler (PBS) commands and queue definitions.

    The 2017 editions of the course will focus on the next generation of the Intel® Xeon Phi™ product family available on Marconi.

    NOTE: In this edition of the course, emphasis will be given to the latest partition of Marconi based on Intel Skylake architecture.

    Topics: 

    • Overview of Intel's "Knights Landing" (KNL) processors on Marconi and software (hardware components, network and partitioning, type of nodes and software stack).
    • Developing applications for KNL resources on Marconi (compilers, libraries, available debugging and profiling tools).
    • Running and monitoring jobs on KNL resources on Marconi (modules environment @ CINECA, PBS queueing system, job script examples).

    Target audience: 

    Users and developers on the MARCONI Tier-0 system.

    Pre-requisites: 

    Basic knowledge of Linux/UNIX.
     

    Grant
    Lunch will be offered to all participants, and some grants are available. The only requirements for eligibility are that you are not funded by your institution to attend the course and that you work or live at an institute outside the Rome area. The grant will be 100 euros for students working and living outside Italy and 50 euros for students working and living in Italy. Some documentation will be required, and the grant will be paid only after certified attendance of 100% of the lectures.

    Further information about how to request the grant will be provided in the confirmation email for the course.

     

    https://events.prace-ri.eu/event/658/
    Oct 23 9:00 to 17:00
    Registration for this course is now open. Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Course Convener: Xavier Martorell

    LOCATION: UPC Campus Nord premises. Building C6, rooms 106 and 101

    Level: 

    Intermediate: For trainees with some theoretical and practical knowledge, some programming experience.

    Advanced: For trainees able to work independently and requiring guidance for solving complex problems.

    Attendees can bring their own applications and work with them during the course for parallelization and analysis.

    Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C.

    Objectives: The objectives of this course are to understand the fundamental concepts supporting message-passing and shared-memory programming models. The course covers the two widely used programming models: MPI for distributed-memory environments and OpenMP for shared-memory architectures. The course also presents the main tools developed at BSC to obtain information about and analyze the execution of parallel applications, Paraver and Extrae. It also presents the Parallware Assistant tool, which is able to automatically parallelize a large number of program structures and to provide hints to the programmer on how to change the code to improve parallelization. It deals with debugging alternatives, including the use of GDB and Totalview. The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of current compute nodes in clustered architectures is also considered. Paraver will be used throughout the course as the tool to understand the behaviour and performance of parallelized codes. The course is taught using formal lectures and practical/programming sessions to reinforce the key concepts and to set up the compilation/execution environment.


    Learning Outcomes:

    The students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behaviour on parallel architectures.
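
    As an indication of the level of the tasking material on Day 4, the following minimal OpenMP tasking sketch (an illustrative recursive sum, not one of the course exercises) shows the task/taskwait pattern also used in divide-and-conquer codes such as multisort.

        /* Recursive divide-and-conquer sum with OpenMP tasks: each level spawns
         * a task per half-range, and taskwait joins them before combining.     */
        #include <omp.h>
        #include <stdio.h>

        #define N 100000

        static long sum(const long *v, int lo, int hi)
        {
            if (hi - lo < 1000) {                 /* small range: sum serially  */
                long s = 0;
                for (int i = lo; i < hi; i++) s += v[i];
                return s;
            }
            int mid = lo + (hi - lo) / 2;
            long left, right;
            #pragma omp task shared(left)
            left = sum(v, lo, mid);
            #pragma omp task shared(right)
            right = sum(v, mid, hi);
            #pragma omp taskwait                  /* wait for both child tasks  */
            return left + right;
        }

        int main(void)
        {
            static long v[N];
            for (int i = 0; i < N; i++) v[i] = 1;

            long total = 0;
            #pragma omp parallel
            #pragma omp single                    /* one thread starts the recursion */
            total = sum(v, 0, N);

            printf("total = %ld (expected %d)\n", total, N);
            return 0;
        }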

    Agenda:

     

    Day 1 (Monday)

    Session 1 / 10:00 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to parallel architectures, algorithms design and performance parameters

    2. Introduction to the MPI programming model

    3. Practical: How to compile and run MPI applications

    Session 2 / 2:00pm – 5:00 pm (2h lectures, 1h practical)

    1. MPI: Point-to-point communication, collective communication

    2. Practical: Simple matrix computations

    3. MPI: Blocking and non-blocking communications

     

    Day 2 (Tuesday)

    Session 1 / 10:00 am - 1:00 pm (1.5 h lectures, 1.5 h practical)

    1. MPI: Collectives, Communicators, Topologies

    2. Practical: Heat equation example

    Session 2 / 2:00 pm - 5:00 pm (1.5 h lectures, 1.5 h practical)

    1. Introduction to Paraver: tool to analyze and understand performance

    2. Practical: Trace generation and trace analysis

     

    Day 3 (Wednesday)

    Session 1 / 10:00 am - 1:00 pm (1 h lecture, 2h practical)

    1. Parallel debugging in MareNostrumIII, options from print to Totalview

    2. Practical: GDB and IDB

    3. Practical: Totalview

    4. Practical: Valgrind for memory leaks

    Session 2 / 2:00 pm - 5:00 pm (2 h lectures, 1 h practical)

    1. Shared-memory programming models, OpenMP fundamentals

    2. Parallel regions and work sharing constructs

    3. Synchronization mechanisms in OpenMP

    4. Practical: heat diffusion in OpenMP

     

    Day 4 (Thursday)

    Session 1 / 10:00am – 1:00 pm (2 h practical, 1 h lectures)

    1. Tasking in OpenMP 3.0/4.0/4.5

    2. Programming using a hybrid MPI/OpenMP approach

    3. Practical: multisort in OpenMP and hybrid MPI/OpenMP

    Session 2 / 2:00pm – 5:00 pm (1.5 h lectures, 1.5 h practical)

    1. Parallware: guided parallelization

    2. Practical session with Parallware examples

     

    Day 5 (Friday)

    Session 1 / 10:00 am – 1:00 pm (2 hour lectures, 1 h practical)

    1. Introduction to the OmpSs programming model

    2. Practical: heat equation example and divide-and-conquer

    Session 2 / 2:00pm – 5:00 pm (1 h lectures, 2 h practical)

    1. Programming using a hybrid MPI/OmpSs approach

    2. Practical: heat equation example and divide-and-conquer

     

    END of COURSE

     

    https://events.prace-ri.eu/event/640/
    Oct 23 10:00 to Oct 27 17:00
    Presentation of the new OpenPOWER/NVIDIA prototype recently installed at IDRIS (see http://www.idris.fr/ouessant/ for details on the installed machine).

    The training will introduce prospective users to this innovative architecture. It will present the programming models and tools available, as well as highlight best practices, so as to obtain optimal performance when porting applications.

    Preliminary program

    • Welcome
    • OpenPOWER IBM P8+ architecture
    • New NVIDIA Tesla P100 GPU architecture
    • Programming models & software stack
    • Scientific libraries and "GPU-aware" runtimes
    • OpenACC on Ouessant2 [+ Hands-On] (see the sketch below)
    • OpenMP on Ouessant2 [+ Hands-On]
    • Conclusion
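
    To give a flavour of the directive-based OpenACC model listed above, here is a minimal, illustrative C sketch (assumed-typical usage, not taken from the hands-on material); with an OpenACC compiler the loop is offloaded to the GPU, and the data clauses describe the required transfers.

        /* Minimal OpenACC example: offload a vector update to the accelerator. */
        #include <stdio.h>

        #define N (1 << 20)

        int main(void)
        {
            static float x[N], y[N];

            for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

            #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
            for (int i = 0; i < N; i++)
                y[i] = y[i] + 2.0f * x[i];        /* simple axpy-style update */

            printf("y[0] = %.1f (expected 4.0)\n", y[0]);
            return 0;
        }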


    https://events.prace-ri.eu/event/665/
    Oct 24 9:30 to Oct 26 12:00
    The registration to this course is now open. Please, bring your own laptop. All the PATC courses at BSC are free of charge.

    Course Convener: Xavier Martorell

    LOCATION: UPC Campus Nord premises.Building C6, room 106 and 101

    Level: 

    Intermediate: For trainees with some theoretical and practical knowledge, some programming experience.

    Advanced: For trainees able to work independently and requiring guidance for solving complex problems.

    Attendants can bring their own applications and work with them during the course for parallelization and analysis.

    Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C

    Objectives:  The objectives of this course are to understand the fundamental concepts supporting message-passing and shared memory programming models. The course covers the two widely used programming models: MPI for the distributed-memory environments, and OpenMP for the shared-memory architectures. The course also presents the main tools developed at BSC to get information and analyze the execution of parallel applications, Paraver and Extrae. It also presents the Parallware Assistant tool, which is able to automatically parallelize a large number of program structures, and provide hints to the programmer with respect to how to change the code to improve parallelization. It deals with debugging alternatives, including the use of GDB and Totalview. The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of current compute nodes in clustered architectures is also considered. Paraver will be used along the course as the tool to understand the behavior and performance of parallelized codes. The course is taught using formal lectures and practical/programming sessions to reinforce the key concepts and set up the compilation/execution environment..

    Attendants can bring their own applications and work with them during the course for parallelization and analysis.

    Learning Outcomes:

    The students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behaviour in parallel architectures.

    Agenda:

     

    Day 1 (Monday)

    Session 1 / 10:00 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to parallel architectures, algorithms design and performance parameters

    2. Introduction to the MPI programming model

    3. Practical: How to compile and run MPI applications

    Session 2 / 2:00pm – 5:00 pm (2h lectures, 1h practical)

    1. MPI: Point-to-point communication, collective communication

    2. Practical: Simple matrix computations

    3. MPI: Blocking and non-blocking communications

     

    Day 2 (Tuesday)

    Session 1 / 10:00 am - 1:00 pm (1.5 h lectures, 1.5 h practical)

    1. MPI: Collectives, Communicators, Topologies

    2. Practical: Heat equation example

    Session 2 / 2:00 pm - 5:00 pm (1.5 h lectures, 1.5 h practical)

    1. Introduction to Paraver: tool to analyze and understand performance

    2. Practical: Trace generation and trace analysis

     

    Day 3 (Wednesday)

    Session 1 / 10:00 am - 1:00 pm (1 h lecture, 2h practical)

    1. Parallel debugging in MareNostrumIII, options from print to Totalview

    2. Practical: GDB and IDB

    3. Practical: Totalview

    4. Practical: Valgrind for memory leaks

    Session 2 / 2:00 pm - 5:00 pm (2 h lectures, 1 h practical)

    1. Shared-memory programming models, OpenMP fundamentals

    2. Parallel regions and work sharing constructs

    3. Synchronization mechanisms in OpenMP

    4. Practical: heat diffusion in OpenMP

     

    Day 4 (Thursday)

    Session 1 / 10:00am – 1:00 pm (2 h practical, 1 h lectures)

    1. Tasking in OpenMP 3.0/4.0/4.5

    2. Programming using a hybrid MPI/OpenMP approach

    3. Practical: multisort in OpenMP and hybrid MPI/OpenMP

    Session 2 / 2:00pm – 5:00 pm (1.5 h lectures, 1.5 h practical)

    1. Parallware: guided parallelization

    2. Practical session with Parallware examples

     

    Day 5 (Friday)

    Session 1 / 10:00 am – 1:00 pm (2 hour lectures, 1 h practical)

    1. Introduction to the OmpSs programming model

    2. Practical: heat equation example and divide-and-conquer

    Session 2 / 2:00pm – 5:00 pm (1 h lectures, 2 h practical)

    1. Programming using a hybrid MPI/OmpSs approach

    2. Practical: heat equation example and divide-and-conquer

     

    END of COURSE

     

    https://events.prace-ri.eu/event/640/
    Oct 23 10:00 to Oct 27 17:00
    Presentation of the new Open Power - Nvidia prototype recently installed at Idris (see http://www.idris.fr/ouessant/ for details on the installed machine)

    The training will introduce propspective users to this innovative architecture. It will present the programming models and tools available, as well as hilight the best practices, so as to obtain optimal performance when porting applications. 

    Preliminary program

    Welcome 
    OpenPOWER IBM P8+ architecture 
    New Nvidia GPU Tesla P100 architecture  
    Programing models & Software stack 
    Scientific libraries and  "GPU-Aware" runtimes 
    OpenACC on Ouessant2 [+ Hands-On] 
    OpenMP on Ouessant2 [+ Hands-On] 
    Conclusion  


    https://events.prace-ri.eu/event/665/
    Oct 24 9:30 to Oct 26 12:00
    The registration to this course is now open. Please, bring your own laptop. All the PATC courses at BSC are free of charge.

    Course Convener: Xavier Martorell

    LOCATION: UPC Campus Nord premises.Building C6, room 106 and 101

    Level: 

    Intermediate: For trainees with some theoretical and practical knowledge, some programming experience.

    Advanced: For trainees able to work independently and requiring guidance for solving complex problems.

    Attendants can bring their own applications and work with them during the course for parallelization and analysis.

    Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C

    Objectives:  The objectives of this course are to understand the fundamental concepts supporting message-passing and shared memory programming models. The course covers the two widely used programming models: MPI for the distributed-memory environments, and OpenMP for the shared-memory architectures. The course also presents the main tools developed at BSC to get information and analyze the execution of parallel applications, Paraver and Extrae. It also presents the Parallware Assistant tool, which is able to automatically parallelize a large number of program structures, and provide hints to the programmer with respect to how to change the code to improve parallelization. It deals with debugging alternatives, including the use of GDB and Totalview. The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of current compute nodes in clustered architectures is also considered. Paraver will be used along the course as the tool to understand the behavior and performance of parallelized codes. The course is taught using formal lectures and practical/programming sessions to reinforce the key concepts and set up the compilation/execution environment..

    Attendants can bring their own applications and work with them during the course for parallelization and analysis.

    Learning Outcomes:

    The students who finish this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behaviour in parallel architectures.

    Agenda:

     

    Day 1 (Monday)

    Session 1 / 10:00 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to parallel architectures, algorithms design and performance parameters

    2. Introduction to the MPI programming model

    3. Practical: How to compile and run MPI applications

    Session 2 / 2:00 pm – 5:00 pm (2 h lectures, 1 h practical)

    1. MPI: Point-to-point communication, collective communication (a minimal send/receive sketch follows this session's topics)

    2. Practical: Simple matrix computations

    3. MPI: Blocking and non-blocking communications
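
    For orientation, a minimal point-to-point sketch of the kind covered in this session (illustrative only; the message contents and rank count are arbitrary).

    /* Blocking point-to-point sketch: rank 0 sends an integer to rank 1.
     * Run with at least 2 ranks, e.g. mpirun -np 2 ./p2p */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }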

     

    Day 2 (Tuesday)

    Session 1 / 10:00 am - 1:00 pm (1.5 h lectures, 1.5 h practical)

    1. MPI: Collectives, Communicators, Topologies (a small collectives sketch follows this session's topics)

    2. Practical: Heat equation example
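
    A small, hedged sketch of the collective operations named above (illustrative only; the broadcast parameter and per-rank work are invented).

    /* Collectives sketch: broadcast a parameter from rank 0, then sum a
     * per-rank partial result onto rank 0 with MPI_Reduce. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nranks, niters = 0, local, total = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        if (rank == 0) niters = 100;                 /* parameter chosen at root */
        MPI_Bcast(&niters, 1, MPI_INT, 0, MPI_COMM_WORLD);

        local = niters * (rank + 1);                 /* dummy per-rank work */
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) printf("total = %d\n", total);
        MPI_Finalize();
        return 0;
    }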

    Session 2 / 2:00 pm - 5:00 pm (1.5 h lectures, 1.5 h practical)

    1. Introduction to Paraver: tool to analyze and understand performance

    2. Practical: Trace generation and trace analysis

     

    Day 3 (Wednesday)

    Session 1 / 10:00 am - 1:00 pm (1 h lecture, 2h practical)

    1. Parallel debugging on MareNostrum III: options from print statements to Totalview

    2. Practical: GDB and IDB

    3. Practical: Totalview

    4. Practical: Valgrind for memory leaks

    Session 2 / 2:00 pm - 5:00 pm (2 h lectures, 1 h practical)

    1. Shared-memory programming models, OpenMP fundamentals

    2. Parallel regions and work sharing constructs (a work-sharing sketch follows this session's topics)

    3. Synchronization mechanisms in OpenMP

    4. Practical: heat diffusion in OpenMP
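
    A minimal sketch of a work-sharing construct applied to a 1D diffusion sweep (illustrative only and much simpler than the course's heat practical; the grid size and coefficient are arbitrary).

    /* One-dimensional Jacobi-style diffusion parallelised with
     * "omp parallel for". Build e.g. with: gcc -fopenmp heat1d.c */
    #include <stdio.h>

    #define N 1000

    int main(void)
    {
        static double u[N + 2], unew[N + 2];   /* zero-initialised */
        const double r = 0.25;                 /* diffusion coeff * dt / dx^2 */

        u[0] = 1.0;                            /* fixed boundary condition */

        for (int step = 0; step < 1000; step++) {
            /* each thread updates a contiguous chunk of interior points */
            #pragma omp parallel for
            for (int i = 1; i <= N; i++)
                unew[i] = u[i] + r * (u[i - 1] - 2.0 * u[i] + u[i + 1]);

            #pragma omp parallel for
            for (int i = 1; i <= N; i++)
                u[i] = unew[i];
        }

        printf("u[1] = %f\n", u[1]);
        return 0;
    }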

     

    Day 4 (Thursday)

    Session 1 / 10:00 am – 1:00 pm (2 h practical, 1 h lectures)

    1. Tasking in OpenMP 3.0/4.0/4.5 (a small tasking sketch follows this session's topics)

    2. Programming using a hybrid MPI/OpenMP approach

    3. Practical: multisort in OpenMP and hybrid MPI/OpenMP
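
    A small, hedged sketch of OpenMP tasking in a divide-and-conquer setting (illustrative only, not the course's multisort code; the cutoff and array size are arbitrary).

    /* Divide-and-conquer sum: each half of the array becomes a task. */
    #include <stdio.h>

    #define CUTOFF 1000

    static long sum(const int *a, int n)
    {
        if (n < CUTOFF) {                 /* serial base case avoids tiny tasks */
            long s = 0;
            for (int i = 0; i < n; i++) s += a[i];
            return s;
        }

        long s1, s2;
        #pragma omp task shared(s1)
        s1 = sum(a, n / 2);
        #pragma omp task shared(s2)
        s2 = sum(a + n / 2, n - n / 2);
        #pragma omp taskwait              /* wait for both halves */
        return s1 + s2;
    }

    int main(void)
    {
        static int a[100000];
        for (int i = 0; i < 100000; i++) a[i] = 1;

        long total;
        #pragma omp parallel
        #pragma omp single                /* one thread creates the initial tasks */
        total = sum(a, 100000);

        printf("total = %ld\n", total);   /* expect 100000 */
        return 0;
    }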

    Session 2 / 2:00 pm – 5:00 pm (1.5 h lectures, 1.5 h practical)

    1. Parallware: guided parallelization

    2. Practical session with Parallware examples

     

    Day 5 (Friday)

    Session 1 / 10:00 am – 1:00 pm (2 h lectures, 1 h practical)

    1. Introduction to the OmpSs programming model (a small task-dependence sketch follows this session's topics)

    2. Practical: heat equation example and divide-and-conquer
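
    A minimal, hedged sketch of OmpSs-style task dependences (illustrative only, not the course's heat example; it assumes the Mercurium compiler, e.g. "mcc --ompss deps.c").

    /* Two tasks ordered through an in/out data dependence on "a". */
    #include <stdio.h>

    int main(void)
    {
        int a = 0, b = 0;

        #pragma omp task out(a)           /* producer task */
        a = 42;

        #pragma omp task in(a) out(b)     /* runs only after the producer */
        b = a + 1;

        #pragma omp taskwait              /* wait for all tasks before reading b */
        printf("b = %d\n", b);            /* expect 43 */
        return 0;
    }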

    Session 2 / 2:00 pm – 5:00 pm (1 h lectures, 2 h practical)

    1. Programming using a hybrid MPI/OmpSs approach

    2. Practical: heat equation example and divide-and-conquer

     

    END of COURSE

     

    https://events.prace-ri.eu/event/640/
    Oct 23 10:00 to Oct 27 17:00
    ARCHER has a small Cray XC40 cluster containing Intel's Knights Landing (KNL) processors. This system is intended to allow the UK computational simulation community to test their codes on the KNL processors, optimise codes for this new hardware, and evaluate the suitability of KNL for their applications. In this course we will introduce the KNL system associated with ARCHER, describe how it can be used, and present the technical details of the KNL processor.

    Details

    Knights Landing is Intel's latest Xeon Phi many-core processor. It offers a large amount of floating-point performance (the theoretical peak is 3 TFlop/s in double precision) in a single processor for applications that can use it efficiently. The KNL hardware is significantly different from the previous generation of Xeon Phi and contains specialised hardware that may be beneficial for scientific applications.
    In this course we will describe the KNL hardware, explain how to use the various new features it presents, and explain how to access and use KNL processors through the ARCHER service.
    There will be a number of hands-on practical sessions, and all attendees will be given KNL access for the duration of the course.
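
    For a sense of what "vectorising" means in practice, here is a minimal, hedged sketch of a SIMD-friendly loop in C (illustrative only, not course material; the alignment attribute and the compiler-report flag mentioned in the comments are typical examples rather than ARCHER-specific instructions).

    /* Simple triad loop with an OpenMP SIMD hint. Whether it actually
     * vectorises can be confirmed from the compiler's optimisation report
     * (e.g. Intel's -qopt-report=5). */
    #include <stdio.h>

    #define N 4096

    int main(void)
    {
        /* 64-byte alignment matches KNL's 512-bit (AVX-512) vector registers */
        static double a[N] __attribute__((aligned(64)));
        static double b[N] __attribute__((aligned(64)));
        static double c[N] __attribute__((aligned(64)));

        for (int i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        #pragma omp simd
        for (int i = 0; i < N; i++)
            a[i] = b[i] + 1.5 * c[i];

        printf("a[0] = %f\n", a[0]);      /* expect 4.0 */
        return 0;
    }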

    This course is free to all academics.

    Intended learning outcomes

    On completion of this course students should be able to:

    Understand the Knights Landing (KNL) processor.
    Understand how to access and run jobs on the ARCHER KNL cluster.
    Understand the impact of vectorisation on performance on KNL.
    Understand how to check if an application is vectorising and modify applications to improve vectorisation.
    Use the different types of memory available in the nodes.
    Pre-requisites

    Some understanding of at least one of the following programming languages:

    Fortran, C or C++.
    Timetable

    Day 1

    09:30 - 09:45 : Course introduction
    09:45 - 10:45 : Introduction to the KNL hardware and the ARCHER KNL system
    10:45 - 11:15 : Break
    11:15 - 12:00 : Practical: Running on the ARCHER KNLs
    12:00 - 12:30 : Memory modes programming
    12:30 - 14:00 : Lunch
    14:00 - 15:00 : Practical: Investigating Memory on the KNL
    15:00 - 15:30 : Cluster modes
    15:30 - 16:00 : Break
    16:00 - 17:00 : Vectorisation
    Day 2

    09:30 - 09:45 : Vectorisation recap
    09:45 - 11:00 : Practical: Vectorisation
    11:00 - 11:30 : Break
    11:30 - 12:30 : Serial Optimisation
    12:30 - 14:00 : Lunch
    14:00 - 15:30 : Practical: Serial optimisation
    15:30 - 16:00 : Break
    16:00 - 17:00 : Practical: Continue practicals or bring your own code
    Course Materials

    https://www.archer.ac.uk/training/course-material/2017/10/KNL_Camb/index.php


    https://events.prace-ri.eu/event/671/
    Oct 31 10:00 to Nov 1 18:30
     


    PTC events this month:

    October 2017