• PATC Courses

  • PRACE operates six PRACE Advanced Training Centres (PATCs) at:

    • Barcelona Supercomputing Center (Spain)
    • CINECA – Consorzio Interuniversitario (Italy)
    • CSC – IT Center for Science Ltd (Finland)
    • EPCC at the University of Edinburgh (UK)
    • Gauss Centre for Supercomputing (Germany)
    • Maison de la Simulation (France)


    Events in July 2017:

    Overview

    In this tutorial we present an asynchronous data flow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the programming model of MPI.


    GASPI, which stands for Global Address Space Programming Interface, is a partitioned global address space (PGAS) API. The GASPI API is designed as a C/C++/Fortran library and focuses on three key objectives: scalability, flexibility and fault tolerance. To achieve its much improved scaling behaviour, GASPI aims at asynchronous dataflow with remote completion rather than bulk-synchronous message exchanges. GASPI follows a single/multiple program multiple data (SPMD/MPMD) approach and offers a small, yet powerful API (see also www.gaspi.de and www.gpi-site.com).
    GASPI is successfully used in academic and industrial simulation applications.
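
    To give a first taste of the SPMD style, a minimal start-up sketch in Fortran might look like the following. It is illustrative only and assumes the Fortran binding shipped with the GPI-2 implementation (module name and kind parameters as provided there); the asynchronous one-sided writes with remote notification (gaspi_write_notify / gaspi_notify_waitsome) that give GASPI its dataflow character are left to the hands-on sessions.

    program gaspi_hello
      use gaspi                      ! GPI-2 Fortran binding (assumed module name)
      implicit none
      integer(gaspi_return_t) :: ret ! kind parameters as defined by the binding (assumed)
      integer(gaspi_rank_t)   :: rank, nprocs

      ret = gaspi_proc_init(GASPI_BLOCK)    ! blocking initialisation
      ret = gaspi_proc_rank(rank)
      ret = gaspi_proc_num(nprocs)

      write(*,*) 'GASPI rank', rank, 'of', nprocs

      ret = gaspi_barrier(GASPI_GROUP_ALL, GASPI_BLOCK)
      ret = gaspi_proc_term(GASPI_BLOCK)    ! clean shutdown
    end program gaspi_hello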


    Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of GASPI.
    This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants.

    For further information and registration please visit the HLRS course page.

    https://events.prace-ri.eu/event/547/
    Jul 3 9:00 to Jul 4 15:30
    ARCHER, the UK's national supercomputing service, offers training in software development and high-performance computing to scientists and researchers across the UK. As part of our training service we will be running a 2 day ‘Hands-on Introduction to High Performance Computing’ training session.

    This course provides a general introduction to High Performance Computing (HPC), using the UK national HPC service, ARCHER, as the platform for the exercises.

    On completion of the course, we expect that attendees will be in a position to undertake the ARCHER Driving Test, and potentially qualify for an account and CPU time on ARCHER.

    Familiarity with desktop computers is presumed, but no programming, Linux or HPC experience is required. Programmers can, however, gain extra benefit from the course, as source code for all the practicals will be provided.

    Details

    High-performance computing (HPC) is a fundamental technology used in solving scientific problems. Many of the grand challenges of science depend on simulations and models run on HPC facilities to make progress, for example: protein folding, the search for the Higgs boson, and developing nuclear fusion.

    The course will run for 2 days. The first day covers the basic concepts underlying the drivers for HPC development, HPC hardware, software, programming models and applications. The second day will provide an opportunity for more practical experience, information on performance and the future of HPC. This foundation will give you the ability to appreciate the relevance of HPC in your field and also equip you with the tools to start making effective use of HPC facilities yourself.

    The course is delivered using a mixture of lectures and practical sessions and has a very practical focus. During the practical sessions you will get the chance to use ARCHER with HPC experts on-hand to answer your questions and provide insight.

    This course is free to all academics.

    Intended learning outcomes

    On completion of this course students should be able to explain:

    Why HPC? - What are the drivers and motivation? Who uses it?
    HPC Hardware - Building blocks and architectures
    Parallel computing - Programming models and implementations
    Using HPC systems - Access, compilers, resource allocation and performance
    The Future of HPC
    and to undertake the ARCHER Driving Test.
    Pre-requisites

    Attendees are expected to have experience of using desktop computers, but no programming, Linux or HPC experience is necessary.

    Timetable

    Day 1

    09:30  Welcome, Overview and Syllabus
    09:45  LECTURE: Why learn about HPC?
    10:15  LECTURE: Image sharpening
    10:30  PRACTICAL: Sharpen example
    11:00  BREAK: Coffee
    11:30  LECTURE: Parallel Programming
    12:15  PRACTICAL: Sharpen (cont.)
    13:00  BREAK: Lunch
    14:00  LECTURE: Building Blocks (CPU/Memory/Accelerators)
    14:30  LECTURE: Building Blocks (OS/Process/Threads)
    15:00  LECTURE: Fractals
    15:10  PRACTICAL: Fractal example
    15:30  BREAK: Tea
    16:00  LECTURE: Parallel programming models
    16:45  PRACTICAL: Fractals (cont.)
    17:30  Finish

    Day 2

    09:30  LECTURE: HPC Architectures
    10:15  LECTURE: Batch systems
    10:45  PRACTICAL: Computational Fluid Dynamics (CFD)
    11:00  BREAK: Coffee
    11:30  PRACTICAL: CFD (cont.)
    12:30  LECTURE: Compilers
    13:00  BREAK: Lunch
    14:00  PRACTICAL: Compilers (CFD cont.)
    14:30  LECTURE: Parallel Libraries
    15:00  LECTURE: Future of HPC
    15:30  BREAK: Tea
    16:00  LECTURE: Summary
    16:15  PRACTICAL: Finish exercises
    17:00  Finish

    Course Materials

    www.archer.ac.uk/traini.....x.php

    https://events.prace-ri.eu/event/615/
    Jul 10 10:00 to Jul 11 18:30
    The world’s largest supercomputers are used almost exclusively to run applications which are parallelised using Message Passing. The course covers all the basic knowledge required to write parallel programs using this programming model, and is directly applicable to almost every parallel computer architecture.

    Parallel programming by definition involves co-operation between processes to solve a common task. The programmer has to define the tasks that will be executed by the processors, and also how these tasks are to synchronise and exchange data with one another. In the message-passing model the tasks are separate processes that communicate and synchronise by explicitly sending each other messages. All these parallel operations are performed via calls to some message-passing interface that is entirely responsible for interfacing with the physical communication network linking the actual processors together. This course uses the de facto standard for message passing, the Message Passing Interface (MPI). It covers point-to-point communication, non-blocking operations, derived datatypes, virtual topologies, collective communication and general design issues.
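
    As a flavour of the point-to-point style taught on day 1 (the course's "Message Round a Ring" practical uses a similar pattern), the minimal sketch below passes each rank's number one step around a periodic ring. It is an illustration only, not the course's model solution, and assumes an MPI library providing the Fortran mpi module.

    program ring
      use mpi
      implicit none
      integer :: ierr, rank, nproc, left, right, sendbuf, recvbuf
      integer :: status(MPI_STATUS_SIZE)

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)

      left  = mod(rank - 1 + nproc, nproc)   ! neighbour ranks on a periodic ring
      right = mod(rank + 1, nproc)

      sendbuf = rank
      ! MPI_Sendrecv sends to the right and receives from the left in one call,
      ! avoiding the deadlock a naive ordering of blocking Send/Recv can cause.
      call MPI_Sendrecv(sendbuf, 1, MPI_INTEGER, right, 0, &
                        recvbuf, 1, MPI_INTEGER, left,  0, &
                        MPI_COMM_WORLD, status, ierr)

      print *, 'Rank', rank, 'received', recvbuf, 'from rank', left

      call MPI_Finalize(ierr)
    end program ring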

    The course is normally delivered in an intensive three-day format using EPCC’s dedicated training facilities. It is taught using a variety of methods including formal lectures, practical exercises, programming examples and informal tutorial discussions. This enables lecture material to be supported by the tutored practical sessions in order to reinforce the key concepts.

    If you are not already familiar with basic Linux commands, logging on to a remote machine using ssh, and compiling and running a program on a remote machine, then we would strongly encourage you to also attend the Hands-on Introduction to HPC course running immediately prior to this course.

    This course is free to all academics. 

    Intended Learning Outcomes

    On completion of this course students should be able to:

    Understand the message-passing model in detail
    Implement standard message-passing algorithms in MPI
    Debug simple MPI codes
    Measure and comment on the performance of MPI codes
    Design and implement efficient parallel programs to solve regular-grid problems
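
    Since the outcomes above include collective communication and performance measurement, a minimal illustrative sketch (not course material) that times a global sum with MPI_Wtime might look like this:

    program timed_reduce
      use mpi
      implicit none
      integer :: ierr, rank, nproc
      double precision :: t0, t1, local, total

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)

      local = dble(rank + 1)          ! each rank contributes one value

      t0 = MPI_Wtime()
      ! Sum the contributions from all ranks onto rank 0.
      call MPI_Reduce(local, total, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0, &
                      MPI_COMM_WORLD, ierr)
      t1 = MPI_Wtime()

      if (rank == 0) then
        print *, 'Sum over', nproc, 'ranks =', total, ' took', t1 - t0, 'seconds'
      end if

      call MPI_Finalize(ierr)
    end program timed_reduce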

    Pre-requisite Programming Languages:

    Fortran, C or C++. It is not possible to do the exercises in Java.

    Timetable

    Day 1

    09:30  Message-Passing Concepts
    10:15  Practical: Parallel Traffic Modelling
    11:00  Break
    11:30  MPI Programs
    12:00  MPI on ARCHER
    12:15  Practical: Hello World
    13:00  Lunch
    14:00  Point-to-Point Communication
    14:30  Practical: Pi
    15:30  Break
    16:00  Communicators, Tags and Modes
    16:45  Practical: Ping-Pong
    17:30  Finish

    Day 2

    09:30  Non-Blocking Communication
    10:00  Practical: Message Round a Ring
    11:00  Break
    11:30  Collective Communication
    12:00  Practical: Collective Communication
    13:00  Lunch
    14:00  Virtual Topologies
    14:30  Practical: Message Round a Ring (cont.)
    15:30  Break
    16:00  Derived Data Types
    16:45  Practical: Message Round a Ring (cont.)
    17:30  Finish

    Day 3

    09:30  Introduction to the Case Study
    10:00  Practical: Case Study
    11:00  Break
    11:30  Practical: Case Study (cont.)
    13:00  Lunch
    14:00  Designing MPI Programs
    15:00  Individual Consultancy Session
    16:00  Finish

    Course Materials

    www.archer.ac.uk/traini.....x.php

    https://events.prace-ri.eu/event/616/
    Jul 12 10:00 to Jul 14 18:30
    Modern Fortran

    This course provides an introduction to Modern Fortran, which contains many powerful features that make it a suitable language for programming scientific, engineering and numerical applications. Familiarity with a Unix or Linux environment is assumed. The course is open to all, but is mainly targeted at existing ARCHER users.

    Details

    Fortran 90/95 is a modern and efficient general-purpose programming language, particularly suited to numeric and scientific computation. The language offers advanced array support and is complemented by a wealth of numerical libraries. Many large-scale computing facilities offer heavily optimised Fortran compilers, making Fortran suitable for the most demanding computational tasks.

    Topics include: fundamentals, program control, input and output, variables, procedures, modules, arrays.
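
    To give a flavour of the style the course teaches, the short sketch below (illustrative only, not course material) wraps an array-based procedure in a module and uses it from a program:

    module stats_mod
      implicit none
      private
      public :: mean
    contains
      ! Arithmetic mean of a 1-d array using whole-array intrinsics.
      function mean(x) result(m)
        real, intent(in) :: x(:)
        real :: m
        m = sum(x) / real(size(x))
      end function mean
    end module stats_mod

    program demo
      use stats_mod, only: mean
      implicit none
      real :: values(5) = [1.0, 2.0, 3.0, 4.0, 5.0]
      print *, 'mean =', mean(values)
    end program demo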

    Intended learning outcomes

    On completion of this course students should be able to:

    Understand and develop modularised Fortran programs.
    Compile and run Fortran programs on ARCHER.
    Prerequisites

    There are no prerequisites for this course, although familiarity with a Unix or Linux environment is assumed.

    Timetable

    Details are subject to change, but start, end and break times will stay the same.

    Day 1

    09:30 LECTURE: Fundamentals of Computer Programming
    11:00 BREAK: Coffee
    11:30 PRACTICAL: Hello world, formatting, simple input
    12:30 BREAK: Lunch
    13:30 LECTURE: Logical Operations and Control Constructs
    14:30 PRACTICAL: Numeric manipulation
    15:30 BREAK: Tea
    16:00 LECTURE: Arrays
    17:00 PRACTICAL: Arrays
    17:30 CLOSE
    Day 2

    09:30 PRACTICAL: Arrays (cont'd)
    10:15 LECTURE: Procedures
    11:15 BREAK: Coffee
    11:45 PRACTICAL: Procedures
    12:45 BREAK: Lunch
    13:45 LECTURE: Modules and Derived Types
    15:15 BREAK: Tea
    15:45 PRACTICAL: Modules, Types, Portability
    17:00 CLOSE
    Course Materials

    To follow

    Location

    The course will take place at the University of Cambridge.

    Questions?

    If you have any questions please contact the ARCHER Helpdesk.

    https://events.prace-ri.eu/event/635/
    Jul 27 10:00 to Jul 28 18:30