• PRACE Training Centres (PTCs)

  • PRACE operates ten PRACE Training Centres (PTCs), which have established a state-of-the-art curriculum for training in HPC and scientific computing. The PTCs carry out and coordinate training and education activities that enable both European academic researchers and European industry to make use of the computational infrastructure available through PRACE, and they provide top-class education and training opportunities for computational scientists in Europe.
    Together, the ten PTCs host approximately 100 training events each year.

    PTC training events are advertised on the following pages. Registration is free and open to all (pending availability):
    https://events.prace-ri.eu/category/2/

    The following figure depicts the locations of the PTCs throughout Europe.

    PATC events this month: January 2020

    GPU Programming with CUDA

    Graphics Processing Units (GPUs) were originally developed for computer gaming and other graphical tasks, but for many years have been exploited for general purpose computing in a number of areas. They offer advantages over traditional CPUs because they have greater computational capability, and use high-bandwidth memory systems (memory bandwidth is the main bottleneck for many scientific applications).

    Trainer


    Kevin Stratford

    Kevin has a background in computational physics and joined EPCC in 2001. He teaches on courses including 'Scientific Programming with Python' and 'GPU Programming with CUDA'.

     


    Rupert Nash

    Rupert is an experienced trainer who works with CFD, C++ and GPUs, and who teaches courses including 'Modern C++' and 'GPU Programming with CUDA'.

     

    Details

    This introductory course will describe GPUs, and the advantages they offer.

    It will teach participants how to start programming GPUs, which are not used in isolation but in conjunction with CPUs.

    Important issues affecting performance will be covered.

    The course focuses on NVIDIA GPUs, and the CUDA programming language (an extension to C/C++ or Fortran). Please note the course is aimed at application programmers; it does not consider machine learning or any of the packages available in the machine learning arena.

    Hands-on practical sessions are included.
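    (As an unofficial taste of the hands-on material, the sketch below illustrates the CUDA execution model, a grid of threads each handling one array element, using Numba's Python CUDA bindings rather than the CUDA C/Fortran taught on the course; the kernel name and array sizes are purely illustrative.)

    # Minimal sketch of the CUDA model using Numba's CUDA bindings; the course
    # itself works in CUDA C/Fortran, so treat this only as an analogy.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def add_vectors(a, b, out):
        i = cuda.grid(1)              # global thread index across the grid
        if i < out.size:              # guard against surplus threads
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(a)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    # Host arrays are transferred to and from the GPU automatically.
    add_vectors[blocks, threads_per_block](a, b, out)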

    You will require your own laptop and your institutional credentials to connect to eduroam. The training practical exercises will be run on a web-based system, so all you will need is a relatively recent web browser (Firefox, Chrome and Safari are known to work).

    This course is free to attend.

    Timetable

    Provisional timetable based on a previous run; it may be subject to change.

    Day 1


    10:00 Introduction
    10:20 GPU Concepts/Architectures
    11:00 Break
    11:20 CUDA Programming
    12:00 A first CUDA exercise
    13:00 Lunch
    14:00 CUDA Optimisations
    14:20 Optimisation Exercise
    15:00 Break
    15:20 Constant and Shared Memory
    16:00 Exercise
    17:00 Close


    Day 2


    10:00 Recap
    10:30 OpenCL and OpenACC directives
    11:00 Break
    11:20 OpenCL and/or Directives Exercises
    12:00 Guest Lecture: Alan Gray (NVIDIA), Overview of NVIDIA Volta
    13:00 Lunch
    14:00 Performance portability and Kokkos
    14:30 Exercise: Getting started with Kokkos patterns
    15:00 Break
    15:10 Kokkos memory management
    15:30 Memory management exercises
    16:00 Close


    Course Materials

    Slides and exercise material for this course will be available soon.  Materials from a previous run can be seen here.

    Location

    The course will be held at EPCC, University of Edinburgh.

    Registration

    Please use the registration page to register for this course.

    Questions?

    If you have any questions please contact the ARCHER Helpdesk.
    events.prace-ri.eu/event/935/
    Jan 9 10:00 to Jan 10 18:00
    ARCHER, the UK's national supercomputing service, offers training in software development and high-performance computing to scientists and researchers across the UK.

    Details

    Persistent memory, such as Intel's Optane DCPMM, is now available for use in systems and will be included in future exascale deployments such as the DOE Aurora system. Exploiting this new form of memory requires both different programming approaches, to make use of its persistence and storage performance, and the redesign of applications to benefit from the full performance of the hardware.

    This online course aims to educate participants on the persistent memory hardware currently available, the software methods to exploit such hardware, and the choices that users of systems and system designers have when deciding what persistent memory functionality and configurations to utilize.

    The course will provide hands-on experience on programming persistent memory along with a wealth of information on the hardware and software ecosystem and potential performance and functionality benefits. We will be using an HPC system that has compute nodes with Optane memory for the tutorial practicals.
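    (A rough, unofficial sketch of the "app direct" style of persistent-memory programming the course discusses: data is exposed as a memory-mapped file on a DAX filesystem and updated with ordinary loads and stores. The path below is hypothetical, and Python's mmap.flush() merely stands in for the explicit persist step that libraries such as PMDK perform with cache-flush instructions.)

    # Hedged sketch of the "app direct" persistent-memory style using a
    # memory-mapped file; on a real persistent-memory node the file would sit
    # on a DAX-mounted filesystem (the path below is hypothetical).
    import mmap
    import os

    path = "/mnt/pmem/example.dat"    # hypothetical DAX-mounted location
    size = 4096

    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
    os.ftruncate(fd, size)

    with mmap.mmap(fd, size) as pmem:
        pmem[0:5] = b"hello"          # ordinary store into the mapping
        pmem.flush()                  # persist step (msync here; pmem_persist in PMDK's libpmem)
    os.close(fd)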

     

    Trainer


    Adrian Jackson

    Adrian Jackson is a Research Architect at EPCC, where he works on a range of research topics, from investigating new memory hardware and programming models to optimising and porting parallel codes and working with application scientists to enable their computational simulations or data analysis. He also teaches on EPCC's MSc in HPC, giving lectures on Programming Skills, HPC Architecture, and Performance Programming.

     

     

    Format

    This online course will run over two sessions on consecutive Wednesday afternoons, each running 14:00 - 16:30 UTC (15:00 - 17:30 CET) with a half-hour break 15:00-15:30 UTC (16:00 - 16:30 CET), starting on Wed 15th January and ending on Wed 22nd January 2020.

    We will be using Blackboard Collaborate for the course, which is very simple to use and entirely browser-based.

    Collaborate usually works without problems with modern browsers, but Firefox or Chrome is recommended. Links to join each of the sessions will be published on the course materials page.

    Attendees will register for the course in the usual way using the registration form.

    Computing requirements

    All attendees will need their own desktop or laptop with the following software installed:


    web browser - e.g. Firefox or Chrome
    pdf viewer - e.g. Firefox, Adobe Acrobat


    and


    ssh client
    - on Mac/Linux, the built-in Terminal is fine
    - on Windows we recommend MobaXterm, which provides an SSH client, an inbuilt text file editor and an X11 graphics viewer plus a bash shell environment. Although this is a bigger install, it is recommended (instead of PuTTY and Xming) if you will be accessing HPC machines regularly. There is a 'portable' version of MobaXterm which does not need admin install privileges.
    - on Windows, if you are not using MobaXterm, you can use PuTTY from www.putty.org/
    X11 graphics viewer
    - for Mac: www.xquartz.org/
    - for Windows (if you are not using MobaXterm): Xming, sourceforge.net/project.....nload


    We have recorded an ARCHER Screencast: Logging on to ARCHER from Windows using PuTTY
    www.youtube.com/watch?v=oVFQg1qFjKQ

    Logging on to the NEXTGenIO prototype system is very similar, but substitute hydra-vpn.epcc.ed.ac.uk as the login address, followed by nextgenio-login1.

    We will provide accounts on the NEXTGenIO system for all attendees who register in advance.

    Course Materials

    All the course materials, including lecture notes and exercise materials will be available on the Course Materials page.

    In addition, links to join each of the online sessions, and recordings of previous sessions, will be available on the course materials page.
    events.prace-ri.eu/event/949/
    Jan 15 15:00 to Jan 22 17:30

    Thank you to those of you who have already registered for our PTC "Python in HPC @ CSC" training course!

    The course is now fully booked! If you have registered for this course and are not able to attend, please CANCEL your registration in advance by sending an email to patc at csc.fi.

    We operate a LIMITED WAITING LIST. Please send your request to patc at csc.fi, and we will keep those on the waiting list informed of whether participation will be possible.

    Please note that registration and the waiting list are handled on a first-come, first-served basis. Welcome to the course!

    Description

    The Python programming language has become popular in scientific computing due to the many benefits it offers for fast code development. Unfortunately, the performance of pure Python programs is often sub-optimal, but fortunately this can be easily remedied. In this course we teach various ways to optimise and parallelise Python programs. Among the topics are performance analysis, efficient use of NumPy arrays, extending Python with more efficient languages (Cython), and parallel computing with the message-passing (mpi4py) approach.
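    (As an unofficial illustration of the "efficient use of NumPy" topic, the sketch below replaces an explicit Python loop with a single vectorised array expression; the formula and array size are arbitrary.)

    # Replacing a pure-Python loop with a vectorised NumPy expression.
    import numpy as np

    x = np.random.rand(1_000_000)

    # Loop version: interpreted element by element, slow for large arrays.
    y_loop = np.empty_like(x)
    for i in range(x.size):
        y_loop[i] = 2.0 * x[i] ** 2 + 1.0

    # Vectorised version: the same arithmetic on whole arrays in compiled code.
    y_vec = 2.0 * x**2 + 1.0

    assert np.allclose(y_loop, y_vec)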

    Learning outcome

    After the course, participants are able to

    analyse the performance of Python programs and use NumPy more efficiently
    optimise Python programs with Cython
    utilise external libraries in Python programs
    write simple parallel programs with Python


    Prerequisites

    Participants need some experience in Python programming, but expertise is not required. One should be familiar with


    Python syntax
    Basic built-in data structures (lists, tuples, dictionaries)
    Control structures (if-else, for, while)
    Writing functions and modules


    Some previous experience with NumPy will be useful, but not strictly required.
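    (For a flavour of the parallel-computing part of the course, a minimal hedged mpi4py sketch: every rank contributes a value that is reduced onto rank 0. The values are illustrative; run with, e.g., mpirun -n 4 python reduce_example.py.)

    # Minimal mpi4py sketch: each rank contributes a partial value which is
    # summed onto rank 0 with a collective reduction.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    local = np.array([rank + 1.0])            # this rank's contribution
    total = np.zeros(1)
    comm.Reduce(local, total, op=MPI.SUM, root=0)

    if rank == 0:
        print("sum over ranks:", total[0])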

    Agenda

    Day 1, Monday 27.1
    Efficient use of NumPy
    Performance analysis

    Day 2, Tuesday 28.1
    Optimisation with Cython
    Interfacing with external libraries

    Day 3, Wednesday 29.1
    Parallel computing with mpi4py

    Lecturers: Jussi Enkovaara (CSC), Martti Louhivuori (CSC)

    Language: English
    Price: Free of charge
    events.prace-ri.eu/event/963/
    Jan 27 8:00 to Jan 29 15:00
    Numerical simulations conducted on current high-performance computing (HPC) systems face an ever-growing need for scalability. Larger HPC platforms provide opportunities to push the limits on the size and properties of what can be accurately simulated, which means processing larger data sets, whether reading input data or writing results. Serial approaches to handling I/O in a parallel application will dominate performance on massively parallel systems, leaving many computing resources idle during those serial application phases.

    In addition to the need for parallel I/O, input and output data are often processed on different platforms. Heterogeneity of platforms can impose a high maintenance burden when different data representations are needed. Portable, self-describing data formats such as HDF5 and netCDF are already widely used within certain communities.

    This course will start with an introduction to the basics of I/O, including the relevant terminology, an overview of parallel file systems with a focus on GPFS, and the HPC hardware available at JSC. Different I/O strategies will be presented. The course will introduce the use of the HDF5, NetCDF and SIONlib library interfaces as well as MPI-I/O. Optimisation potential and best practices are discussed.
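    (A minimal, unofficial sketch of the MPI-I/O style the course introduces, written with mpi4py rather than the C/Fortran bindings: every rank writes its own contiguous block of a single shared file with a collective call. The file name and block size are illustrative.)

    # Each rank writes its own contiguous block of one shared file using
    # collective MPI-I/O.  Run with, e.g.:  mpirun -n 4 python write_example.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    n = 1024                                  # doubles per rank (illustrative)
    data = np.full(n, rank, dtype='d')

    fh = MPI.File.Open(comm, "output.dat",
                       MPI.MODE_WRONLY | MPI.MODE_CREATE)
    offset = rank * data.nbytes               # byte offset of this rank's block
    fh.Write_at_all(offset, data)             # collective write
    fh.Close()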

    Instructors: Sebastian Lührs, Benedikt Steinbusch, Jülich Supercomputing Centre

    Contact
    For any questions concerning the course please send an e-mail to s.luehrs@fz-juelich.de.
    events.prace-ri.eu/event/961/
    Jan 27 9:00 to Jan 29 16:30
    Advanced MPI

    ARCHER, the UK's national supercomputing service, offers training in software development and high-performance computing to scientists and researchers across the UK. As part of our training service we will be running a 2-day Advanced MPI training session.

    Trainer

    David Henty

    David teaches on a wide range of EPCC's technical training courses, including MPI and OpenMP, and is overall course organiser for EPCC's MSc in High Performance Computing.

    Details

    This course is aimed at programmers seeking to deepen their understanding of MPI and explore some of its more recent and advanced features. We cover topics including communicator management, non-blocking and neighbourhood collectives, single-sided MPI and the new MPI memory model. We also look at performance aspects, such as which MPI routines to use for scalability, overlapping communication and calculation, and MPI internal implementation issues.
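    (For illustration only: one of the advanced features mentioned above, single-sided (RMA) communication, sketched with mpi4py rather than the C/Fortran used on the course. Each rank puts a value into the window of the next rank, with fences delimiting the access epoch; the values are illustrative.)

    # Single-sided communication: each rank puts a value into the window
    # exposed by rank+1 (cyclically).  Run with, e.g.:  mpirun -n 4 python rma_example.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    buf = np.zeros(1, dtype='d')              # memory exposed in this rank's window
    win = MPI.Win.Create(buf, comm=comm)

    target = (rank + 1) % size
    value = np.array([float(rank)], dtype='d')

    win.Fence()                               # open the RMA access epoch
    win.Put(value, target)                    # write into the target's window
    win.Fence()                               # close the epoch; data is now visible

    print("rank", rank, "received", buf[0], "from rank", (rank - 1) % size)
    win.Free()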

    Intended learning outcomes


    Understanding of how internal MPI implementation details affect performance
    Familiarity with neighbourhood collective operations in MPI
    Knowledge of MPI memory models for RMA operations
    Familiarity with MPI RMA operations and single-sided communication
    Understanding of best practice for MPI+OpenMP programming


    Pre-requisites

    Attendees should be familiar with MPI programming in C, C++ or Fortran, e.g. have attended the ARCHER MPI course.

    Pre-course setup

    All attendees should bring their own wireless-enabled laptop set up with the required software. Practical exercises will be done using a guest account on ARCHER.

    Timetable

    All sessions will include hands-on practical exercises in addition to lecture material.

    Day 1: 27th January

         09:30 - 10:00 Registration
         10:00 - 10:30 MPI Quiz
         10:30 - 11:00 MPI Internals
         11:00 - 11:30 Coffee
         11:30 - 13:00 Point-to-point Performance
         13:00 - 14:00 Lunch
         14:00 - 15:30 MPI Optimisations
         15:30 - 16:00 Coffee
         16:00 - 17:00 Advanced Collectives
         17:00 CLOSE

    Day 2: 28th January

         10:00 - 11:00 MPI + OpenMP (i)
         11:00 - 11:30 Coffee
         11:30 - 13:00 MPI + OpenMP (ii)
         13:00 - 14:00 Lunch
         14:00 - 15:30 New MPI shared-memory model
         15:30 - 16:00 Coffee
         16:00 - 17:00 Finish Exercises
         17:00 CLOSE

    Course Materials

    Slides and exercise material for this course.
    events.prace-ri.eu/event/948/
    Jan 27 11:00 to Jan 28 18:00
    Please bring your own laptop. All PATC courses at BSC are free of charge.

    Course convener: Rosa Badia, Workflows and Distributed Computing Group Manager, Computer Sciences - Workflows and Distributed Computing Department

    Lecturers: 

    Rosa M Badia, Workflows and Distributed Computing Group Manager, Computer Sciences - Workflows and Distributed Computing Department, BSC

    Javier Conejero, Senior Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC

    Jorge Ejarque, Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC

    Daniele Lezzi, Senior Researcher, Computer Sciences - Workflows and Distributed Computing Department, BSC

    Objectives: The objective of this course is to give an overview of the COMPSs programming model, which is able to exploit the inherent concurrency of sequential applications and execute them, in a manner transparent to the application developer, on distributed computing platforms. This is achieved by annotating parts of the code as tasks and building at execution time a task-dependence graph based on the actual data consumed/produced by the tasks. The COMPSs runtime is able to schedule the tasks on the computing nodes, taking into account aspects such as data locality and the different nature of the computing nodes in the case of heterogeneous platforms. Additionally, COMPSs has recently been enhanced with the possibility of coordinating Web Services as part of the applications. COMPSs supports Java, C/C++ and Python as programming languages.

    Learning Outcomes: The course will present the COMPSs syntax, the programming methodology and an overview of the runtime internals. Attendees will get a first lesson in programming with COMPSs that will enable them to start developing with this framework.

    A hands-on session with simple introductory exercises will also be included. Students who finish this course will be able to develop simple COMPSs applications and run them both on a local resource and on a distributed platform (initially in a private cloud). The exercises will be delivered in Python and Java. For Python, Jupyter notebooks will be used in some of the exercises.
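    (A minimal, unofficial sketch of the PyCOMPSs syntax described above: a function is marked as a task with a decorator, calls to it return future objects, and compss_wait_on collects the results. The function and values are illustrative; on a real installation the script is launched with the runcompss command.)

    # Minimal PyCOMPSs sketch: @task turns a function into a task; the runtime
    # builds the dependence graph and schedules the calls concurrently.
    from pycompss.api.task import task
    from pycompss.api.api import compss_wait_on

    @task(returns=int)
    def square(x):
        return x * x

    if __name__ == "__main__":
        partials = [square(i) for i in range(8)]   # each call returns a future
        results = compss_wait_on(partials)         # synchronise and gather values
        print(sum(results))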

    Level: for trainees with some theoretical and practical knowledge.

    INTERMEDIATE: for trainees with some theoretical and practical knowledge; those who finished the beginners course

    ADVANCED: for trainees able to work independently and requiring guidance for solving complex problems

    Prerequisites: Programming skills in Java and Python 

     

    Agenda: 

    Day 1 (January 28)

    9:30 – 10:00 Roundtable. Presentation and background of participants
    10:00 – 10:30 Introduction to COMPSs
    Motivation
    Setup of tutorial environment
    10:30 – 13:00 PyCOMPSs
    Writing Python applications
    11:00 – 11:30 Coffee break
    Python Hands-on using Jupyter notebooks
    13:00-14:30 Lunch break
    14:30 – 15:15 How to debug COMPSs applications
    15:15 – 16:30 Python practical session (Bring your Own Code)
    16:30 - Adjourn


    Day 2 (January 29)

    9:30-11:00 COMPSs & Java
    Writing Java applications
    Java Hands-on
    11:00 – 11:30 Coffee break
    11:30-12:30 COMPSs Advanced Features
    Using binaries and MPI code
    COMPSs execution environment
    Integration with OmpSs
    13:30 – 14:30 Lunch break
    14:30-15:30 Cluster Hands-on (MareNostrum)
    15:30 – 16:30 Practical session (Bring your Own Code)
    COMPSs Installation & Final Notes

    END of COURSE

    events.prace-ri.eu/event/907/
    Jan 28 9:30 to Jan 29 16:30
    Numerical simulations conducted on current high-performance computing (HPC) systems face an ever growing need for scalability. Larger HPC platforms provide opportunities to push the limitations on size and properties of what can be accurately simulated. Therefore, it is needed to process larger data sets, be it reading input data or writing results. Serial approaches on handling I/O in a parallel application will dominate the performance on massively parallel systems, leaving a lot of computing resources idle during those serial application phases.

    In addition to the need for parallel I/O, input and output data is often processed on different platforms. Heterogeneity of platforms can impose a high level of maintenance, when different data representations are needed. Portable, selfdescribing data formats such as HDF5 and netCDF are examples of already widely used data formats within certain communities.

    This course will start with an introduction to the basics of I/O, including basic I/O-relevant terms, an overview over parallel file systems with a focus on GPFS, and the HPC hardware available at JSC. Different I/O strategies will be presented. The course will introduce the use of the HDF5, the NetCDF and the SIONlib library interfaces as well as MPI-I/O. Optimization potential and best practices are discussed.

    Instructors: Sebastian Lührs, Benedikt Steinbusch, Jülich Supercomputing Centre

    Contact
    For any questions concerning the course please send an e-mail to s.luehrs@fz-juelich.de.
    events.prace-ri.eu/event/961/
    Jan 27 9:00 to Jan 29 16:30

    Thank you to those of you who have already registered for our PTC "Python in HPC @ CSC" training course!

    The course is now fully booked! If you have registered for this course and are not able to attend, please CANCEL your registration in advance by sending an email to patc at csc.fi

    We are operating a LIMITED WAITING LIST. Please send your request to patc at csc.fi, and we will keep those on the waiting list informed of whether participation will be possible.

    Please note that registration and the waiting list are handled on a first-come, first-served basis. Welcome to the course!

    Description

    The Python programming language has become popular in scientific computing due to the many benefits it offers for fast code development. Unfortunately, the performance of pure Python programs is often sub-optimal, but fortunately this can easily be remedied. In this course we teach various ways to optimise and parallelise Python programs. Among the topics are performance analysis, efficient use of NumPy arrays, extending Python with more efficient languages (Cython), and parallel computing with the message-passing approach (mpi4py).
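
    As a small illustration of the "efficient use of NumPy" theme, the sketch below contrasts a pure-Python loop with an equivalent vectorised NumPy expression; the array size and function names are purely illustrative.

import numpy as np

def norm_loop(x):
    # Pure Python: one interpreted iteration per element.
    s = 0.0
    for v in x:
        s += v * v
    return s ** 0.5

def norm_numpy(x):
    # Vectorised: the loop runs inside NumPy's compiled code.
    return float(np.sqrt(np.sum(x * x)))

x = np.random.random(1_000_000)
assert np.isclose(norm_loop(x), norm_numpy(x))
# Timing the two versions (e.g. with timeit) typically shows a large speed-up
# for the vectorised variant on arrays of this size.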

    Learning outcome

    After the course, participants will be able to:

    analyse the performance of Python programs and use NumPy more efficiently
    optimise Python programs with Cython
    utilise external libraries in Python programs
    write simple parallel programs with Python (see the sketch below)
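
    The following minimal mpi4py sketch shows the kind of simple parallel program meant above: each rank computes a partial sum and the pieces are reduced onto rank 0. Script and variable names are illustrative; run with, e.g., mpiexec -n 4 python partial_sums.py.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 1_000_000                        # total number of terms
local = np.arange(rank, n, size)     # this rank's share of the indices
local_sum = float(np.sum(1.0 / (local + 1.0)))

# Combine the partial sums on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("Sum of 1/k for k = 1..n:", total)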


    Prerequisites

    Participants need some experience in Python programming, but expertise is not required. One should be familiar with


    Python syntax
    Basic built-in data structures (lists, tuples, dictionaries)
    Control structures (if-else, for, while)
    Writing functions and modules


    Some previous experience with NumPy will be useful, but is not strictly required.

    Agenda

    Day 1, Monday 27.1

    Efficient use of NumPy
    Performance analysis

    Day 2, Tuesday 28.1

    Optimisation with Cython
    Interfacing with external libraries

    Day 3, Wednesday 29.1

    Parallel computing with mpi4py


    Lecturers: 

    Jussi Enkovaara (CSC), Martti Louhivuori (CSC)

    Language:   English
    Price:           Free of charge
    events.prace-ri.eu/event/963/
    Jan 27 8:00 to Jan 29 15:00
    Annotation

    With Moore's law petering out and the end of Dennard scaling, the pace of performance increase expected of High Performance Computing systems from one generation to the next has led to power-constrained architectures and systems. In addition, power consumption represents a significant cost factor in the overall HPC system economy. For these reasons, in recent years researchers, supercomputing centres and major vendors have developed new tools and methodologies to measure and optimise the energy consumption of large-scale high-performance system installations. Because energy consumption, power consumption, and the execution time of the user's application are linked, it is important for these tools and methodologies to consider all of these aspects, empowering the final user and the system administrator with the capability of finding the best configuration given different high-level objectives.

    The school will give an introductory course on the fundamental concepts of power consumption and energy efficiency in HPC systems. It will then focus on the mechanisms that today's computing elements and systems provide for monitoring and controlling power and energy dissipation, as well as giving insights into the power management design of the European Processor Initiative. Finally, it will introduce, and give hands-on experience with, a set of tools for reducing the energy consumption of HPC devices.

    The school is organised into four main sessions, driving the audience from the physical and engineering principles underlying power consumption in supercomputing systems to the practical usage of state-of-the-art tools for monitoring and controlling the energy efficiency of supercomputing machines and workloads. The tools covered are MSR-SAFE (LLNL), MERIC (IT4I), COUNTDOWN (UNIBO) and lo2s (TUD).
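
    To make the idea of monitoring power and energy concrete, here is a small, purely illustrative Python sketch that samples the Linux powercap (Intel RAPL) sysfs counters around a piece of work. It is not one of the tools named above; the sysfs path depends on the machine, and reading it may require elevated permissions.

import time
from pathlib import Path

# Package 0 of the RAPL hierarchy; the exact path varies between systems.
RAPL_DOMAIN = Path("/sys/class/powercap/intel-rapl:0")

def read_energy_uj():
    # Cumulative energy in microjoules (the counter wraps around eventually).
    return int((RAPL_DOMAIN / "energy_uj").read_text())

def busy_work(n=5_000_000):
    s = 0
    for i in range(n):
        s += i * i
    return s

e0, t0 = read_energy_uj(), time.time()
busy_work()
e1, t1 = read_energy_uj(), time.time()

joules = (e1 - e0) / 1e6
seconds = t1 - t0
print(f"~{joules:.2f} J over {seconds:.2f} s (average power ~{joules / seconds:.1f} W)")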

    Level

    Intermediate

    Language

    English

    Purpose of the course (benefits for the attendees)

    By the end of the course, participants will be expected to:


    have a good understanding of the principles underlying power consumption and energy dissipation in high performance computing nodes
    recognise trade-offs and the implications of changing the power consumption in scientific computing systems during the execution of scientific computing applications
    have a clear idea of the state-of-the-art and of practices in controlling the power consumption and energy efficiency of supercomputing nodes and processors
    learn the internals and the usage of a set of user-space run-time libraries for controlling/optimising the power consumption and energy efficiency in x86 computing nodes while executing user's applications
    learn how to use these tools to optimise the energy consumption of their own codes.


    About the tutors

    Lubomir Riha is the Head of the Infrastructure Research Lab at IT4Innovations National Supercomputing Center. Previously he was a senior researcher in the Parallel Algorithms Research Lab at IT4Innovations, and a research scientist in the High Performance Computing Lab at George Washington University, ECE Department. He received his PhD and MSc degrees in Electrical Engineering from the Czech Technical University in Prague, the Czech Republic, in 2011, and his Ph.D. degree in Computer Science from Bowie State University, USA. Currently he is a local principal investigator of the H2020 Center of Excellence project POP2. Previously he was an investigator in the FP7 EXA2CT project and the Intel Parallel Computing Center, as well as a local principal investigator of the H2020-FET HPC READEX project. He is also co-principal developer of the ESPRESO finite element library, which includes a parallel sparse solver designed for supercomputers with tens or hundreds of thousands of cores, with support for both GPU and Intel Xeon Phi accelerators. His research interests are optimisation of HPC applications, energy efficient computing, acceleration of scientific and engineering applications using GPU and many-core accelerators, development of scalable linear solvers, parallel rendering on new HPC architectures, and signal and image processing.

    Ondrej Vysocky received his M.Sc. degree in Computer Science from Brno University of Technology, Czech Republic, in 2016. His master's thesis focused on parallel I/O optimisation. Currently he is a PhD student at VSB – Technical University of Ostrava, Czech Republic, and he simultaneously works at IT4Innovations in the Infrastructure Research Lab. His research is focused on energy efficiency in high performance computing. He was an investigator of the Horizon 2020 READEX project, which dealt with the energy efficiency of High Performance Computing applications using dynamic tuning, and he has since been developing the MERIC library, a tool for energy measurement and hardware parameter tuning during parallel application runs.

    Andrea Bartolini received a Ph.D. degree in Electrical Engineering from the University of Bologna, Italy, in 2011. He is currently an assistant professor in the Department of Electrical, Electronic and Information Engineering (DEI) at the University of Bologna. Previously, he was a post-doctoral researcher in the Integrated Systems Laboratory at ETH Zurich. Since 2007 Dr Bartolini has published more than 100 papers in peer-reviewed international journals and conferences with a focus on dynamic resource management for embedded and HPC systems. For the past year he has led the power management co-design within the European Processor Initiative.

    Daniele Cesarini graduated in Computer Engineering from the University of Bologna in 2014, where he also earned a PhD degree in Electrical Engineering from the Department of Electrical, Electronic and Information Engineering in 2019. He is currently an HPC software engineer at CINECA, the Italian National Supercomputing Center, where he works in the area of performance optimisation on large-scale scientific applications for the new generation of heterogeneous HPC architectures. His research interests also concern the development of SW-HW co-design strategies as well as algorithms for parallel programming support for energy-efficient HPC systems.

    Robert Schöne works as a post-doc at Technische Universität Dresden, where he also received his PhD. His research includes micro-architectural features of processors, as well as tools and methods for measuring and tuning the performance and energy efficiency of parallel applications. After receiving his diploma, he worked on different projects targeting the measurement and tuning of the energy efficiency of computer systems. Among other things, he described and implemented interfaces that extend performance measurement frameworks for such cases. He was also part of the team that developed the Bull-specific power and energy measurement framework HDEEM. After his PhD, he was the scientific manager of the Horizon 2020 project READEX, which implemented an automated tool suite for energy efficiency optimisation. Currently, he teaches at the Faculty of Computer Science. Since receiving his diploma, he has published more than 30 papers and organised or co-organised four workshops with a focus on auto-tuning and energy efficiency.
    events.prace-ri.eu/event/964/
    Jan 29 9:00 to Jan 30 16:00
    Please, bring your own laptop. All the PATC courses at BSC are free of charge.

    Course conveners:

    Department and Research group: Computer Science - Workflows and Distributed Computing


    Yolanda Becerra, Data-driven Scientific Computing research line, Senior researcher
    Anna Queralt, Distributed Object Management research line, Senior researcher

    Course Lecturers:


    Department and Research group: Computer Sciences - Workflows and Distributed Computing

    Alex Barceló, Distributed Object Management research line, Researcher
    Yolanda Becerra, Data-driven Scientific Computing research line, Senior researcher
    Adrián Espejo, Data-driven Scientific Computing research line, Junior research engineer
    Daniel Gasull, Distributed Object Management research line, Research engineer
    Pol Santamaria, Data-driven Scientific Computing research line, Junior developer
    Anna Queralt, Distributed Object Management research line, Senior researcher



    Objectives:

    The objective of this course is to give an overview of the BSC storage solutions Hecuba and dataClay. These two platforms make it easy to store and manipulate distributed data from object-oriented applications, enabling programmers to handle object persistence using the same classes they use in their programs and thus avoiding time-consuming transformations between persistent and non-persistent data models. Hecuba and dataClay also enable programmers to manage distributed data transparently, without worrying about its location. This is achieved by adding a minimal set of annotations to the classes.

    Both Hecuba and dataClay can work independently or integrated with the COMPSs programming model and runtime to facilitate the parallelisation of applications that handle persistent data, thus providing a comprehensive mechanism for the efficient use of persistent storage solutions from distributed programming environments.

    Both platforms offer a common interface to the application developer that makes it easy to use one solution or the other, depending on the needs, without changing the application code. In addition, each of them has extra features that allow the programmer to take advantage of its particularities.
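
    To illustrate the "persistence by annotation" idea in the abstract, the following sketch uses a hypothetical StorageObject base class with make_persistent and get_by_alias stand-ins. These names are illustrative only and do not reproduce the actual Hecuba or dataClay APIs, which are covered in the course.

class StorageObject:
    """Hypothetical base class standing in for a persistence layer."""
    _store = {}                               # in-memory stand-in for a storage backend

    def make_persistent(self, alias):
        StorageObject._store[alias] = self    # "persist" the live object under an alias

    @classmethod
    def get_by_alias(cls, alias):
        return StorageObject._store[alias]    # retrieve it later, from anywhere


class Particle(StorageObject):
    # The application keeps using its own class; persistence comes from the base class.
    def __init__(self, x, y):
        self.x = x
        self.y = y


p = Particle(1.0, 2.0)
p.make_persistent(alias="particle-1")         # no explicit serialisation or database mapping
same_p = Particle.get_by_alias("particle-1")  # conceptually: fetch by alias, wherever it lives
print(same_p.x, same_p.y)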

    Learning Outcomes:  

    In the course, the Hecuba and dataClay syntax and programming methodology will be presented, together with an overview of their internals. An overview of COMPSs at the user level will also be provided, in order to take advantage of data distribution with both platforms. Attendees will receive a first lesson on programming with the common storage interface that will enable them to start programming with both frameworks.

    A hands-on session with simple introductory exercises will also be carried out for each platform, with and without COMPSs to distribute the computation. Students who complete this course will be able to develop simple Hecuba and dataClay applications and to run them both on a local resource and on a distributed platform (initially a private cloud).

    Prerequisites:

    Basic programming skills in Python and Java.

    Previous attendance at the PATC course on programming distributed systems with COMPSs is recommended.

     

    Agenda: 

    Day 1 (Jan 30)

    Session 1 / 9:30 – 13:00

    9:30-10:00 Round table. Presentation and background of participants
    10:00-11:00 Motivation, introduction and syntax of BSC storage platforms
    11:00-11:30 Coffee break
    11:30-12:15 Hands-on with storage API
    12:15-13:00 COMPSs overview and how to parallelize a sequential application
    13:00-14:30 Lunch break

    Session 2/ 14:30 – 18:00

    14:30-16:00 Hecuba specifics and hands-on
    16:00-16:30 Break
    16:30-18:00 dataClay specifics and hands-on

    END of COURSE

    events.prace-ri.eu/event/909/
    Jan 30 9:30 to 18:00