• PRACE Training Centres (PTCs)

  • PRACE operates ten PRACE Training Centres (PTCs), which have established a state-of-the-art curriculum for training in HPC and scientific computing. The PTCs carry out and coordinate training and education activities that enable both European academic researchers and European industry to use the computational infrastructure available through PRACE, and they provide top-class education and training opportunities for computational scientists in Europe.
    Together, the ten PRACE Training Centres (PTCs) deliver approximately 100 training events each year.

    PTC training events are advertised on the following pages. Registration is free and open to all (pending availability):
    https://events.prace-ri.eu/category/2/

    The following figure depicts the locations of the PTCs throughout Europe.
    [Figure: map of PATC/PTC locations]

    PATC events this month:

    December 2019
    Online Shared-memory programming with OpenMP

    ARCHER, the UK's national supercomputing service, offers training in software development and high-performance computing to scientists and researchers across the UK.

    Details

    Almost all modern computers now have a shared-memory architecture with multiple CPUs connected to the same physical memory, for example multicore laptops or large multi-processor compute servers. This course covers OpenMP, the industry standard for shared-memory programming, which enables serial programs to be parallelised easily using compiler directives. Users of desktop machines can use OpenMP on its own to improve program performance by running on multiple cores; users of parallel supercomputers can use OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of the compute nodes.

    This course will cover an introduction to the fundamental concepts of the shared variables model, followed by the syntax and semantics of OpenMP and how it can be used to parallelise real programs. Hands-on practical programming exercises will be included, with access to HPC provided for the duration of the course.
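
    As a flavour of the directive-based approach the course teaches, here is a minimal sketch (the loop and problem size are our own illustrative assumptions, not course material): a single OpenMP directive parallelises an otherwise serial loop. Compile with OpenMP enabled, e.g. gcc -fopenmp.

    /* Minimal OpenMP sketch -- illustrative, not taken from the course. */
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000            /* illustrative problem size */

    int main(void) {
        static double x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

        /* One directive shares the loop iterations among all available threads. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            y[i] += 3.0 * x[i];

        printf("y[0] = %.1f, max threads = %d\n", y[0], omp_get_max_threads());
        return 0;
    }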

    Trainer


    Mark Bull

    Mark teaches on EPCC's MSc in High Performance Computing and delivers many of our 'Shared-memory programming with OpenMP' and 'Single node optimisation' courses. He is EPCC's representative on the OpenMP standards body, and has been training computational scientists for over 20 years.

     

    Format

    This online course will run over four sessions on consecutive Wednesday afternoons, each running 14:00 - 16:30 UTC (15:00 - 17:30 CET) with a half-hour break 15:00-15:30 UTC (16:00 - 16:30 CET), starting on Wed 13th November with the last session on Wed 4th December.

    We will be using Blackboard Collaborate for the course, which is very simple to use and entirely browser-based.

    Collaborate usually works without problems with modern browsers, but Firefox or Chrome is recommended. Links to join each of the sessions will be published on the course materials page.

    Attendees will register for the course in the usual way using the registration form.

    Computing requirements

    All attendees will need their own desktop or laptop with the following software installed:


    web browser - e.g. Firefox or Chrome
    PDF viewer - e.g. Firefox, Adobe Acrobat


    and


    SSH client
    - on Mac/Linux, Terminal is fine
    - on Windows we recommend MobaXterm, which provides an SSH client, an inbuilt text file editor and an X11 graphics viewer, plus a bash shell environment. Although this is a bigger install, it is recommended (instead of PuTTY and Xming) if you will be accessing HPC machines regularly. There is a 'portable' version of MobaXterm which does not need admin install privileges.
    - on Windows, if you are not using MobaXterm, you can use PuTTY from www.putty.org/
    X11 graphics viewer
    - for Mac: www.xquartz.org/
    - for Windows (if you are not using MobaXterm): Xming, from sourceforge.net/project.....nload


    We have recorded an ARCHER Screencast: Logging on to ARCHER from Windows using PuTTY
    www.youtube.com/watch?v=oVFQg1qFjKQ
    Logging on to Cirrus is very similar, but substitute login.cirrus.ac.uk as the login address.

    We will provide accounts on the Cirrus system for all attendees who register in advance.

    Course Materials

    All the course materials, including lecture notes and exercise materials, are available on the Course Materials page.

    In addition, links to join each of the four online sessions and recordings of previous sessions will be available on the course materials page.
    events.prace-ri.eu/event/903/
    Nov 13 15:00 to Dec 4 18:00
    Description:

    The aim of this workshop is to deliver a "training on the job" school based on a class of selected numerical methods for parallel Computational Fluid Dynamics (CFD). The workshop aims to share the methodologies, numerical methods and implementations used by state-of-the-art numerical codes on High Performance Computing (HPC) clusters. The lectures will present the challenges of numerically solving Partial Differential Equations (PDEs) in fluid-dynamics problems using massively parallel clusters. The lectures will give a step-by-step walk-through of the numerical methods and their parallel aspects, starting from a serial code and working up to scalability on clusters, including strategies for parallelization (MPI, OpenMP, use of accelerators, plugging in numerical libraries, ...), with hands-on work during the lectures. Profiling and optimization techniques for standard and heterogeneous clusters will be shown during the school. Further information will be made available to participants once the speakers are confirmed.
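
    As one concrete instance of the serial-to-parallel path such a school walks through, the sketch below (an illustration under our own assumptions; the school's actual codes and exercises may differ) domain-decomposes a 1D Jacobi iteration for the Laplace equation across MPI ranks, with halo (ghost-cell) exchange between neighbours:

    /* 1D Jacobi with MPI halo exchange -- illustrative sketch only. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NLOC  1000   /* interior points per rank (assumed) */
    #define ITERS 500

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Local arrays with one ghost cell at each end. */
        double *u    = calloc(NLOC + 2, sizeof(double));
        double *unew = calloc(NLOC + 2, sizeof(double));
        if (rank == 0) u[0] = 1.0;   /* fixed global boundary value */

        int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        for (int it = 0; it < ITERS; it++) {
            /* Exchange ghost cells with neighbouring ranks. */
            MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                         &u[NLOC + 1], 1, MPI_DOUBLE, right, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[NLOC], 1, MPI_DOUBLE, right, 1,
                         &u[0], 1, MPI_DOUBLE, left, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            for (int i = 1; i <= NLOC; i++)           /* Jacobi update */
                unew[i] = 0.5 * (u[i - 1] + u[i + 1]);
            unew[0] = u[0]; unew[NLOC + 1] = u[NLOC + 1]; /* keep boundaries */
            double *tmp = u; u = unew; unew = tmp;
        }

        if (rank == 0) printf("done after %d iterations\n", ITERS);
        free(u); free(unew);
        MPI_Finalize();
        return 0;
    }

    The same structure extends to 2D/3D meshes, where each rank exchanges whole faces of its subdomain; hybrid MPI+OpenMP versions additionally thread the update loop.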

    Skills: 

    At the end of the course, the student will have acquired, and know how to use, the following skills:


    Numerical analysis
    Algorithms for PDE Solution
    Parallel computing (MPI, OpenMP, Accelerators) 
    HPC architecture
    Strategies for massive parallelization of numerical methods
    Numerical Libraries for HPC


    Target audience:

    MSc/PhD students, post-docs, academic and industrial researchers, and software developers who use, plan to use, or develop a code for CFD

    Pre-requisites:

    Previous course(s) on parallel computing, numerical analysis and algorithms for PDE solution.

    Admitted students:

    Attendance is free.

    The number of participants is limited to 40 students.
    Applicants will be selected according to their experience, qualifications and scientific interest, based on what is written in the registration form.
    Please use the field "Reason for participation" to specify skills that match the requested pre-requisites for the school.

    Deadline for registration: Monday 4 November 2019.

    Admitted and non-admitted students were contacted via email on Monday 11 November.

    If you submitted a registration and did not receive any email, please write to corsi.hpc@cineca.it.
    events.prace-ri.eu/event/929/
    Dec 2 9:00 to Dec 6 18:00
    Registration for this course is now open. Please bring your own laptop. All the PATC courses at BSC are free of charge.

    Course convener: Arnau Folch

    Course lecturers: 

    José Manuel González Vida (Malaga University), Matteo Cerminara (INGV Pisa), Leonardo Mingari (CASE Department, BSC)

    Objectives: This course focuses on modelling two of the highest-impact natural hazards, volcanic eruptions and tsunamis. The objective is to give a succinct theoretical overview and then introduce students to the use of several HPC flagship codes included in the Center of Excellence for Exascale in Solid Earth (ChEESE). ASHEE is a volcanic plume and PDC simulator based on a multiphase fluid-dynamic model conceived for compressible mixtures composed of gaseous components and solid particle phases. FALL3D is a Eulerian model for the atmospheric transport and ground deposition of volcanic tephra (ash), used in operational volcanic ash dispersal forecasts to prevent aircraft encounters with volcanic ash clouds and to perform re-routings avoiding contaminated airspace. T-HySEA solves the 2D shallow water equations in both hydrostatic and dispersive versions; it is based on a high-order Finite Volume (FV) discretisation (hydrostatic), with Finite Differences (FD) for the dispersive version, on two-way structured nested meshes in spherical coordinates. Together with hands-on sessions, the course will also tackle post-processing strategies based on Python. In recent years, Python has become one of the most popular choices for geoscientists: it is a modern, interpreted, object-oriented, open-source language that is easy to learn, easy to read, and fast to write, and the steady proliferation of open-source projects and libraries has facilitated rapid scientific development in the geoscience community. In addition, the language's modern data structures and object-oriented nature, along with an elegant syntax, enable Earth scientists to write more robust and less buggy code.
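
    For context, a standard conservative form of the 2D shallow water equations that T-HySEA-type solvers discretise is (this is the textbook formulation; the course materials define the code's exact hydrostatic and dispersive variants):

    \[
    \begin{aligned}
    &\partial_t h + \partial_x (hu) + \partial_y (hv) = 0,\\
    &\partial_t (hu) + \partial_x \bigl(hu^2 + \tfrac{1}{2} g h^2\bigr) + \partial_y (huv) = -g h\, \partial_x b,\\
    &\partial_t (hv) + \partial_x (huv) + \partial_y \bigl(hv^2 + \tfrac{1}{2} g h^2\bigr) = -g h\, \partial_y b,
    \end{aligned}
    \]

    where h is the water depth, (u, v) the depth-averaged velocity, g gravity, and b the bathymetry. Finite Volume schemes evolve cell averages of (h, hu, hv) using numerical fluxes across cell faces.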

    Learning outcomes: Participants will learn and gain experience in installing Solid Earth (SE) codes and related utilities and libraries, running numerical simulations, monitoring the execution of supercomputing jobs, and analyzing and visualizing model results.

    Level: (All courses are designed for specialists with at least 1st cycle degree or similar background experience)
    INTERMEDIATE: for trainees with some theoretical and practical knowledge; those who finished the beginners course

    Prerequisites:

    At least a university degree in progress in Earth Sciences, Computer Science or a related area

    Basic knowledge of Linux

    Knowledge of C, Fortran, MPI or OpenMP is recommended

    Knowledge of Earth Sciences data formats (GRIB, netCDF, HDF, ...) is recommended

    Basic knowledge of Python



    Agenda:

    Day 1

    Session 1 / 10:00am – 1:30pm (3 h lectures)

    10:00-11:30 Volcanic clouds and plumes: Introduction to the physical problem

    11:30-11:50 Coffee break

    11:50-13:30 Introduction to FALL3D

    13:30-14:30 Lunch break

     

    Session 2 / 2:30pm – 6:00 pm (1:30 h lectures, 2 h practical)

    14:30-16:00 Introduction to ASHEE

    16:00-16:20 Coffee break

    16:20-18:00 Installation and compilation of FALL3D and ASHEE

     

    Day 2

    Session 1 / 10:00am – 1:30pm (3 h hands-on)

    10:00-11:30 FALL3D hands on I

    11:30-11:50 Coffee break

    11:50-13:30 FALL3D hands on II

    13:30-14:30 Lunch break

     

    Session 2 / 2:30pm – 6:00 pm (1:30 h lectures, 2 h practical)

    14:30-16:00 ASHEE hands on I

    16:00-16:20 Coffee break

    16:20-18:00 ASHEE hands on II

     

    Day 3

    Session 1 / 10:00am – 1:30pm (1:30 h lectures, 1:40 h practical)

    10:00-11:30 Introduction to tsunami modeling and the Tsunami-HySEA code

    11:30-11:50 Coffee break

    11:50-13:30 Tsunami-HySEA: from simple to complex simulations

    13:30-14:30 Lunch break

     

    Session 2 / 2:30pm – 6:00 pm (3 h hands-on)

    14:30-16:00 Tsunami-HySEA hands on I

    16:00-16:20 Coffee break

    16:20-18:00 Tsunami-HySEA hands on II

     

    Day 4

    Session 1 / 10:00am – 1:30pm (3 h lectures)

    10:00-11:30 A brief introduction to the Python language and object oriented programming

    11:30-11:50 Coffee break

    11:50-13:30 Scientific computing tools and reading files and accessing remote data

    13:30-14:30 Lunch break

     

    A brief introduction to the Python language

    -Installing packages

    Object oriented programming

    -Examples on classes and motivation

    -How to make a class

    -Method Objects

    -Example: manipulating dates and times

    Scientific computing tools

    -Vectors and arrays: basic operations and manipulations

    -References and copies of arrays

    -Vectorization

    -Statistics tools

    -Data Analysis with Pandas

    Reading files and accessing remote data

    -Read and write multi-column data files

    -File formats used in geosciences: netCDF, HDF5, HDF-EOS 2, and GRIB 1 and 2

    -Data Access Services: OPeNDAP, NetCDF Subset Service, etc.

    -Example: Reading data from OPeNDAP

     

    Session 2 / 2:30pm – 6:00 pm (3h hands-on)

    14:30-16:00 Visualization

    16:00-16:20 Coffee break

    16:20-18:00 Examples and exercises

    Visualization

    -Simple line plots

    -Adjusting the plot

    -Visualization of geographic data

    -3D Scientific data visualization

     

    Examples and exercises

    -FALL3D pre and post-processing tools

    End of Course
    events.prace-ri.eu/event/906/
    Dec 2 10:00 to Dec 5 18:00

    This course covers performance engineering approaches on the compute node level. Even application developers who are fluent in OpenMP and MPI often lack a good grasp of how much performance could at best be achieved by their code.

    This is because parallelism takes us only half the way to good performance.

    Even worse, slow serial code tends to scale very well, hiding the fact that resources are wasted. This course conveys the required knowledge to develop a thorough understanding of the interactions between software and hardware. This process must start at the core, socket, and node level, where the code gets executed that does the actual computational work. We introduce the basic architectural features and bottlenecks of modern processors and compute nodes.

    Pipelining, SIMD, superscalarity, caches, memory interfaces, ccNUMA, etc., are covered. A cornerstone of node-level performance analysis is the Roofline model, which is introduced in due detail and applied to various examples from computational science. We also show how simple software tools can be used to acquire knowledge about the system, run code in a reproducible way, and validate hypotheses about resource consumption. Finally, once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of code changes can often be predicted, replacing hope-for-the-best optimizations by a scientific process.
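
    For orientation, the basic Roofline model referred to above bounds the attainable performance P of a loop by

    \[ P = \min\bigl(P_{\mathrm{peak}},\; I \cdot b_S\bigr) \]

    where P_peak is the peak arithmetic throughput of the processor, I is the computational intensity of the loop (useful flops per byte of memory traffic), and b_S is the achievable memory bandwidth; a loop with I * b_S < P_peak is memory bound, otherwise it is compute bound. (This is the standard formulation; the course derives the model in detail and discusses its limitations.)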

     

    The course is a PRACE training event.


    Introduction

    Our approach to performance engineering
    Basic architecture of multicore systems: threads, cores, caches, sockets, memory
    The important role of system topology


    Tools: topology & affinity in multicore environments

    Overview
    likwid-topology and likwid-pin


    Microbenchmarking for architectural exploration

    Properties of data paths in the memory hierarchy
    Bottlenecks
    OpenMP barrier overhead (see the sketch below)
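
    A minimal sketch of such a microbenchmark (our own illustration, not the course's code): time a large number of empty barriers and average. The measurement is rough -- it includes loop overhead and, as written, the one-time cost of starting the thread team.

    /* Rough OpenMP barrier-overhead microbenchmark -- illustrative only. */
    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        const int reps = 100000;
        double t0 = omp_get_wtime();
        #pragma omp parallel
        {
            for (int i = 0; i < reps; i++) {
                #pragma omp barrier    /* all threads synchronise here */
            }
        }
        double t1 = omp_get_wtime();
        printf("approx. %.0f ns per barrier with %d threads\n",
               (t1 - t0) / reps * 1e9, omp_get_max_threads());
        return 0;
    }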


    Roofline model: basics

    Model assumptions and construction
    Simple examples
    Limitations of the Roofline model


    Pattern-based performance engineering
    Optimal use of parallel resources

    Single Instruction Multiple Data (SIMD)
    Cache-coherent Non-Uniform Memory Architecture (ccNUMA)
    Simultaneous Multi-Threading (SMT)


    Tools: hardware performance counters

    Why hardware performance counters?
    likwid-perfctr
    Validating performance models


    Roofline case studies

    Dense matrix-vector multiplication
    Sparse matrix-vector multiplication
    Jacobi (stencil) smoother


    Optional: The ECM performance model


    events.prace-ri.eu/event/901/
    Dec 3 9:00 to Dec 4 18:00
    DESCRIPTION (BASICS COURSE)

    Would you like to make 3D visualisations that are visually more attractive than what ParaView or VisIt can provide? Do you need an image for a grant application that needs to look spectacular? Would you like to create a cool animation of your simulation data? Then this course may be for you!

    The goal of this course is to provide you with hands-on knowledge to produce great images and basic animations from 3D scientific data. We will be using the open-source package Blender 2.8 (www.blender.org), which provides good basic functionality while also supporting advanced use and general editing of 3D data. It is also a lot of fun to work with (once you get used to its graphical interface).

    Example types of relevant scientific data are 3D cell-based simulations, 3D models from photogrammetry, (isosurfaces of) 3D medical scans, molecular models and earth sciences data. Note that we don't focus on information visualization of abstract data, such as graphs (although you could convert those into a 3D model first and then use them in Blender).

    We encourage participants to bring along the data they normally work with, or a sample thereof, to which they would like to apply the course knowledge.

    Topics covered:

    - Blender UI and workflow, scene structure
    - Basic importing of data
    - Simple 3D mesh editing with modifiers
    - Basic animation
    - Rendering, lighting and materials


    NOTE FROM THE TRAINERS

    This course was previously given in a single day, but we have now split it into a Basics part and an Advanced part, each a full day. The course described above is the Basics part. The follow-up Advanced course, with more in-depth information and a few extra topics, will be held in Q1 2020.

     

    IMPORTANT INFORMATION: WAITING LIST

    If the course gets fully booked, no more registrations are accepted through this website. However, you can be included in the waiting list: for that, please send an email to training@surfsara.nl and you'll be informed when a place becomes available.
    events.prace-ri.eu/event/934/
    Dec 3 9:00 to 17:40
    This course focuses on the development and execution of bioinformatics pipelines and on their optimization with regard to computing time and disk space. In an era where the data produced per analysis is on the order of terabytes, simple serial bioinformatics pipelines are no longer feasible; hence the need for scalable, high-performance parallelization and analysis tools that can easily cope with large-scale datasets. To this end we will study the common performance bottlenecks emerging from everyday bioinformatics pipelines and see how to cut execution times for effective data analysis on current and future supercomputers.
    As a case study, two different bioinformatics pipelines (whole-exome and transcriptome analysis) will be presented and re-implemented on the Cineca supercomputers in dedicated hands-on sessions aimed at applying the concepts explained in the course.
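
    To make the parallelization idea concrete, here is a minimal sketch using only Python's standard library; the tool names "my_aligner" and "my_caller" are placeholders, not the pipeline actually taught. Stages for different samples are independent, so they can run concurrently instead of one after another:

        # Sketch only: run an identical two-stage pipeline over many samples
        # in parallel; different samples are independent of each other.
        import subprocess
        from concurrent.futures import ProcessPoolExecutor

        SAMPLES = ["sample1", "sample2", "sample3", "sample4"]

        def process(sample):
            # placeholder commands; each stage consumes the previous stage's output
            subprocess.run(["my_aligner", f"{sample}.fastq", "-o", f"{sample}.bam"],
                           check=True)
            subprocess.run(["my_caller", f"{sample}.bam", "-o", f"{sample}.vcf"],
                           check=True)
            return sample

        if __name__ == "__main__":
            with ProcessPoolExecutor(max_workers=4) as pool:
                for done in pool.map(process, SAMPLES):
                    print(f"finished {done}")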

    Skills:
    By the end of the course each student should be able to:

    Manage the transfer of big data files and/or large numbers of files between the local computer and the Cineca platforms
    Prepare the environment to analyse large amounts of biological data on a supercomputer
    Run individual parallel bioinformatics programs on a supercomputer
    Combine bioinformatics applications into pipelines on a supercomputer


    Target audience:
    Biologists, bioinformaticians and computer scientists interested in approaching large-scale NGS-data analysis for the first time.

    Pre-requisites:
    Basic knowledge of python and shell command line. A very basic knowledge of biology is recommended but not required.

    Grant
    The course is FREE of charge.
    Lunch will be offered to all participants on each of the three days, and some grants are available. To be eligible, the only requirements are that you are not funded by your institution to attend the course and that you work or live at an institute outside the Rome area. The grant will be 300 euros for students working and living outside Italy and 150 euros for students working and living in Italy (outside the Rome area). Some documentation will be required, and the grant will be paid only after certified attendance of at least 80% of the lectures.

    Further information about how to request the grant will be provided when the course is confirmed, about 3 weeks before the starting date.
    events.prace-ri.eu/event/939/
    Dec 9 9:00 to Dec 11 17:00
    HPC Carpentries Course page

    All the information on this course can be found on the HPC Carpentry page for this workshop at: archer-cse.github.io/2.....hell/

    Details

    This course introduces High Performance Computing (HPC) and how to access remote advanced computing facilities via the command line. After completing this course, participants will:


    Understand motivations for using HPC in research
    Understand how HPC systems are put together to achieve performance and how they differ from desktops/laptops
    Know how to connect to remote HPC systems and transfer data
    Be able to use the Bash command line on remote systems
    Know how to use a scheduler to work on a shared system
    Be able to use software modules to access different HPC software
    Be able to work effectively on a remote shared resource


    Full details, including the course timetable, will be available soon.

    This course is being run with support from the ARCHER National Supercomputing Service and PRACE.

    This course is free to all.

    Pre-requisites

    There are no prerequisites for this workshop.

    Requirements

    Participants must bring a laptop with a Mac, Linux, or Windows operating system (not a tablet, Chromebook, etc.) that they have administrative privileges on. They should have a few specific software packages installed, as detailed on the ARCHER Software setup page. They are also required to abide by the ARCHER Training Code of Conduct.

    Accessibility

    We are committed to making this workshop accessible to everybody. The workshop organisers have checked that:


    The room is wheelchair / scooter accessible.
    Accessible restrooms are available.
    Materials will be provided in advance of the workshop, and large-print handouts are available if requested from the organizers in advance. If we can help make learning easier for you (e.g. sign-language interpreters, lactation facilities) please get in touch and we will attempt to provide them.


    Course Materials

    Course page including slides and exercise material.

    Trainer


    Andy Turner

    Andy Turner leads the application support teams for the UK national HPC services ARCHER and Cirrus. He is also heavily involved in advanced computing training at EPCC. Andy has a particular interest in enabling new user communities to make use of HPC and in the use of novel user engagement to improve the HPC user experience. He has been involved in the HPC Carpentry initiative for the past two years.
    events.prace-ri.eu/event/924/
    Dec 9 11:00 to Dec 10 17:00
    Efficient Use of HPC Systems

    11 - 12 December 2019

    Description

    The purpose of this course is to give existing and potential users of PRACE HPC systems an introduction to using these systems efficiently: their typical tools, software environment, compilers, libraries, MPI/OpenMP, batch system, etc.

    The trainees will learn what the HPC systems offer, how they work and how to apply for access to these infrastructures - both PRACE Tier-1 and Tier-0.
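
    As a flavour of the hands-on material (a sketch under the assumption that mpi4py is available on the system; the actual exercises may differ), a typical first check on any HPC system is to see where the batch system placed each MPI rank:

        # Sketch only: report which node each MPI rank is running on.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        print(f"rank {comm.Get_rank()} of {comm.Get_size()} "
              f"on node {MPI.Get_processor_name()}")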

    Prerequisites

    The course is addressed to any potential user of an HPC infrastructure. Background in modules, compilers, MPI/OpenMP/CUDA, batch systems, and running time-consuming applications is desirable.

    Bring your own laptop in order to be able to participate in the hands-on training. Hands-on work will be done in pairs, so if you don't have a laptop you may work with a colleague.

    Course language is English.

    Registration

    The maximum number of participants is 25.

    Registrations will be evaluated on a first-come, first-served basis. GRNET is responsible for the selection of the participants on the basis of the training requirements and the technical skills of the candidates. GRNET will also seek to guarantee the maximum possible geographical coverage with the participation of candidates from many countries.

    Venue

    GRNET headquarters

    Address: 2nd Floor, 7 Kifisias Av., GR 115 23 Athens

    Information on how to reach the GRNET headquarters is available on the GRNET website: grnet.gr/en/contact-us/

    Accommodation options near GRNET can be found at: grnet.gr/wp-content/up.....n.pdf

    ARIS - System Information

    ARIS is the Greek national supercomputer, deployed and operated by GRNET (Greek Research and Technology Network) in Athens. ARIS consists of 532 computational nodes separated into four “islands”, as listed here:

    426 thin nodes: regular compute nodes without accelerators.
    44 gpu nodes: nodes accelerated with 2 x NVIDIA Tesla K40m GPUs.
    18 phi nodes: nodes accelerated with 2 x Intel Xeon Phi 7120P coprocessors.
    44 fat nodes: compute nodes with a larger number of cores and more memory per core than a thin node.
    1 ml node: machine-learning node consisting of 1 server with 2 Intel E5-2698v4 processors, 512 GB of main memory and 8 NVIDIA V100 GPU cards.

    All the nodes are connected via an InfiniBand network and share 2 PB of GPFS storage. The infrastructure also includes an IBM TS3500 tape library with a maximum storage capacity of about 6 PB. Access to the system is provided through two login nodes.

    About Tutors

    Dr. Dellis (Male) holds a B.Sc. in Chemistry (1990) and a PhD in Computational Chemistry (1995) from the National and Kapodistrian University of Athens, Greece. He has extensive HPC and grid computing experience. He used HPC systems in computational chemistry research projects on fz-juelich machines (2003-2005) and received an HPC-Europa grant at BSC (2009). In the EGEE/EGI projects he acted as application support and VO software manager for the SEE VO, grid site administrator (HG-02, GR-06) and NGI_GRNET support staff (2008-2014). In PRACE 1IP/2IP/3IP/4IP/5IP he was involved in benchmarking tasks, either as a group member or as BCO (2010-2018). He currently leads the HPC team at GRNET S.A.

    Kyriakos Ginis received his Diploma in Electrical and Computer Engineering in 2003 from the National Technical University of Athens, Greece. Between 2004 and 2014 he participated in the European projects EGEE I/II/III and EGI as a grid site administrator of the HellasGrid sites HG-01-GRNET, HG-06-EKT and HG-08-Okeanos. Since 2014 he has worked at GRNET as a system administrator of the ARIS HPC system, primarily responsible for hardware, operating systems and file/storage systems. He continues to maintain the HellasGrid sites HG-06 and HG-08, and supports other GRNET services such as the unique and persistent identifiers (PID) service, also part of the EUDAT project.

    Nikolaos Nikoloutsakos holds a diploma in Computer Engineering and Informatics (2014) from the University of Patras, Greece. Since 2015 he has worked as a software engineer at GRNET S.A., where he is part of the user application support team for the ARIS HPC system. He has been involved in major national and European projects, such as PRACE and EUDAT. His main research interests include parallel programming models and co-processor programming using GPUs and Intel Xeon Phis.

    About GRNET

    GRNET – National Infrastructures for Research and Technology – is the national network, cloud computing and IT e-Infrastructure and services provider. It supports hundreds of thousands of users in the key areas of Research, Education, Health and Culture.

    GRNET provides an integrated environment of cutting-edge technologies, combining a country-wide dark fiber network, data centers, a high-performance computing system, and Internet, cloud computing, authentication and authorization, security, and audio, voice and video services.

    GRNET scientific and advisory duties address the areas of information technology, digital technologies, communications, e-government, new technologies and their applications, research and development, education, as well as the promotion of Digital Transformation.

    Through international partnerships and the coordination of EC co-funded projects, it creates opportunities for know-how development and exploitation, and contributes, in a decisive manner, to the development of Research and Science in Greece and abroad.

    National Infrastructures for Research and Technology – Networking Research and Education

    www.grnet.gr, hpc.grnet.gr
    events.prace-ri.eu/event/945/
    Dec 11 9:00 to Dec 12 16:00
    The Flemish Supercomputing Center (VSC) organizes an MPI course in Heverlee (Belgium). This course is offered with PRACE support by SURFsara (The Netherlands) and is based on a PATC course developed by Dr. Rolf Rabenseifner (HLRS, Stuttgart).

    For registration and more information, please visit the following page:

    www.vscentrum.be/mpi
    events.prace-ri.eu/event/966/
    Dec 11 9:00 to Dec 12 17:00
    Description

    This course gives a practical introduction to deep learning, convolutional and recurrent neural networks, GPU computing, and tools to train and apply deep neural networks for natural language processing, images, and other applications.

    The course consists of lectures and hands-on exercises. TensorFlow 2, Keras, and PyTorch will be used in the exercise sessions. CSC's Notebooks environment will be used on the first day of the course, and the new Puhti-AI partition on the second day.
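
    As an indication of the level (a sketch assuming TensorFlow 2; the layer sizes are illustrative and not the course's actual exercises), a minimal Keras convolutional network for image classification looks like this:

        # Sketch only: a small Keras CNN for 28x28 grayscale images, 10 classes.
        import tensorflow as tf

        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation="relu",
                                   input_shape=(28, 28, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # model.fit(x_train, y_train, epochs=5)  # data loading omitted here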

    Learning outcome

    After the course the participants should have the skills and knowledge needed to begin applying deep learning for different tasks and utilizing the GPU resources available at CSC for training and deploying their own neural networks.

    Prerequisites

    The participants are assumed to have a working knowledge of Python and a suitable background in data analysis, machine learning, or a related field. Previous experience in deep learning is not required, but the fundamentals of machine learning are not covered in this course. Basic knowledge of a Linux/Unix environment will be assumed.

    Agenda (tentative)

    Day 1, Thursday 12.12

    09.00 – 11.00 Introduction to deep learning and to Notebooks
    11.00 – 12.00 Multi-layer perceptrons
    12.00 – 13.00 Lunch
    13.00 – 14.30 Image data and convolutional neural networks
    14.30 – 16.00 Text data, recurrent neural networks, and attention

    Day 2, Friday 13.12

    09.00 – 10.30 Deep learning frameworks, GPUs, batch jobs
    10.30 – 12.00 Image classification exercises
    12.00 – 13.00 Lunch
    13.00 – 14.00 Text categorization exercises
    14.00 – 16.00 Cloud, using multiple GPUs

    Coffee will be served in both the morning and afternoon sessions.

    Lecturers:

    Markus Koskela (CSC), Mats Sjöberg (CSC)

    Language: English
    Price: Free of charge
    events.prace-ri.eu/event/941/
    Dec 12 8:00 to Dec 13 15:00
    The rapid growth of artificial intelligence and data science has made scikit-learn one of the most popular Python libraries. The tutorial will present the main components of scikit-learn, covering aspects such as standard classifiers and regressors, cross-validation, and pipeline construction, with examples from various fields of application. Hands-on sessions will focus on medical applications, such as classification for computer-aided diagnosis or regression for the prediction of clinical scores.
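
    As a taste of those components (a sketch assuming scikit-learn is installed, using a bundled toy dataset rather than the tutorial's medical data), a pipeline can be built and cross-validated in a few lines:

        # Sketch only: scaling + logistic regression, evaluated by 5-fold CV.
        from sklearn.datasets import load_breast_cancer
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = load_breast_cancer(return_X_y=True)
        pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        scores = cross_val_score(pipe, X, y, cv=5)
        print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")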

    Learning outcomes:

    Ability to solve a real-world machine learning problem with scikit-learn

    Prerequisites:


    Basic knowledge of Python (pandas, numpy)
    Notions of machine learning
    No prior medical knowledge is required

    events.prace-ri.eu/event/933/
    Dec 18 9:30 to 18:00