• PRACE Training Centres (PTCs)

  • PRACE operates ten PRACE Training Centres (PTCs), which have established a state-of-the-art curriculum for training in HPC and scientific computing. PTCs carry out and coordinate training and education activities that enable both European academic researchers and European industry to utilise the computational infrastructure available through PRACE, providing top-class education and training opportunities for computational scientists in Europe.
    The ten PRACE Training Centres (PTCs) together host approximately 100 training events each year.

    PTC training events are advertised on the following pages. Registration is free and open to all (pending availability):
    https://events.prace-ri.eu/category/2/

    The following figure depicts the locations of the PTCs throughout Europe.
    [Figure: PATC/PTC locations]

    PATC events in June 2019:
    Overview

    Learn how to train and deploy a neural network to solve real-world problems, how to generate effective descriptions of content within images and video clips, how to effectively parallelize training of deep neural networks on multiple GPUs, and how to accelerate your applications with CUDA C/C++ and OpenACC.

    This new four-day workshop, offered for the first time at LRZ, combines lectures on the fundamentals of Deep Learning for Multiple Data Types and for Multi-GPUs with lectures on Accelerated Computing with CUDA C/C++ and OpenACC.

    The lectures are interleaved with many hands-on sessions using Jupyter Notebooks. The exercises will be done on a fully configured GPU-accelerated workstation in the cloud.

    The workshop is co-organized by LRZ and the NVIDIA Deep Learning Institute (DLI) for the Partnership for Advanced Computing in Europe (PRACE). Since 2012, LRZ, as part of GCS, has been one of the currently ten PRACE Training Centres, which serve as European hubs and key drivers of advanced, high-quality training for researchers working in the computational sciences.

    NVIDIA DLI offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning.

    All instructors are NVIDIA-certified University Ambassadors.

    Agenda

    1st day: Fundamentals of Deep Learning for Multiple Data Types

    This day explores how convolutional and recurrent neural networks can be combined to generate effective descriptions of content within images and video clips.

    Learn how to train a network using TensorFlow and the Microsoft Common Objects in Context (COCO) dataset to generate captions from images and video by:


    • Implementing deep learning workflows like image segmentation and text generation
    • Comparing and contrasting data types, workflows, and frameworks
    • Combining computer vision and natural language processing


    Upon completion, you’ll be able to solve deep learning problems that require multiple types of data inputs.

    2nd day: Fundamentals of Deep Learning for Multi-GPUs

    The computational requirements of deep neural networks used to enable AI applications like self-driving cars are enormous. A single training cycle can take weeks on a single GPU, or even years for larger datasets like those used in self-driving car research. Using multiple GPUs for deep learning can significantly shorten the time required to train on large amounts of data, making solving complex problems with deep learning feasible.

    On the 2nd day we will teach you how to use multiple GPUs to train neural networks. You'll learn:



    • Approaches to multi-GPU training
    • Algorithmic and engineering challenges of large-scale training
    • Key techniques used to overcome these challenges



    Upon completion, you'll be able to effectively parallelize training of deep neural networks using TensorFlow.
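
    The data-parallel approach at the heart of the multi-GPU material can be sketched in plain C: each simulated "GPU" computes gradients on its own shard of the data, the gradients are averaged (the role an all-reduce collective plays on real hardware), and every replica applies the same update. The worker count, toy data, and learning rate below are illustrative assumptions, not course code.

    ```c
    #include <stdio.h>

    #define N_WORKERS 4   /* stands in for 4 GPUs */
    #define N_SAMPLES 8   /* samples per worker shard */

    /* gradient of the squared loss (w*x - y)^2 / 2 with respect to w */
    static double grad(double w, double x, double y) {
        return (w * x - y) * x;
    }

    /* one full data-parallel training run; returns the learned parameter */
    double train(void) {
        double w = 0.0;           /* shared parameter, replicated per worker */
        const double lr = 0.005;  /* learning rate */

        for (int step = 0; step < 200; ++step) {
            double avg_grad = 0.0;
            for (int worker = 0; worker < N_WORKERS; ++worker) {
                /* each worker sees only its own shard of data from y = 2x */
                double local = 0.0;
                for (int i = 0; i < N_SAMPLES; ++i) {
                    double x = worker * N_SAMPLES + i + 1;
                    local += grad(w, x, 2.0 * x);
                }
                avg_grad += local / N_SAMPLES;
            }
            avg_grad /= N_WORKERS;  /* "all-reduce": average across workers */
            w -= lr * avg_grad;     /* identical update on every replica */
        }
        return w;
    }

    int main(void) {
        printf("learned w = %.3f (target 2.0)\n", train());
        return 0;
    }
    ```

    Because every replica starts from the same parameters and applies the same averaged gradient, all copies of the model stay in sync, which is exactly why this scheme scales to many GPUs.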

    3rd day: Fundamentals of Accelerated Computing with CUDA C/C++

    The CUDA computing platform enables the acceleration of CPU-only applications to run on the world’s fastest massively parallel GPUs. On the 3rd day you will experience C/C++ application acceleration by:



    • Accelerating CPU-only applications to run their latent parallelism on GPUs
    • Utilizing essential CUDA memory management techniques to optimize accelerated applications
    • Exposing accelerated application potential for concurrency and exploiting it with CUDA streams
    • Leveraging command-line and visual profiling to guide and check your work



    Upon completion, you’ll be able to accelerate and optimize existing C/C++ CPU-only applications using the most essential CUDA tools and techniques. You’ll understand an iterative style of CUDA development that will allow you to ship accelerated applications fast.

    4th day: Fundamentals of Accelerated Computing with OpenACC

    On the last day you learn the basics of OpenACC, a high-level, directive-based programming model for GPUs. Discover how to accelerate the performance of your applications beyond the limits of CPU-only programming with simple pragmas. You’ll learn:



    • How to profile and optimize your CPU-only applications to identify hot spots for acceleration
    • How to use OpenACC directives to GPU-accelerate your codebase
    • How to optimize data movement between the CPU and the GPU accelerator



    Upon completion, you'll be ready to use OpenACC to GPU accelerate CPU-only applications.
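
    As a taste of the directive style the day teaches, here is a minimal OpenACC saxpy sketch. With an OpenACC-capable compiler (e.g. nvc -acc) the annotated loop is offloaded to the GPU; compilers without OpenACC support simply ignore the pragma and run the loop serially with the same result. The function and array names are illustrative, not course material.

    ```c
    #include <stdio.h>

    #define N 1000000

    /* y = a*x + y, with the loop marked for GPU offload */
    void saxpy(int n, float a, const float *x, float *y) {
        /* copyin: x is only read on the device; copy: y is read and written */
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main(void) {
        static float x[N], y[N];
        for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy(N, 3.0f, x, y);  /* every element becomes 3*1 + 2 */

        printf("y[0] = %.1f, y[N-1] = %.1f\n", y[0], y[N - 1]);
        return 0;
    }
    ```

    The data clauses are the part the course spends most time on: getting movement between CPU and GPU memory right is usually what determines whether the directive actually pays off.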

    Important information

    You must bring your own laptop to this workshop!

    After you are accepted, please create an account at courses.nvidia.com/join.

    Ensure your laptop will run smoothly by going to websocketstest.com/. Under Environment, check that "WebSockets supported" shows Yes, and that the Data Receive, Send, and Echo tests all show Yes under WebSockets (Port 80). If there are issues with WebSockets, try updating your browser. If you have any questions, please contact Marjut Dieringer at mdieringer"at"nvidia.com.

    PRACE Training and Education

    The mission of PRACE (Partnership for Advanced Computing in Europe) is to enable high-impact scientific discovery and engineering research and development across all disciplines to enhance European competitiveness for the benefit of society. PRACE has an extensive education and training effort through seasonal schools, workshops, and scientific and industrial seminars throughout Europe. Seasonal schools target broad HPC audiences, whereas workshops focus on particular technologies, tools, disciplines, or research areas.

    NVIDIA Deep Learning Institute

    The NVIDIA Deep Learning Institute delivers hands-on training for developers, data scientists, and engineers. The program is designed to help you get started with training, optimizing, and deploying neural networks to solve real-world problems across diverse industries such as self-driving cars, healthcare, online services, and robotics.


    events.prace-ri.eu/event/860/
    Jun 3 9:00 to Jun 6 17:00
    Description:

    The increasing amount of scientific data collected through sensors or computational simulations can benefit from new processing techniques that extract new insights from raw data. The purpose of this one-week school is to present researchers and scientists with methods, tools, and techniques for exploring and mining large data sets using Cineca high-performance resources. The school is an introductory set of lectures aimed at training beginner participants in applying relevant statistical, machine learning, and deep learning algorithms to create classification and predictive models, using Cineca resources to execute efficient processing jobs. The school will consist of introductory lectures held by data scientists, plus hands-on sessions.
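
    As an illustration of the kind of classification model the school builds (the course itself works in Python and R on Cineca systems; this standalone C sketch with invented toy data only mirrors the algorithm), logistic regression trained by gradient descent separates two classes on a single feature:

    ```c
    #include <math.h>
    #include <stdio.h>

    #define N_POINTS 8

    static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

    /* trains w, b by batch gradient descent; returns the decision boundary -b/w */
    double train_boundary(void) {
        /* toy one-feature data: label 1 for x > 5, label 0 for x < 5 */
        const double xs[N_POINTS] = {1, 2, 3, 4, 6, 7, 8, 9};
        const int    ys[N_POINTS] = {0, 0, 0, 0, 1, 1, 1, 1};

        double w = 0.0, b = 0.0;
        const double lr = 0.5;

        for (int epoch = 0; epoch < 2000; ++epoch) {
            double gw = 0.0, gb = 0.0;
            for (int i = 0; i < N_POINTS; ++i) {
                /* derivative of the cross-entropy loss w.r.t. the logit */
                double err = sigmoid(w * xs[i] + b) - ys[i];
                gw += err * xs[i];
                gb += err;
            }
            w -= lr * gw / N_POINTS;
            b -= lr * gb / N_POINTS;
        }
        return -b / w;  /* where the predicted probability crosses 0.5 */
    }

    int main(void) {
        printf("decision boundary near x = %.2f\n", train_boundary());
        return 0;
    }
    ```

    The same loop, swapped to a library call and run over far more data, is the pattern the hands-on sessions scale up on HPC resources.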

    Skills:

    At the end of the course, the student will have acquired the following skills:

    • Use of Cineca HPC resources

    • Machine Learning algorithms and libraries 

    • Deep Learning frameworks

    Target Audience:

    Young students, PhD candidates, and researchers in computational sciences and scientific areas with different backgrounds, looking for new technologies and methods to process and analyse large amounts of data.

    Prerequisites:

    Participants must have basic knowledge of statistics, of the fundamentals of computer programming with Python and R, and of using GNU/Linux-based systems.

    Grant
    Lunch for the five days will be offered to all participants, and some grants are available. To be eligible, you must not be funded by your institution to attend the course, and you must work or live in an institute outside the Rome area. The grant will be 300 euros for students working and living outside Italy and 150 euros for students working and living in Italy. Some documentation will be required, and the grant will be paid only after certified attendance of at least 80% of the lectures.

    Further information about how to request the grant will be provided upon confirmation of the course, about 3 weeks before the starting date.

    The number of participants is limited to 20 students. Applicants will be selected according to their experience, qualifications, and scientific interest, based on what is written in the "Reason for participation" field of the registration form.

     

    APPLICATION DEADLINE

    May 3rd, 2019. 

    Students will be notified of their admission by email on Monday, May 13th.

    Attendance is FREE. 

     
    events.prace-ri.eu/event/832/
    Jun 10 9:00 to Jun 14 18:00
    Annotation

    Explore the fundamentals of deep learning by training neural networks and using results to improve performance and capabilities.

    During this day, you’ll learn the basics of deep learning by training and deploying neural networks. You’ll learn how to:


    • Implement common deep learning workflows, such as image classification and object detection
    • Experiment with data, training parameters, network structure, and other strategies to increase performance and capability
    • Deploy your neural networks to start solving real-world problems


    Upon completion, you’ll be able to start solving problems on your own with deep learning.

    NVIDIA Deep Learning Institute (DLI) offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning.

    This workshop contains lectures and hands-on exercises about fundamentals of Deep Learning for Computer Vision, to learn how to train and deploy a neural network to solve real-world problems.

    The lectures are interleaved with many hands-on sessions using Jupyter Notebooks. The exercises will be done on a fully configured GPU-accelerated workstation in the cloud.

    This workshop is organized by IT4Innovations, which is a certified NVIDIA DLI University Ambassador site.

    Target audience and Purpose of the course

    Anyone interested in basics of deep learning. Upon completion, you’ll be able to start solving problems on your own with deep learning.

    This course is only offered to academia (see details below in section Capacity and Fees).

    About the tutor(s)

    Georg Zitzlsberger is a research specialist for Machine and Deep Learning. He recently received his certification from Nvidia as a University Ambassador of the Nvidia Deep Learning Institute (DLI) program. This certification allows him to offer Nvidia DLI courses to academic users of IT4Innovations' HPC services.

    NVIDIA Deep Learning Institute

    The NVIDIA Deep Learning Institute delivers hands-on training for developers, data scientists, and engineers. The program is designed to help you get started with training, optimizing, and deploying neural networks to solve real-world problems across diverse industries such as self-driving cars, healthcare, online services, and robotics.

    Acknowledgement

    This course is supported by the PRACE-5IP project – the European Union's Horizon 2020 research and innovation programme under grant agreement No. 730913 – and is sponsored by Nvidia as part of the Nvidia Deep Learning Institute (DLI) University Ambassador program.

    This work was also supported by The Ministry of Education, Youth and Sports from the Large Infrastructures for Research, Experimental Development and Innovations project ”IT4Innovations National Supercomputing Center – LM2015070”.
    events.prace-ri.eu/event/878/
    Jun 11 8:30 to 17:15
    In recent years machine learning and deep learning techniques in particular have developed tremendously. Neural networks are being used in more and more application domains going from computer vision to speech recognition, and even replacing parts of the compute pipeline for scientific HPC applications.

    IMPORTANT INFORMATION: WAITING LIST

    If the course gets fully booked, no more registrations are accepted through this website. However, you can be included in the waiting list: for that, please send an email to training@surfsara.nl and you'll be informed when a place becomes available.

    This course takes you from the essential concepts to the efficient use of HPC infrastructures for getting the best performance out of different machine learning tools. Several hands-on sessions present general algorithms and some of the scalability challenges involved when using both large-scale data and large-scale models.
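One of the scalability challenges such courses cover, keeping model replicas in sync across workers, can be illustrated with a small dependency-free sketch of synchronous gradient averaging (the idea behind data-parallel training; all names here are illustrative, not from the course):

```python
# Synchronous data-parallel step: each worker computes a gradient on its
# shard of the data, then all workers average before updating the model.

def local_gradient(weight, shard):
    # Least-squares gradient for the model y = weight * x on one data shard.
    return sum(2 * x * (weight * x - y) for x, y in shard) / len(shard)

def data_parallel_step(weight, shards, lr=0.1):
    grads = [local_gradient(weight, s) for s in shards]  # in parallel on real HPC
    avg = sum(grads) / len(grads)                        # the "all-reduce" step
    return weight - lr * avg

# Data generated from y = 3x, split across 2 simulated workers.
shards = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
print(round(w, 3))  # converges toward 3.0
```

On a real cluster the averaging is a collective communication (e.g. an MPI all-reduce) rather than a Python loop, and its cost relative to the local gradient computation is exactly the scalability trade-off the hands-on sessions examine.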
    events.prace-ri.eu/event/870/
    Jun 11 9:00 to Jun 12 17:00
    Python is increasingly used in high-performance computing projects. It can be used either as a high-level interface to existing HPC applications and libraries, as an embedded interpreter, or directly.

    This course combines lectures and hands-on sessions. We will show how Python can be used on parallel architectures and how to optimize critical parts of your code using various tools.

    The following topics will be covered:


    Interactive parallel programming with IPython
    Profiling and optimization
    High-performance NumPy
    Just-in-time compilation with numba
    Distributed-memory parallel programming with Python and MPI
    Bindings to other programming languages and HPC libraries
    Interfaces to GPUs


    This course is aimed at scientists who wish to explore the productivity gains made possible by Python for HPC.
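As a small taste of the "Profiling and optimization" topic listed above, here is a hypothetical standard-library example (not course material) timing a pure-Python loop against the built-in sum, the kind of micro-optimization the hands-on sessions explore:

```python
import timeit

def slow_sum(n):
    # Explicit Python-level loop: one interpreter round-trip per element.
    total = 0
    for i in range(n):
        total += i
    return total

def fast_sum(n):
    # The built-in sum runs the loop in C.
    return sum(range(n))

n = 100_000
assert slow_sum(n) == fast_sum(n)  # same result either way

t_slow = timeit.timeit(lambda: slow_sum(n), number=20)
t_fast = timeit.timeit(lambda: fast_sum(n), number=20)
print(f"loop: {t_slow:.3f}s  sum(): {t_fast:.3f}s")
```

Moving hot loops into compiled code, whether via built-ins, NumPy, or just-in-time compilation with numba, is the recurring theme behind most of the topics in the list.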

    Prerequisites: Good working knowledge of Python and NumPy

    Application
    Registration closes on 15 May 2019. Because space is limited, the number of participants is capped; applicants will be notified whether they have been accepted for participation.

    Instructors: Dr. Jan Meinke, Dr. Olav Zimmermann, JSC

    Contact
    For any questions concerning the course please send an e-mail to j.meinke@fz-juelich.de
    events.prace-ri.eu/event/825/
    Jun 17 9:00 to Jun 19 16:30
    This 3-day course is focused on providing an introduction to parallel programming using the most widely used approaches: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP).

     

    Participants are expected to have some familiarity with C or Fortran programming; the course will take you from beginner level to the point of being able to start your own parallel application development. Each session during the first two and a half days includes hands-on exercises to facilitate understanding of the different constructs.

     

    Do you already have some code that you need to parallelize, or would you like to talk to the experts about how to go parallel? In the last afternoon session you will have the support of SURFsara supercomputing advisors to guide you on your specific parallelization problem. Please bring your own requirements (or even your own code) for discussion and get direct support from the experts!
    events.prace-ri.eu/event/828/
    Jun 17 9:00 to Jun 19 17:35
     

    Turbulence and heat transfer applied to HPC-related civil nuclear phenomena.

    Introduction to Code_Saturne

    Electricity generation is fundamentally a thermodynamic process. In a nuclear power plant, the prediction of fluid flow and heat transfer is of vital importance for the plant performance and for safety compliance. This course will focus on the use of Computational Fluid Dynamics (CFD) for the prediction of fluid flow and heat transfer, including turbulence modelling, near wall modelling and conjugate heat transfer.

    The course runs for two days and mixes lectures with tutorials on nuclear internal flows. Participants will use the open-source HPC software Code_Saturne to run large-scale simulations on the UK national facility ARCHER.
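Before the tutorials, the physics at the heart of the course, heat transfer, can be illustrated at a deliberately tiny scale: an explicit finite-difference solution of 1D heat conduction in plain Python. This is a sketch of the underlying numerics only, not of Code_Saturne or its models:

```python
# 1D heat equation dT/dt = alpha * d2T/dx2, explicit finite differences.
# A rod held at T = 0 at both ends, starting hot in the middle, cools down.

nx, alpha, dx, dt = 21, 1.0, 1.0, 0.2  # r = alpha*dt/dx^2 = 0.2 <= 0.5,
T = [0.0] * nx                          # so the explicit scheme is stable
T[nx // 2] = 100.0                      # initial hot spot

for step in range(500):
    Tn = T[:]
    for i in range(1, nx - 1):
        T[i] = Tn[i] + alpha * dt / dx**2 * (Tn[i + 1] - 2 * Tn[i] + Tn[i - 1])

# Heat has diffused outwards and leaked through the fixed-temperature ends.
assert max(T) < 100.0 and T[0] == 0.0 and T[-1] == 0.0
print(f"peak temperature after 500 steps: {max(T):.2f}")
```

Production CFD codes solve the same kind of discretized conservation equations, coupled to momentum and turbulence models, on meshes with millions of cells, which is why HPC resources such as ARCHER are needed.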

    This course is organised by the University of Manchester, University of Sheffield, EDF Energy and STFC Daresbury Laboratory, and has the support of the UKFN SIG - Nuclear Thermal Hydraulics.

    Timetable

    Wednesday 19th of June 2019  (C1 George Begg building):


    09:00    Registration
    09:30 RANS Modelling of turbulent flows
    10:25 Near wall turbulence
    11:20 Coffee break
    11:35 Turbulent heat transfer modelling and applications
    12:30 Lunch
    13:30 Introduction to Code_Saturne
    14:00 Tutorial: Laminar tube bundles using the GUI
    15:30 HPC presentation and introduction to ARCHER
    16:00 Tutorial: LES of tube bundles
    17:30 End of day



    Thursday 20th of June 2019 : Practical session in the computer cluster in George Begg


    09:00 Tutorial: Post processing of LES results
    09:30 Use of subroutines in Code_Saturne
    10:00 Tutorial: LES of tube bundles using subroutines
    11:00 Coffee break
    11:15 Tutorial: adding heat transfer
    12:30 Lunch


          C2 George Begg building


    13:30 LES/DNS and hybrid methods
    14:20 Best practice guidelines and errors in CFD
    15:00 Coffee break
    15:15 Novel methods: Coarse CFD for nuclear applications
    16:00 End of day and course


    Location

    The course will be held at the University of Manchester; rooms are as shown in the Timetable.

    www.manchester.ac.uk/d.....id=14

    Interactive map.

     

     
    events.prace-ri.eu/event/865/
    Jun 19 10:00 to Jun 20 18:30
     

    We would like to invite you to the 33rd VI-HPS Tuning Workshop, which will be held at Juelich Supercomputing Centre in Germany as part of the PRACE training centre curriculum. This is the latest in a series of hands-on practical workshops given by tools developers for parallel application developers. The Virtual Institute - High Productivity Supercomputing (VI-HPS) is an initiative promoting the development, integration and use of HPC tools: see www.vi-hps.org

    Participants are encouraged to bring their own parallel application codes to the workshop to analyze and tune their performance with the help of experts. Analysis will focus on MPI and OpenMP, with optional additional use of OpenACC, OpenCL or CUDA.

    This workshop organized by VI-HPS and JSC as a PRACE training event will:


    give an overview of the VI-HPS programming tools suite
    explain the functionality of individual tools, and how to use them effectively
    offer hands-on experience and expert assistance using the tools


    The detailed program will be available on the VI-HPS training web site.

    Presentations and hands-on sessions are planned on the following topics:


    Setting up, welcome and introduction
    Score-P instrumentation and measurement
    Scalasca automated trace analysis
    TAU performance system
    Vampir interactive trace analysis
    Extra-P automated performance modeling
    Paraver/Extrae/Dimemas trace analysis and performance prediction
    MAQAO performance analysis & optimisation
    MUST runtime error detection for MPI
    ARCHER runtime error detection for OpenMP
    JUBE script-based workflow execution environment
    ... and potentially others to be added


    A brief overview of the capabilities of these and associated tools is provided in the VI-HPS Tools Guide.
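The tools above target large parallel codes, but their basic workflow, measure first and then optimize the hot spot, can be previewed with Python's built-in profiler. This is a small-scale, hypothetical analogue only, not one of the VI-HPS tools:

```python
import cProfile
import io
import pstats

def hot_spot(n):
    # Deliberately quadratic: the kind of routine a profile exposes.
    return sum(i * j for i in range(n) for j in range(n))

def driver():
    return hot_spot(200)

profiler = cProfile.Profile()
profiler.enable()
result = driver()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)          # top 5 entries by cumulative time
report = stream.getvalue()
assert "hot_spot" in report   # the profile names the expensive function
print(report)
```

Score-P, Scalasca, TAU and friends apply the same measure-analyze-optimize cycle across thousands of MPI ranks and threads, where the "report" becomes a trace or profile that the visualization tools explore interactively.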

    Prerequisites: Experience with MPI or OpenMP

    Application
    Registration closes on 10 June 2019. Because space is limited, the number of participants is capped; applicants will be notified whether they have been accepted for participation.

    Instructors: JSC staff members, members of the VI-HPS collaboration

    Contact
    For any questions concerning the course please send an e-mail to b.wylie@fz-juelich.de
    events.prace-ri.eu/event/827/
    Jun 24 9:00 to Jun 28 16:30
    The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations (www.mcs.anl.gov/petsc/).

    It enables researchers to delegate the linear algebra part of their applications to a specialized team, and to test various solution methods. The course will provide the necessary basis to get started with PETSc and give an overview of its possibilities. Presentations will alternate with hands-on sessions (in C or Fortran).

    Learning outcomes :

    On completion of this course, participants should:
    - be able to build and solve simple PDE examples
    - use and compare different solvers on these examples
    - be familiar with using the on-line documentation
    - be able to easily explore other PETSc features relevant to their own applications.

    Prerequisites :

    C or Fortran programming.
    Notions of linear algebra, as well as notions of MPI, would be an asset.
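As a flavour of the "simple PDE examples" mentioned above, here is a hypothetical, dependency-free sketch in plain Python (not PETSc, which is a C library with Fortran and Python bindings): it discretizes the 1D Poisson problem -u'' = f with zero boundary values and solves the resulting tridiagonal system directly with the Thomas algorithm.

```python
def solve_poisson_1d(f_values, h):
    """Solve -u'' = f on a uniform grid with u = 0 at both ends,
    via the Thomas algorithm on the tridiagonal FD system."""
    n = len(f_values)
    # Finite differences: (2*u[i] - u[i-1] - u[i+1]) / h^2 = f[i]
    a = [-1.0] * n                     # sub-diagonal
    b = [2.0] * n                      # main diagonal
    c = [-1.0] * n                     # super-diagonal
    d = [h * h * fi for fi in f_values]

    # Forward elimination.
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]

    # Back substitution.
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

# f = 2 everywhere gives the exact solution u(x) = x * (1 - x).
n = 99
h = 1.0 / (n + 1)
u = solve_poisson_1d([2.0] * n, h)
mid = u[n // 2]            # grid point at x = 0.5
print(round(mid, 4))       # 0.25, matching u(0.5) = 0.25
```

PETSc's value is that the same problem, in 2D or 3D and at scale, is expressed through its Mat/Vec/KSP objects, letting you swap direct and iterative solvers and preconditioners without rewriting the application, which is exactly what the hands-on sessions practise.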
    events.prace-ri.eu/event/816/
    Jun 24 9:30 to Jun 25 17:00
    OpenMP is the industry standard for shared-memory programming, which enables serial programs to be parallelised using compiler directives. This course is aimed at programmers seeking to deepen their understanding of OpenMP and explore some of its more recent and advanced features.

    This 3-day course will cover topics including nested parallelism, OpenMP tasks, the OpenMP memory model, performance tuning, hybrid OpenMP + MPI, OpenMP implementations, and new features in OpenMP 4.0/4.5. Hands-on practical programming exercises make up a significant, and integral, part of this course.

    Attendees should be familiar with the basics of OpenMP, including parallel regions, data scoping, work sharing directives and synchronisation constructs. Access will be given to appropriate hardware for all the exercises, although many of them can also be performed on a standard Linux laptop.
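OpenMP tasking itself lives in C and Fortran, but the shape of a task-parallel program, spawning units of work for a runtime to schedule onto a thread pool and then waiting for completion, can be previewed with Python's standard concurrent.futures. This is a conceptual analogue only; CPU-bound Python threads do not give OpenMP-style speedups because of the GIL:

```python
from concurrent.futures import ThreadPoolExecutor

def task(i):
    # One unit of work, analogous to the body of an OpenMP task.
    return i * i

# A pool of 4 threads plays the role of the OpenMP thread team;
# submitted tasks are scheduled onto it much as an OpenMP runtime
# schedules explicit tasks created inside a parallel region.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(task, i) for i in range(10)]  # task spawn
    results = [f.result() for f in futures]              # "taskwait"

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

In the course the analogous C code uses `#pragma omp task` and `#pragma omp taskwait`, and the interesting questions, granularity, data scoping, and scheduling overhead, are the subject of the hands-on exercises.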

     

    Pre-course setup

    All attendees should bring their own wireless-enabled laptop set up with the standard software as detailed on our site www.archer.ac.uk/traini.....e.php. The course tutor will be able to assist with settings to connect on the day.

    Practical exercises will be done using a guest account on ARCHER. You should also have a web browser, a PDF reader and a simple text editor.

     

    Timetable

    Day 1

    09:00 - 11:00  Lectures: OpenMP basics: Parallel regions, Worksharing, Synchronisation
    11:00 - 11:30  Coffee
    11:30 - 13:00  Practical: Parallel regions
    13:00 - 14:00  Lunch
    14:00 - 15:30  Lectures: Tasks, Nested parallelism, Memory model  coherency, NUMA
    15:30 - 16:00  Tea
    16:00 - 17:00  Practicals: Mandelbrot with nested loops, collapse, and tasks

    Day 2

    09:00 - 11:00  Lectures: Multicore and multithreaded CPUs, Caches, Cache  coherency, NUMA
    11:00 - 11:30  Coffee
    11:30 - 13:00  Practicals: Streams, Coherency
    13:00 - 14:00  Lunch
    14:00 - 15:30  Lectures: OpenMP tips, tricks and pitfalls, Performance issues
    15:30 - 16:00  Tea
    16:00 - 17:00  Practicals: MD tuning

    Day 3

    09:00 - 11:00  Lectures: OpenMP + MPI
    11:00 - 11:30  Coffee
    11:30 - 13:00  Practicals: OpenMP + MPI
    13:00 - 14:00  Lunch
    14:00 - 15:30  Lectures: OpenMP 4.0/4.5 features, target offload
    15:30 - 16:00  Tea
    16:00 - 17:00  Practicals: target offload

    Course Materials

    www.archer.ac.uk/traini.....php 

     

    Location

    www.manchester.ac.uk/d.....?id=1
    events.prace-ri.eu/event/875/
    Jun 25 10:00 to Jun 27 18:30
    We would like to invite you to the 33rd VI-HPS Tuning Workshop which
    will be held at Juelich Supercomputing Centre in Germany as part of the
    PRACE training centre curriculum.  This is the latest in a series of
    hands-on practical workshops given by tools developers for parallel
    application developers.  The Virtual Institute - High Productivity
    Supercomputing (VI-HPS) is an initiative promoting the development,
    integration and use of HPC tools: see www.vi-hps.org

    Participants are encouraged to bring their own parallel application
    codes to the workshop to analyze and tune their performance with the
    help of experts.  Analysis will focus on MPI and OpenMP, with optional
    additional use of OpenACC, OpenCL or CUDA.

    This workshop organized by VI-HPS and JSC as a PRACE training event will:


    give an overview of the VI-HPS programming tools suite
    explain the functionality of individual tools, and how to use them effectively
    offer hands-on experience and expert assistance using the tools


    The detailed program will be available on the VI-HPS training web site.

    Presentations and hands-on sessions are planned on the following topics:


    Setting up, welcome and introduction
    Score-P instrumentation and measurement
    Scalasca automated trace analysis
    TAU performance system
    Vampir interactive trace analysis
    Extra-P automated performance modeling
    Paraver/Extrae/Dimemas trace analysis and performance prediction
    MAQAO performance analysis & optimisation
    MUST runtime error detection for MPI
    ARCHER runtime error detection for OpenMP
    JUBE script-based workflow execution environment
    ... and potentially others to be added


    A brief overview of the capabilities of these and associated tools is provided in the VI-HPS Tools Guide.

    Prerequisites: Experience with MPI or OpenMP

    Application
    Registrations will only be considered until 10 June 2019. Due to the available space, the number of participants is limited. Applicants will be notified whether they have been accepted for participation.

    Instructors: JSC staff members, members of the VI-HPS collaboration

    Contact
    For any questions concerning the course please send an e-mail to b.wylie@fz-juelich.de
    events.prace-ri.eu/event/827/
    Jun 24 9:00 to Jun 28 16:30
    Overview

    This course teaches performance engineering approaches at the compute-node level. "Performance engineering" as we define it is more than employing tools to identify hotspots and bottlenecks. It is about developing a thorough understanding of the interactions between software and hardware. This process must start at the core, socket, and node level, where the code that does the actual computational work is executed. Once the architectural requirements of a code are understood and correlated with performance measurements, the potential benefit of optimizations can often be predicted.

    We introduce a "holistic" node-level performance engineering strategy, apply it to different algorithms from computational science, and also show how an awareness of the performance features of an application may lead to notable reductions in power consumption. The course provides scientific training in Computational Science and also fosters scientific exchange among the participants.

    For further information and registration please visit the HLRS course page.
    events.prace-ri.eu/event/842/
    Jun 27 9:00 to Jun 28 17:00