Preparing for Frontier: Using HIP and GPU Libraries with OpenMP
December 14, 2022
1:00 – 2:30 PM (ET)
Virtual via Zoom
Contact: Suzanne Parete-Koon (firstname.lastname@example.org)
This session is part of the OLCF’s Preparing for Frontier training series. Please visit the series main page to see the other sessions.
This training is designed for Fortran and C/C++ users who are using OpenMP, or considering OpenMP, for their applications on Frontier. The focus will be on showing how one can augment an OpenMP program with GPU kernels and libraries written in HIP. It builds on the previous OpenMP trainings in OLCF’s “Preparing for Frontier” series: OpenMP Offload Basics and OpenMP Optimization and Data Movement.
For Fortran OpenMP programmers, we will demonstrate how to build or use C-interoperability interfaces to launch HIP kernels and call ROCm libraries (for example, rocBLAS, rocFFT, etc.), while using OpenMP to manage data allocation and movement. We will walk through a concrete example of a Fortran + OpenMP program that exhibits all of the above-mentioned features: managing data with OpenMP, offloading an OpenMP kernel, launching a HIP kernel, and calling a ROCm library. A similar example program in C will also be presented.
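As a rough illustration of the pattern this session covers, the sketch below pairs OpenMP data mapping with a HIP kernel launched through a bind(C) wrapper. All names here (launch_saxpy_hip, the saxpy operation itself) are hypothetical placeholders, not the session’s actual example code, and the C wrapper (compiled with hipcc) is omitted.

```fortran
! Hypothetical sketch only: launch_saxpy_hip and the saxpy operation are
! illustrative placeholders, not the session's actual example code.
module hip_kernels
  use iso_c_binding
  interface
     ! C wrapper compiled with hipcc; it launches the HIP kernel internally.
     subroutine launch_saxpy_hip(n, a, x, y) bind(C, name="launch_saxpy_hip")
       use iso_c_binding
       integer(c_int), value :: n
       real(c_float),  value :: a
       type(c_ptr),    value :: x, y   ! device pointers
     end subroutine launch_saxpy_hip
  end interface
end module hip_kernels

program omp_plus_hip
  use hip_kernels
  use iso_c_binding
  implicit none
  integer(c_int), parameter :: n = 1024
  real(c_float), target :: x(n), y(n)

  x = 1.0_c_float
  y = 2.0_c_float

  ! OpenMP owns allocation and movement of x and y on the device.
  !$omp target data map(to: x) map(tofrom: y)

  ! use_device_addr exposes the device addresses of the mapped arrays,
  ! so c_loc() yields pointers the HIP kernel can dereference.
  !$omp target data use_device_addr(x, y)
  call launch_saxpy_hip(n, 2.0_c_float, c_loc(x), c_loc(y))
  !$omp end target data

  !$omp end target data
  print *, 'y(1) after saxpy:', y(1)
end program omp_plus_hip
```

The key design point is that OpenMP, not HIP, allocates and maps the arrays; the HIP side only receives raw device pointers and never manages memory itself.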
Additionally, we will show how to use Frontier’s hardware-supported GPU-aware MPI from OpenMP to enable faster MPI communication.
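The GPU-aware MPI idiom referred to above can be sketched as follows: a buffer mapped with OpenMP has its device address handed directly to MPI, which then moves the data GPU-to-GPU without staging through the host. The program structure and message shape here are illustrative assumptions, not the session’s code.

```fortran
! Hypothetical sketch: a mapped buffer's device address is passed straight
! to MPI, relying on a GPU-aware MPI implementation (e.g. Cray MPICH on
! Frontier with MPICH_GPU_SUPPORT_ENABLED=1).
program gpu_aware_mpi
  use mpi
  implicit none
  integer, parameter :: n = 1024
  double precision :: buf(n)
  integer :: rank, ierr

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  buf = real(rank, kind(buf))

  !$omp target data map(tofrom: buf)
  ! Inside this region, MPI sees the device address of buf,
  ! so no host staging copy is needed.
  !$omp target data use_device_addr(buf)
  if (rank == 0) then
     call MPI_Send(buf, n, MPI_DOUBLE_PRECISION, 1, 0, MPI_COMM_WORLD, ierr)
  else if (rank == 1) then
     call MPI_Recv(buf, n, MPI_DOUBLE_PRECISION, 0, 0, MPI_COMM_WORLD, &
                   MPI_STATUS_IGNORE, ierr)
  end if
  !$omp end target data
  !$omp end target data

  call MPI_Finalize(ierr)
end program gpu_aware_mpi
```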
Users of OLCF, ALCF and NERSC are welcome to attend this training.
The techniques described in this training also apply to the corresponding CUDA libraries on Summit and NERSC’s Perlmutter, so users can immediately apply what they learn on those systems.
All participants must register to attend this event. Please do so by clicking the dropdown below and submitting the registration form.