Performance Portability Training Series: RAJA
October 10, 2023
12:00 PM – 3:00 PM (EST)
Virtual via Zoom
Contact: John K. Holmen ([email protected]).
Presenter: Robert Chen, Lawrence Livermore National Laboratory ([email protected])
This session, offered by OLCF, NERSC, and the RAJA team, is part of the Performance Portability Training Series.
Overview
Primarily developed at Lawrence Livermore National Laboratory (LLNL), RAJA is a C++ library offering software abstractions that enable architecture and programming model portability for HPC application codes. RAJA offers portable, parallel loop execution by providing building blocks that extend the generally-accepted parallel for idiom. RAJA’s main goals are (1) to enable application portability with manageable disruption to existing code and (2) to achieve performance comparable to direct use of programming models such as OpenMP, CUDA, etc.
This is a 1-part session that will allow participants to learn from and interact directly with RAJA team members. The session will give a general overview of RAJA and cover the basics of using RAJA abstractions to offload work to the GPUs. Throughout the session, a variety of quiz-like puzzles will be used to engage the audience and reinforce concepts.
Target Audience
The target audience for this event is NERSC Perlmutter users and OLCF Frontier users.
[tw-toggle title=”Registration”]
This event is in the past. Thank you for participating.
[/tw-toggle] [tw-toggle title=”Presentation”] (slides | recording)[/tw-toggle] [tw-toggle title=”Survey”]
This event is in the past. Thank you for participating.
[/tw-toggle]If you have any questions about this training series, please contact John Holmen ([email protected]) for more information.