CUDA® is a parallel computing platform and programming model that extends C++ to allow developers to program GPUs with a familiar programming language and simple APIs.
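To give a flavor of those extensions, here is a minimal, hedged sketch (not part of the series materials) of a CUDA C++ program: a kernel that adds two vectors, launched with the triple-chevron syntax CUDA adds to C++. The function and variable names are illustrative only.

```cuda
#include <cstdio>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // guard against overrun
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);   // unified memory, visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;       // enough blocks to cover n
    vecAdd<<<blocks, threads>>>(a, b, c, n);        // launch one thread per element
    cudaDeviceSynchronize();                        // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compiled with `nvcc`, this runs unchanged as C++ apart from the `__global__` qualifier and the `<<<blocks, threads>>>` launch configuration, which is the "familiar language plus simple APIs" idea in practice.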
NVIDIA will present a 9-part CUDA training series intended to help new and existing GPU programmers understand the main concepts of the CUDA platform and its programming model. Each part will include a 1-hour presentation and example exercises. The exercises are meant to reinforce the material from the presentation and can be completed during a 1-hour hands-on session following each lecture (for in-person participants) or on your own (for remote participants). The list of topics is shown in the table below. Please click the individual event links for more details or to register.
NOTE: Participants will register for each part of the series individually.
OLCF and NERSC will both be holding in-person events for each part of the series, where participants can watch the presentations and get help from experts during the hands-on sessions. In-person participants without current Summit or Cori-GPU access will be given temporary accounts to work on the examples.
Remote participants can watch the presentations via web broadcast and will have access to the training exercises, but temporary access to the compute systems will be limited as follows:
If you have any questions about this training series, please contact Tom Papatheodore (email@example.com) for more information.
| Part | Topic | Date |
| --- | --- | --- |
| 1 | Introduction to CUDA C++ | Wednesday, January 15 |
| 2 | CUDA Shared Memory | Wednesday, February 19 |
| 3 | Fundamental CUDA Optimization (Part 1) | Wednesday, March 18 |
| 4 | Fundamental CUDA Optimization (Part 2) | Thursday, April 16 |
| 5 | Atomics, Reductions, and Warp Shuffle | Wednesday, May 13 |
| 8 | GPU Performance Analysis | TBD |