CUDA® is a parallel computing platform and programming model that extends C++ to allow developers to program GPUs with a familiar programming language and simple APIs.
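For readers new to the platform, here is a minimal sketch of what a CUDA C++ program looks like (a simple vector addition using managed memory, compiled with `nvcc`); it illustrates the kernel-launch extension CUDA adds to C++, not any specific exercise from the series:

```cuda
#include <cstdio>

// A kernel: a C++ function that runs on the GPU, executed by many threads.
__global__ void add(int n, const float *a, const float *b, float *c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;

    // Managed (unified) memory is accessible from both CPU and GPU.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    add<<<(n + 255) / 256, 256>>>(n, a, b, c);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Topics such as managed memory, shared memory, and kernel optimization are each covered in depth in the sessions listed below.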

NVIDIA will present a 9-part CUDA training series intended to help new and existing GPU programmers understand the main concepts of the CUDA platform and its programming model. Each part will include a 1-hour presentation and example exercises. The exercises are meant to reinforce the material from the presentation and can be completed during a 1-hour hands-on session following each lecture. The list of topics is shown in the table below. Please click the individual event links for more details or to register.

Please note that participants will register for each part of the series individually.

NOTE: The format of these events has been changed to online only. NVIDIA will present remotely for the first ~1 hour, and the remote connection will remain open for the hands-on session, where representatives from OLCF, NERSC, and NVIDIA will be available to support participants.

Remote Participation
Remote participants can watch the presentations via web broadcast and will have access to the training exercises, but temporary access to the compute systems will be limited as follows:

  • Current NERSC users will have Cori-GPU access temporarily added to their accounts.
  • Temporary Summit access will not be available for remote participants.

If you have any questions about this training series, please contact Tom Papatheodore for more information.

    #   Topic                                    Date
    1   Introduction to CUDA C++                 Wednesday, January 15
    2   CUDA Shared Memory                       Wednesday, February 19
    3   Fundamental CUDA Optimization (Part 1)   Wednesday, March 18
    4   Fundamental CUDA Optimization (Part 2)   Thursday, April 16
    5   Atomics, Reductions, and Warp Shuffle    Wednesday, May 13
    6   Managed Memory                           Thursday, June 18
    7   CUDA Concurrency                         Tuesday, July 21
    8   GPU Performance Analysis                 Tuesday, August 18
    9   Cooperative Groups                       Thursday, September 17