CUDA® is a parallel computing platform and programming model that extends C++ to allow developers to program GPUs with a familiar programming language and simple APIs.
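As a taste of the "familiar programming language" point, here is a minimal hedged sketch of a CUDA C++ program (not part of the course materials): a kernel, launched with CUDA's `<<<...>>>` extension, where each GPU thread writes its own global index into an array.

```cuda
#include <cstdio>

// Kernel: each thread computes its global index and stores it.
__global__ void fill_indices(int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = i;
}

int main() {
    const int n = 256;
    int *d_out, h_out[n];
    cudaMalloc(&d_out, n * sizeof(int));
    // Launch one block of 256 threads; the <<<grid, block>>> syntax
    // is CUDA's extension to standard C++.
    fill_indices<<<1, 256>>>(d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(int), cudaMemcpyDeviceToHost);
    printf("h_out[255] = %d\n", h_out[255]);
    cudaFree(d_out);
    return 0;
}
```

Compiling requires NVIDIA's `nvcc` compiler and a CUDA-capable GPU; topics like thread indexing, memory transfers, and kernel launches are covered in Part 1 of the series.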
NVIDIA will present a 9-part CUDA training series intended to help new and existing GPU programmers understand the main concepts of the CUDA platform and its programming model. Each part will include a 1-hour presentation and example exercises. The exercises are meant to reinforce the material from the presentation and can be completed during a 1-hour hands-on session following each lecture. The list of topics is shown in the table below. Please click the individual event links for more details or to register.
Please note that participants will register for each part of the series individually.
Remote participants can watch the presentations via web broadcast and will have access to the training exercises, but temporary access to the compute systems will be limited.
If you have any questions about this training series, please contact Tom Papatheodore (firstname.lastname@example.org) for more information.
| Part | Topic | Date |
|------|-------|------|
| 1 | Introduction to CUDA C++ | Wednesday, January 15 |
| 2 | CUDA Shared Memory | Wednesday, February 19 |
| 3 | Fundamental CUDA Optimization (Part 1) | Wednesday, March 18 |
| 4 | Fundamental CUDA Optimization (Part 2) | Thursday, April 16 |
| 5 | Atomics, Reductions, and Warp Shuffle | Wednesday, May 13 |
| 6 | Managed Memory | Thursday, June 18 |
| 7 | CUDA Concurrency | Tuesday, July 21 |
| 8 | GPU Performance Analysis | TBD |