CUDA Shared Memory
Wednesday, February 19, 2020
On-site participation will be held at the following sites:
| Site | Location | Time |
|------|----------|------|
| ORNL | Building 5100, Room 128 (JICS Auditorium) | 1:00 PM – 3:00 PM (ET) |
| NERSC | Building 59 (Shyh Wang Hall), Room 4102 | 10:00 AM – 12:00 PM (PT) |
On Wednesday, February 19, 2020, NVIDIA will present part 2 of a 9-part CUDA Training Series titled “CUDA Shared Memory”.
Shared memory is extremely fast, user-managed, on-chip memory that can be used to share data between threads within a thread block. It can be used to implement data caches, to speed up cooperative parallel algorithms, and to enable global memory coalescing in cases where it would otherwise not be possible. This module will show you how to allocate and utilize shared memory on the GPU. After the presentation, there will be a hands-on session where in-person participants can complete example exercises meant to reinforce the presented concepts. While the exercises will be made available to all participants (both in-person and remote), remote participants will not be supported during the hands-on sessions.
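As a taste of what the module covers, here is a minimal sketch of block-level shared memory usage. The kernel, array size, and names are illustrative assumptions, not the workshop's actual exercises: each block stages its elements into a `__shared__` tile with a coalesced load, synchronizes, then writes them back reversed, reading the "scrambled" pattern from fast on-chip memory instead of global memory.

```cuda
#include <cstdio>

#define N 256  // one block of 256 threads, chosen for illustration

// Hypothetical kernel: reverse an array within a single thread block.
// The global-memory load is coalesced; the reversed access pattern is
// served entirely from shared memory.
__global__ void reverseBlock(int *d_out, const int *d_in)
{
    __shared__ int tile[N];      // user-managed on-chip memory

    int t = threadIdx.x;
    tile[t] = d_in[t];           // coalesced load into shared memory
    __syncthreads();             // all loads must finish before any reads

    d_out[t] = tile[N - 1 - t];  // reversed read from shared memory
}

int main(void)
{
    int h_in[N], h_out[N];
    for (int i = 0; i < N; i++) h_in[i] = i;

    int *d_in, *d_out;
    cudaMalloc(&d_in,  N * sizeof(int));
    cudaMalloc(&d_out, N * sizeof(int));
    cudaMemcpy(d_in, h_in, N * sizeof(int), cudaMemcpyHostToDevice);

    reverseBlock<<<1, N>>>(d_out, d_in);
    cudaMemcpy(h_out, d_out, N * sizeof(int), cudaMemcpyDeviceToHost);

    printf("h_out[0] = %d (expect %d)\n", h_out[0], N - 1);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

The `__syncthreads()` barrier is essential here: without it, a thread could read `tile[N - 1 - t]` before the thread responsible for that element has written it.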
OLCF and NERSC will both be holding in-person events for each part of the series, where participants can watch the presentations and get help from experts during the hands-on sessions. In-person participants without current Summit or Cori-GPU access will be given temporary accounts to work on the examples.
NOTE: The deadline for registration is February 12, 2020.
Remote participants can watch the presentations via web broadcast and will have access to the training exercises, but temporary access to the compute systems will be limited as follows:
- Current NERSC users will have Cori-GPU access temporarily added to their accounts.
- Temporary Summit access will not be available for remote participants.
NOTE: Registration is required for both in-person and remote participation. To register, please submit the form below.
If you have any questions, please contact Tom Papatheodore (email@example.com).