Fundamental CUDA Optimization (Part 1)
Wednesday, March 18, 2020
NOTE: The format of this event has been changed to online only. NVIDIA will present via WebEx for the first ~1 hour and the WebEx session will be left open for the hands-on session, where representatives from OLCF, NERSC, and NVIDIA will be available to support participants.
ORNL | Remote Participation Only | 1:00 PM – 3:00 PM (ET) |
NERSC | Remote Participation Only | 10:00 AM – 12:00 PM (PT) |
On Wednesday, March 18, 2020, NVIDIA will present part 3 of a 9-part CUDA Training Series titled “Fundamental CUDA Optimization (Part 1)”.
This part of the series focuses on basic optimization principles. We will introduce users to optimization strategies related to kernel launch configurations, GPU latency hiding, global memory throughput, and shared memory applicability. After the presentation, there will be a hands-on session where participants can complete example exercises meant to reinforce the presented concepts.
Remote Participation
Remote participants can watch the presentations via web broadcast and will have access to the training exercises, but temporary access to the compute systems will be limited as follows:
- Current NERSC users will have Cori-GPU access temporarily added to their accounts.
- Temporary Summit access will not be available for remote participants.
Please see the “Remote Participation” tab below for connection details.
If you have any questions, please contact Tom Papatheodore ([email protected]).
[tw-tabs tab1="Registration" tab2="Remote Participation" tab3="Presentation" tab4="Exercises" tab5="Survey"]
[tw-tab]
[/tw-tab]
[tw-tab]
Webex is having technical issues so we are switching to this Zoom meeting hosted by NERSC: https://lbnl.zoom.us/j/5104865180
[/tw-tab]
[tw-tab]
(slides | recording)
[/tw-tab]
[tw-tab]
The example exercises for this module can be found in the exercises/hw3 folder of the following GitHub repo: https://github.com/olcf/cuda-training-series.
[/tw-tab]
[tw-tab]
[/tw-tab]
[/tw-tabs]