User support specialist Tom Papatheodore introduces staff, students to CUDA

A recent OLCF-hosted workshop, “Introduction to CUDA C/C++,” gave 40 students, interns, and researchers at ORNL a taste of lower-level programming on a GPU architecture.

The 18,688 NVIDIA Tesla GPU accelerators in Titan, the Oak Ridge Leadership Computing Facility’s (OLCF’s) flagship supercomputer, can greatly boost code performance. Researchers interested in using Titan, which is capable of 27 petaflops, benefit from learning how to use the supercomputer’s hybrid CPU–GPU architecture.

On June 19, the OLCF’s new high-performance computing programmer and user support specialist, Tom Papatheodore, led an OLCF-hosted workshop to demonstrate just how simple coding on Titan’s GPUs can be. The workshop, “Introduction to CUDA C/C++,” gave 40 students, interns, and researchers at the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) a taste of lower-level programming on a GPU architecture. The OLCF is a DOE Office of Science User Facility located at ORNL.

Using Chester, a computer with an architecture similar to that of Titan, attendees learned how to turn C programs into CUDA programs. Created by NVIDIA, CUDA is a set of extensions to the standard C and C++ programming languages. In contrast to directive-based approaches to GPU programming, such as OpenACC, CUDA allows developers to explicitly program GPUs.

“With directive-based approaches, developers give hints to the compilers throughout their programs, allowing the compilers to create the GPU code,” Papatheodore said. “With CUDA, you’re closer to the hardware, so you can be more explicit about how you map your problem to the GPU.” Compilers are software programs that translate programming languages into lower-level languages that a computer can understand.
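The contrast Papatheodore describes can be sketched with a simple loop that scales an array. The snippet below is an illustrative sketch, not material from the workshop; the function names are hypothetical. The first version hints to an OpenACC-capable compiler, which generates the GPU code itself; the second writes the GPU code explicitly in CUDA, where the programmer decides how loop iterations map onto GPU threads.

```cuda
// Directive-based approach (OpenACC): a hint on an ordinary C loop
// lets the compiler generate the GPU code.
void scale_acc(int n, float a, float *x)
{
    #pragma acc parallel loop
    for (int i = 0; i < n; i++)
        x[i] = a * x[i];
}

// Explicit approach (CUDA): the programmer writes the GPU kernel and
// chooses the mapping of work to threads.
__global__ void scale_cuda(int n, float a, float *x)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)                                      // guard against extra threads
        x[i] = a * x[i];
}
```

In the CUDA version, the `if (i < n)` check is needed because the launch typically rounds the thread count up to a whole number of blocks.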

Although the workshop focused on simple linear algebra examples, such as vector addition (an element-by-element sum of two vectors), researchers can use CUDA in their scientific applications by performing either the same instructions on large amounts of data (data parallelism) or different tasks on separate parts of the GPU (task parallelism).
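A vector-addition exercise of the kind the workshop used can be sketched as a complete CUDA C program along these lines (a minimal illustration under common CUDA conventions, not the workshop's actual materials), showing the data-parallel pattern in which every GPU thread handles one element:

```cuda
// vecadd.cu -- minimal CUDA vector addition; build with `nvcc vecadd.cu -o vecadd`.
#include <stdio.h>
#include <stdlib.h>

// Kernel: each GPU thread adds one pair of elements (data parallelism).
__global__ void vector_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Allocate and initialize host (CPU) arrays.
    float *a = (float *)malloc(bytes);
    float *b = (float *)malloc(bytes);
    float *c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Allocate device (GPU) arrays and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back to the CPU and spot-check it.
    cudaMemcpy(c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", c[0]);  // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(a); free(b); free(c);
    return 0;
}
```

The explicit `cudaMemcpy` calls reflect Titan's hybrid architecture: the CPU and GPU have separate memories, so data must be staged onto the device before the kernel runs and the result copied back afterward.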

The workshop was geared toward anyone willing to learn CUDA, and participants needed only basic programming experience. In fact, more than half of the attendees were interns and University of Tennessee–Knoxville (UT) students who had little to no experience with GPU programming. One such attendee, UT graduate student Ryan Landfield, came to gain a basic understanding of GPU programming methods, foundational knowledge that he said could help accelerate future astrophysical simulations. Landfield works with Michael Zingale, a Stony Brook University researcher who was recently elected to the OLCF User Group executive board.

“The workshop was useful because it laid the groundwork for understanding how computational tasks can be divided efficiently to maximize GPU performance and thereby accelerate a given code,” Landfield said. “It showed me how the fundamentals of GPU programming work, not only so that I can consider using them in my future code but also so that I might understand other codes that already use them and potentially incorporate those codes into my work.”

Landfield is currently working on Zingale’s project, “Approaching Exascale Models of Astrophysical Explosions,” which studies stellar explosions using the simulation codes Maestro and Castro, both designed for modeling these phenomena. Landfield uses Titan to perform 2-D core-collapse supernova simulations that demonstrate how different astrophysical equations of state (descriptions of how stellar material responds to changes in density and heating) can affect explosion observables.

The OLCF will continue hosting similar workshops as demand warrants. “We’ll certainly offer this type of workshop again,” Papatheodore said. “As new students, postdocs, and staff members continue to join our facility, we need to ensure they can take advantage of the resources available to them.”

ORNL is supported by the DOE Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit the Office of Science website.