Technology, January 13, 2015

OLCF Emphasizes Importance, Furthers Development of OpenACC Standard


OLCF Director of Science Jack Wells was the keynote speaker at an OpenACC Workshop held at the University of Houston.
Photo Credit: Md. Abdullah Shahneous Bari Sayket, a Ph.D. student with Dr. Barbara Chapman’s HPCTools Research Group; Andrew Knotts, Communications Coordinator, CACDS, UH

Wells serves as keynote speaker at Houston workshop

The Titan supercomputer and the Oak Ridge Leadership Computing Facility (OLCF), a US Department of Energy Office of Science User Facility, are playing a large role in advancing the OpenACC compiler standard. It is precisely because of this investment that Jack Wells, OLCF director of science, was recently invited to be the keynote speaker of a workshop at the University of Houston.

Barbara Chapman, a professor in the Department of Computer Science at the University of Houston and Director of the Center for Advanced Computing and Data Systems, organized the event to highlight opportunities for the oil and gas industry to make use of directive-based compiler technology.

“Because the Titan project and our laboratory have taken a leading role in advancing the OpenACC standard, she was interested in hearing about what we’re doing,” Wells said. “OpenACC is a directive-based API that simplifies the programming of accelerator-based systems through the use of annotations in the source code, enabling application developers to easily take advantage of the compute power of accelerators to drive advances in science, engineering, and industry.”

OpenACC makes it possible to specify which regions of the code to accelerate by using directives to express the parallelism of the code, the movement of data to and from the accelerator, and any required synchronization, without writing accelerator-specific, low-level code (such as CUDA or OpenCL).

Wells says the process is like writing little notes or specifying annotations in the source code to the compiler: “These are comments that one adds to the source code to give the compiler sophisticated hints as to how it should parallelize certain sections of one’s code and offload it to the GPU. I say ‘comments’ because if one is running on a computer that doesn’t have a GPU accelerator, then these instructions are just that—comments in the code. They’re not interpreted. But with an OpenACC compiler, they are interpreted as instructions to the compiler.”

What this means is if someone is running a code on a machine that does not have a GPU-aware compiler, then the machine will ignore the comments and run the code on the CPUs. If, however, the system being used does have the compiler, it will recognize the command to move that section of the code to the GPU.

Wells explains, “Some codes are so complex that rewriting all the code to get ready for GPUs is a little bit of a daunting task. OpenACC gives an approach that can be incremental in that you can work on one segment or one routine at a time. That’s an important role for OpenACC, and I hope that it will help people overcome the barrier to advancing work on their codes to get ready for these modern, energy-efficient architectures.”

Fernanda Foertter, a high performance computing user specialist on the User Assistance Team at the OLCF, emphasizes that OpenACC makes GPUs more accessible to users who are not yet comfortable with these new systems or who prefer not to meddle too much with their source code.

“This is a priority for people who just want to try it and see if it benefits them,” Foertter said. “For a lot of people, it’s a quick on-ramp.”

In an era of rapidly evolving technology, encouraging users to embrace these new platforms is more important than ever. OpenACC is an especially useful way to do that because it’s an open, nonproprietary standard. It can run on Titan, the OLCF’s current supercomputer built by Cray, as well as on the next supercomputer, Summit, a recently announced hybrid system that will be built by IBM and NVIDIA and delivered in 2018 to the OLCF at DOE’s Oak Ridge National Laboratory.

“We understand that users run wherever they can. They run in our center, they would run at other centers with diverse architecture if they have the chance, and so the codes need to be portable and performant on all those other machines, and the OpenACC standard gives one way of accomplishing that,” Wells said.

It is worth noting that OpenACC is not the only available compiler-directive technology for accelerator offload. In fact, OpenACC was created as an offshoot of an API called OpenMP, which initially provided directives for shared-memory programming and more recently has targeted accelerators. While both serve similar purposes, there are key differences; the main distinction is the descriptive nature of OpenACC, which allows compilers to generate and optimize code for different types of accelerators.

The relationship between OpenACC and OpenMP is not solely a competitive one. The idea is that the two standards will continue to inform one another, and, eventually, merge. For now, though, OpenACC is ahead of the curve on accelerators, and some of its ideas will be used in future releases of OpenMP.

“What has allowed OpenACC to flourish with us, and the reason we put effort into it, is that it has enabled us to address some of the issues that our users are seeing, and it continues to do so in a quicker manner,” Foertter said. “OpenACC has a tighter connection to applications and is more agile on its releases and implementations.”

Foertter is one of the members of the OpenACC Standards Board, which also includes Oscar Hernandez of the Computer Science Research Group in OLCF’s Computer Science and Mathematics Division. “One of the things we do here a lot at OpenACC is that we are very tied to the applications’ needs,” Hernandez said. “I constantly meet with the application people to gather what are the challenges that they have—for example, when they start to port applications to accelerators—and I bring those challenges to the Standards Committee, where we strive to develop a solution to address them.”

Although OpenACC might not last forever in its current form, Wells emphasizes the vital role it plays in the development of a strong and diverse base of users who can use their code on any number of hybrid systems.

The OLCF recently put this idea into practice at a weeklong OpenACC hackathon that paired scientists with computational experts to port their codes to Titan’s GPUs without significant performance penalties.

“The message I want to get across is that OpenACC and directives-based compiler technology are important today and for the future of our center,” Wells said. “We are investing, and we will continue to invest, in the development of this technology, prioritizing it with our research and development programs and our vendors because this is one way to make programming these complex machines more manageable, and potentially, both high performing and portable to other architectures.” —Christie Thiessen

Oak Ridge National Laboratory is supported by the US Department of Energy’s Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.