The Oak Ridge Leadership Computing Facility (OLCF) has helped lead many developments in parallel programming during the operation of its 27-petaflop Titan supercomputer, the first system of its scale to use GPU accelerators for scientific computing. The OLCF is a US Department of Energy (DOE) Office of Science User Facility located at DOE’s Oak Ridge National Laboratory.
One of the key tools in an OLCF user’s toolbox is OpenACC, a directive-based application programming interface that offloads work from a host CPU to attached accelerators. To make the most of OpenACC, OLCF staff routinely participate in weekly teleconferences and an annual OpenACC standards meeting to discuss lessons learned and propose improvements based on user feedback.
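For readers new to the model, the brief sketch below illustrates the directive style OpenACC uses: an ordinary C loop is annotated with a pragma asking the compiler to offload it to an accelerator and manage the data movement. It is a generic illustration, not code from an OLCF application.

    #include <stdio.h>

    /* An ordinary C loop offloaded with an OpenACC directive: the pragma
     * asks the compiler to run the loop on an accelerator, copying x and
     * y to the device and copying the updated y back to the host. */
    void vector_add(int n, const float *x, float *y)
    {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] += x[i];
    }

    int main(void)
    {
        float x[4] = {1, 2, 3, 4}, y[4] = {10, 20, 30, 40};
        vector_add(4, x, y);
        printf("%g %g %g %g\n", y[0], y[1], y[2], y[3]);  /* 11 22 33 44 */
        return 0;
    }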
“OpenACC is a user-driven specification with a very active and engaged community that is backed by vendor support,” said Oscar Hernandez, OLCF tools developer. “Also, OpenACC enables us to express OLCF’s programming model requirements to larger accelerator programming specification bodies like OpenMP.”
The 2016 OpenACC standards organization meeting, which drew members from the research and vendor communities, was held in Indianapolis in September and produced a list of new features for the next version, scheduled for release in mid-2017. OpenACC 2.6 will address issues users encounter on current systems while also foreshadowing changes in DOE’s upcoming CORAL supercomputers. CORAL (the Collaboration of Oak Ridge, Argonne, and Lawrence Livermore National Laboratories) is a DOE award to stand up three systems that will outperform current DOE leadership computers by 5 to 10 times.
“The new version of OpenACC adds support for manual deep copy and several new user-requested behaviors, including initial support for partially shared memory between the host and accelerators,” said Michael Wolfe, OpenACC language chair.
After several years of discussion among members, version 2.6 will support manual deep copy, a feature that will make it easier to move linked data structures between host CPUs and GPUs. Although many users want automatic deep copy, also called true deep copy, members agreed the user community needs to explore different ways of expressing it in OpenACC directives and run-time programming interfaces before making it available.
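The minimal sketch below suggests what the manual approach looks like in C: a structure containing a pointer is copied to the device one level at a time, with the new attach behavior repointing the device copy of the structure at the device array. The structure and routine names are invented for illustration.

    #include <stdio.h>

    /* A linked structure: copying 'v' alone would leave its data pointer
     * aimed at host memory on the device, so each level is moved by hand. */
    typedef struct {
        int     n;
        double *data;
    } vector;

    void scale(vector *v, double a)
    {
        /* Manual deep copy: copy the outer struct, then the array it
         * points to; the 2.6 attach behavior makes the device copy of
         * v->data point at the device array. */
        #pragma acc data copyin(v[0:1]) copy(v->data[0:v->n])
        {
            #pragma acc parallel loop present(v[0:1])
            for (int i = 0; i < v->n; ++i)
                v->data[i] *= a;
        }
    }

    int main(void)
    {
        double buf[3] = {1.0, 2.0, 3.0};
        vector v = { 3, buf };
        scale(&v, 2.0);
        printf("%g %g %g\n", buf[0], buf[1], buf[2]);  /* 2 4 6 */
        return 0;
    }

True deep copy would let the runtime walk such a structure and move every level automatically; the manual form above is the stepping stone version 2.6 standardizes first.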
“We’re hoping that true deep copy will be available in version 3 in time for the upcoming CORAL machines,” Wolfe said, referring in part to Summit, the OLCF’s next leadership system scheduled to go online in 2018.
Another longtime user request, array reductions, is also being implemented; these provide a shortcut for accumulating values into the elements of an array (an ordered collection of values of the same type). A device query routine will also be added so an application can detect which accelerators are available on each node.
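A rough sketch of how the two additions might be used together follows. The exact spelling of the array-reduction clause was still being settled at the time, so the reduction syntax shown is illustrative, and the device count uses acc_get_num_devices, a routine already in the API; the new query routine’s final form had not yet been published.

    #include <stdio.h>
    #include <openacc.h>

    #define NBINS 16

    /* Array reduction: each gang accumulates into a private copy of
     * hist[], and the runtime combines the copies into the final result. */
    void histogram(const int *keys, int n, int hist[NBINS])
    {
        #pragma acc parallel loop copyin(keys[0:n]) \
                                  reduction(+:hist[0:NBINS])
        for (int i = 0; i < n; ++i)
            hist[keys[i] % NBINS]++;
    }

    int main(void)
    {
        /* Device query: count the NVIDIA accelerators on this node. */
        int ndev = acc_get_num_devices(acc_device_nvidia);
        printf("accelerators on this node: %d\n", ndev);

        int keys[64], hist[NBINS] = {0};
        for (int i = 0; i < 64; ++i)
            keys[i] = i;
        histogram(keys, 64, hist);
        printf("hist[0] = %d\n", hist[0]);  /* expect 4 */
        return 0;
    }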
“One of the things we talked about is how we will prepare for machines like Summit,” Wolfe said. “These new supercomputers will have exposed memory hierarchies.”
In contrast to Titan’s memory hierarchy, which splits each node’s memory between the CPU and the GPU, an exposed memory hierarchy allows memory to be shared between the two but leaves its management to the application or run-time software. OpenACC 2.6 will allow devices to use partially shared memory.
Currently, a similar capability is available to OLCF users through NVIDIA’s CUDA unified memory, an NVIDIA-specific solution that has shown promise in reducing time to solution.
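As a point of comparison, the sketch below shows the unified-memory style in CUDA C: one allocation is visible to both the CPU and the GPU, and the CUDA runtime migrates the data on demand instead of the programmer copying it explicitly.

    #include <stdio.h>
    #include <cuda_runtime.h>

    __global__ void scale(double *x, int n, double a)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    int main(void)
    {
        int n = 1 << 20;
        double *x;
        /* One allocation visible to both CPU and GPU; the CUDA runtime
         * migrates pages on demand rather than requiring explicit copies. */
        cudaMallocManaged(&x, n * sizeof(double));
        for (int i = 0; i < n; ++i) x[i] = 1.0;

        scale<<<(n + 255) / 256, 256>>>(x, n, 2.0);
        cudaDeviceSynchronize();       /* wait before touching x on the host */

        printf("x[0] = %f\n", x[0]);   /* 2.0 */
        cudaFree(x);
        return 0;
    }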
Other notable features of version 2.6 include an error callback routine, which will let the OpenACC run time notify the application when an error occurs on a node so the application can quit cleanly or recover before continuing to run, and a new “no_create” data clause, which uses data already present on the device but does not create a device copy when none exists.
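A minimal sketch of how no_create might be used follows, with invented function names; the error callback’s programming interface had not yet been published, so it is not shown.

    #include <stdio.h>

    /* Sketch of the proposed no_create clause: if x and y are already
     * present on the device, the loop uses the device copies; if not,
     * no device memory is allocated and the host addresses are used. */
    void saxpy(int n, float a, const float *x, float *y)
    {
        #pragma acc parallel loop no_create(x[0:n], y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] += a * x[i];
    }

    int main(void)
    {
        float x[4] = {1, 1, 1, 1}, y[4] = {0, 0, 0, 0};

        /* The enclosing data region decides whether the arrays live on
         * the device; saxpy itself no longer forces an allocation. */
        #pragma acc data copyin(x[0:4]) copy(y[0:4])
        {
            saxpy(4, 2.0f, x, y);
        }
        printf("%g\n", y[0]);  /* 2 */
        return 0;
    }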
Users can learn more about version 2.6 at OpenACC.org.
Oak Ridge National Laboratory is supported by the US Department of Energy’s Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.