Q&A: Richard Graham, Head of the Application Performance Tools Group at the Oak Ridge Leadership Computing Facility
Note: This article originally appeared in the November 21, 2011, issue of HPCwire
Oak Ridge National Laboratory’s (ORNL’s) National Center for Computational Sciences is a Department of Energy (DOE) supercomputing center that houses the Oak Ridge Leadership Computing Facility (OLCF) and the Jaguar supercomputer, a Cray XT5 capable of more than 2.3 petaflops, or 2.3 quadrillion calculations per second. In 2012 the OLCF will begin to upgrade Jaguar, boosting its computational ability by roughly an order of magnitude. The upgrade will result in Jaguar’s becoming Titan, a computer capable of 10 to 20 petaflops and the center’s next premier resource for scientific computing. Titan’s arrival will bring fundamental changes to the OLCF’s supercomputing operation, primarily due to the incorporation of hybrid computing architectures that feature both central and graphics processing units (CPUs and GPUs).
Richard Graham heads the OLCF’s Application Performance Tools Group, which identifies software tools and missing tool capabilities to help science and engineering researchers improve the performance of applications that run on leadership-class computers. The group focuses on four main kinds of tools: compilers, which translate programming languages such as Fortran into the binary machine code a computer executes; debuggers, which help users identify errors in their source code; performance analysis tools, which help developers understand their applications’ performance characteristics; and communication libraries, which manage communication between the compute nodes of a machine.
Prior to joining ORNL, Graham served as the acting group leader of Los Alamos National Laboratory’s Advanced Computing Laboratory and cofounded the Open MPI project, an effort to provide a single, portable implementation of the Message Passing Interface (MPI) across multiple platforms. Graham also worked for Cray Research and SGI.
In this interview Graham discusses the challenges presented by new hybrid computer architectures such as Titan’s. His group’s goal is to make sure that the OLCF is prepared to offer researchers the most up-to-date and efficient tools possible to make effective use of a new high-performance computing (HPC) environment.
HPCwire: How do you assemble the tools to shift computing architectures?
Graham: First of all we need to determine the hardware characteristics because those ultimately determine what can be done and what the software can potentially do. That’s the first step, understanding the new hardware and how it’s different from previous hardware. Then we decide which tools—pieces of software that enable application scientists to do the work they want to carry out—are of interest to us, and whether these current tools are sufficient. And by sufficient I mean in the context of our production environment, Jaguar. If they are not, we need to understand if we can enhance the current tool set, or if we need to go out and see if there is something else out there. If there isn’t, we obviously need to figure out how to fill the gap. The first thing we try to do is decide if there is a starting point we can use, and if there is not, then we obviously need to figure out how to create one. That involves going and talking with vendors and universities, understanding their plans, and understanding what they currently have that could be of use to us.
In terms of computational characteristics, you need to have a lot of vector-like parallelism available in your application and be able to do a lot of the same computations in parallel, so any code that has nice loops and a lack of data dependencies between loop iterations is very well suited for hybrid architectures. You also need to understand how to map the parallelism to the underlying hardware. The big issue, though, is that moving data from the CPU to the GPU takes a long time. Ideally you want to keep the GPU occupied [with large computations] to hide the cost of data transfer between the GPU and CPU. So you either keep data permanently on the GPU so you don’t have to transfer a lot of it, or else you have to have a sufficient amount of work to keep the GPU busy and hide the cost of moving data onto it. New data or work decomposition schemes need to be explored.
HPCwire: How will Titan’s architecture change the way supercomputers operate?
Graham: The big difference is that we have two very different computing capabilities on a single node, which pairs an AMD CPU with an NVIDIA GPU. The CPU is for general-purpose calculations. We’re very familiar with CPUs in the sense that we know, to a certain degree, how to analyze what happens on them. Then you have a very different accelerator in the GPU, which has the potential for very high performance but supports a narrower range of operations than the general-purpose CPU. The GPU schedules operations in a certain way and tries to position itself to run in parallel effectively. So the challenge is how to use both the CPU and GPU efficiently in a general-purpose computing environment.
From a tools perspective, the major difference is that there is a lot less tool support for GPUs than for CPUs in the HPC environment, and even fewer tools that target both. This is because few tools have been ported to the GPU environment, and GPUs expose less detailed information. There is also a knitting together in the way applications tend to use these processors: they use CPUs with GPUs as accelerators for certain portions of the work, so the data from the two types of processors needs to be merged if you’re trying to look at overall utilization and get an overall picture of how the application is running.
HPCwire: What contributed to petascale success, and what is shaping the push to exascale?
Graham: I think the major tool that contributed to petascale success was having good programming models, languages, libraries, and optimizing compilers, which take an abstract programming language and turn it into a set of instructions for computers to understand. If you’re looking at it from a tools perspective, performance analysis tools are also needed, but without compilers we couldn’t run the codes as we are now. Debuggers are important, but up until a year ago, debuggers did not run at scale. That’s actually one of the achievements we’ve had on a project at the OLCF. With one of our partners, Allinea, we’ve really changed the debugging paradigm by scaling up a debugger called DDT. We’ve been able to basically debug at full scale on Jaguar, even though three years ago people claimed you couldn’t do parallel debugging beyond several hundred to several thousand processes. Now, my group routinely debugs parallel code at over 100,000 processes using DDT. It’s much more effective than trying to use the old techniques. No other debugger can even come close to DDT’s performance, so obviously it’s a hit with users.
As part of the OLCF 3 project, we’ve been working with different software vendors. One, CAPS Enterprise, is a compiler company out of France that produces the HMPP [hybrid multicore parallel programming] compiler, which targets GPUs. We’ve been working with them for two years now enhancing their compiler to meet our needs, and we’ve been very pleased with the partnership. The work has resulted in significant capabilities being added to the compiler that help us incrementally transition our existing applications to accelerator-based computers and has led to some nice performance enhancements. It is one of several compilers that we will support on the system. We have also been working on scalability.
As computer architectures get bigger, scalability becomes an issue. Another critical piece is the Vampir suite of performance analysis tools. Those tools come out of the Technical University of Dresden, and they perform what is called trace-based performance analysis, which collects performance data in the context of the call stack, not only the program counter. The emphasis is on adding capabilities to collect data from CPUs and GPUs simultaneously, but they are also doing a lot of scalability work. They recently decided to work with Terry Jones, a member of my group, who, in the context of another DOE-funded project, helped with transferring data from hosts to collectors. Basically they’ve been able to run trace-based analysis at 200,000 processes. The previous record was on the order of 100,000, and it was very slow.
Before this effort people didn’t really consider doing this type of analysis beyond maybe several thousand processes, so there has been a significant advance in capabilities. This group continues to work on making data collection more practical. We have also emphasized the integration among the different tools of the programming environment.
A common trait behind all three of these collaborations is that we went with companies that already had an existing product, so we weren’t starting from scratch. Second, because they were established companies, they already had a support infrastructure in place. Third, they were very willing to include enhancements for our needs in their products, so we were really funding improvements to their main product lines, which is also very beneficial to us.
HPCwire: How will you help get users up to scale with Titan?
Graham: I think the first problem is in current compilers and runtime environments—things that allow users access to the system. Right now most of these components are very primitive, and for a lot of the codes, there’s a lot of code restructuring you have to do manually. The real need is for a set of compiler-based code-transformation tools that will simplify the process and automate as many of the transformations as possible. But before we get there, a big issue is the lack of widely accepted programming models to make this possible. There are some standardization efforts under way, but they’re far from completion, and we will have to see how users take to them. There are parallel languages that people can use, but they’re not widely used. Chapel is one that people keep pointing to, as it was developed in the context of high-performance computing. You also have Fortran trying to bring in some versions that could help to a certain degree.
Historically these shifts in computing are nothing new; this is the way it’s been for a long time. I can remember the transition from vector processing to the sort of computing that we do now, parallel processing on microprocessors. It took about 10 years to make that transition, and by that I mean for a large body of code to run well. So it may take another 10 years for microprocessor-based architectures to fully transform into some sort of heterogeneous multicore computer system. It is not going to be pleasant. It’s going to be very expensive, and application developers really need something to help in that process.
Thankfully there is a research community out there that is interested in looking at these sorts of problems. People have been thinking about these types of issues, and so there’s definitely a demand to overcome the obstacles. This is not something that will be done overnight, and it is not just a technical challenge. You also have to have application developers use what is being produced. I’m sure there are different views on how to go about this. I think there are good ideas out there. It’s just an issue of people having the time to do something with the ideas and then have those ideas become things that a commercial company is willing to support, because without that, there is just another set of nice ideas that never influences the community.
—by Eric Gedenk, OLCF science writer, for HPCwire