New Kepler graphics processing units to accelerate science on new hybrid supercomputer
Researchers, computer scientists, and engineers from around the world converged on San Jose, California, May 14–17, to share their experiences with the newest technology in high-performance computing (HPC): blisteringly fast graphics processing units (GPUs).
ORNL, home to a GPU-accelerated supercomputer known as Titan, has partnered with technology company NVIDIA, host of the annual GPU Technology Conference (GTC) and inventor of the GPU, to use this new technology for next-generation scientific challenges.
“GTC is a one-of-a-kind event where scientists and developers from various disciplines can learn from each other,” said Andy Walsh, director of marketing for GPU computing at NVIDIA. “Problems solved and insights gained from GPU acceleration of scientific codes are surprisingly applicable from one research field to another. And, enabling researchers from around the world to learn about early results and potential new breakthroughs on the Titan supercomputer was an incredibly valuable addition to the conference.”
Traditionally, HPC has relied on adding more central processing units (CPUs) to boost computation speed. But with the advent of high-performance, energy-efficient GPUs, researchers can now process large numbers of parallel tasks much faster and more efficiently, while the CPUs remain free to focus on more complex calculations.
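The division of labor described above can be illustrated with a minimal CUDA sketch (not from the article, and not Titan-specific code): a simple data-parallel kernel is offloaded to the GPU, leaving the CPU available for other work until the results are needed. The kernel and array names here are hypothetical, chosen only for illustration.

```cuda
// Illustrative sketch only: offload a data-parallel task to the GPU
// while the CPU stays free for other work.
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel: each thread scales one element of the array.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;  // one million elements
    float *data;
    cudaMallocManaged(&data, n * sizeof(float));  // memory visible to CPU and GPU

    for (int i = 0; i < n; ++i)
        data[i] = 1.0f;

    // Launch enough threads to cover the array. The kernel runs asynchronously,
    // so the CPU could perform other (e.g., serial) work at this point.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(data, 2.0f, n);

    cudaDeviceSynchronize();            // wait for the GPU before reading results
    printf("data[0] = %f\n", data[0]);  // expect 2.0

    cudaFree(data);
    return 0;
}
```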
Several ORNL staff members presented at the conference. Jack Wells, director of science at the Oak Ridge Leadership Computing Facility, which manages Titan, chaired a session about his center’s landmark system, which will employ NVIDIA GPU technology to begin delivering important scientific insights immediately. When Titan is fully operational in 2013, it is expected to reach a peak performance of 20 petaflops, or 20 thousand trillion calculations per second.
The session, titled “GPU-accelerated science on Titan: Tapping into the world’s preeminent GPU supercomputer to achieve better science,” included speakers from various institutions and scientific disciplines discussing how their research stood to gain from the new hybrid architectures, which pair traditional CPUs with accelerating GPUs.
“In our session about scalable science on Titan, we focused on how scientists and engineers are moving scalable simulations and applications forward on hybrid supercomputers,” Wells said. “I think it was a big success with an excellent set of presentations and strong audience participation and attendance.”
Jeffrey Vetter, an ORNL computer scientist and Georgia Tech faculty member, served on the “Exascaling Your Application” panel. “The community response to GPUs has been very encouraging, and our panel session attendees were interested in how best to prepare their applications for the coming changes in computing technology as we move forward to the exascale,” Vetter said.
ORNL molecular dynamics researcher Loukas Petridis felt the conference not only offered valuable information on GPU technology but also gave researchers a chance to discuss their own work in the context of the new hybrid computing environment. “I thoroughly enjoyed the conference,” Petridis said. “It provided a unique opportunity to get up to date with developments in the use of GPUs in molecular dynamics simulations, the field I work in. GPUs are not regularly discussed at other molecular dynamics conferences, so GTC helped significantly in bringing the community together.” —by Eric Gedenk