OLCF has Big Presence in the Big Easy at SC14
From Awards to Finalists, OLCF Makes Mark in NOLA
The 26th annual SC14 supercomputing conference wrapped up just in time for Thanksgiving. Taking place November 16–21 in New Orleans, SC14 brought professionals from the world’s leading supercomputing centers together with high-performance computing users, developers, and sponsors from academia, industry, and government laboratories. The theme for the SC14 conference was “HPC Matters.”
Members of the Oak Ridge Leadership Computing Facility (OLCF), located at the Department of Energy’s Oak Ridge National Laboratory (ORNL), were active participants in the event.
Here are a few of the week’s highlights for the OLCF, a DOE Office of Science User Facility:
Undergraduate Research Award
OLCF intern Sean McDaniel took home the SC Undergraduate Research Award for his poster “Comparing Decoupled I/O Kernels Versus Real Traces in the I/O Analysis of the HACC Scientific Applications on Large-Scale Systems.” McDaniel spent last summer on OLCF’s campus working on kernel extractions. Hai Ah Nam, computational scientist for OLCF and one of McDaniel’s mentors, said the poster was an extension of that research.
“One of the questions that came up during the project was, ‘Once you extract a kernel, how well does it actually mimic the original?’” McDaniel said. “Because, in that process of trying to make it more simplified and faster and easier to understand, does something get lost in the translation?”
McDaniel’s work did, indeed, find that one extracted kernel was not a good representation of the original application. His research will be useful to future teams working to extract smaller, faster-running kernels that accurately mimic the real-world applications used to test supercomputers.
Best Paper Finalist
Members of the OLCF presented several papers, including “Best Practices and Lessons Learned from Deploying and Operating Large-Scale Data-Centric Parallel File Systems,” which was nominated as a Best Paper Finalist.
The paper provides an account of the experience and lessons learned in acquiring, deploying, and operating OLCF’s large, parallel file systems. These systems support computational, data analysis, and visualization platforms and serve as test platforms for new file system features and input/output benchmarks. OLCF continues to hone operating strategies in response to rapidly changing technologies and user demands. During this process, OLCF has acquired significant expertise in the areas of data storage systems, file system software, technology evaluation, benchmarking, and procurement practices that could be useful to the wider high-performance computing (HPC) community.
HPC Innovation Excellence Awards
Two OLCF partners received HPC Innovation Excellence awards from the International Data Corporation (IDC) for their research using OLCF computing resources. The award, which recognizes outstanding achievements with supercomputing resources, was created to highlight successful collaborations between science and industry. Teams from Colorado-based Tech-X Corporation and North Carolina State University used OLCF resources for groundbreaking research that ultimately led to the awards.
Researchers from Tech-X were honored for their work simulating interactions between plasma and electromagnetic waves in a fusion reactor. Fusion holds promise as a clean and virtually inexhaustible source of energy, and scientists around the world are working to understand the principles underlying sustainable laboratory fusion reactions.
A team led by North Carolina State University professor and researcher Igor Bolotnov won an award for its simulations of “bubbly turbulence” in nuclear power plants. The most common nuclear reactors—light water reactors—create power by using nuclear fuel rods to heat water. Understanding the interaction between the fuel rod and water as it changes from a liquid to a gas is important for both power plant efficiency and safety.
Gordon Bell Prize
One of the Gordon Bell Prize nominees was “24.77 Pflops on a Gravitational Tree-Code to Simulate the Milky Way Galaxy with 18600 GPUs.” A 51 billion particle simulation achieved parallel efficiency above 95 percent on the Piz Daint supercomputer at the Swiss National Supercomputing Centre. The highest performance, however, was achieved on ORNL’s Titan with a 242 billion particle Milky Way model. The Titan demonstration harnessed 18,600 GPUs and reached a sustained GPU performance of 33.49 petaflops and an application performance of 24.77 petaflops.
HPC Training Workshop
HPC facilities face the challenge of serving a diverse user base with varying skill levels and needs. As centers worldwide install increasingly heterogeneous architectures, training will become even more important and in greater demand. Fernanda Foertter, an HPC user support specialist at the OLCF, led a workshop on best practices for delivering HPC training programs that decrease time spent on rudimentary assistance, increase staff–user interaction, and allow for more efficient use of resources. The workshop featured 12 speakers from academic institutions and national laboratories around the world, each of whom gave a lightning talk on how their HPC center approaches training.
“HPC training is flexible enough that each facility can really mold it to their specific needs,” Foertter said. “We should seek to improve on each other’s work instead of recreating the wheel.”
Programming with Directives
The nodes of many current HPC platforms are equipped with hardware accelerators that offer high performance. To enable their use in scientific application codes without loss of programmer productivity, several recent efforts have been devoted to providing directive-based interfaces that can transform source code without manual changes. Tools Developer Oscar Hernandez led a workshop to bring together the user and tools communities to share their knowledge and experiences using directives to program accelerators.
Real-Time Light Source Data Analysis
One of the most computationally demanding problems concerning DOE’s advanced light sources is simulation of the experimental X-ray scattering process and generation of scattering patterns. A project conducted at Lawrence Berkeley National Laboratory (LBNL), entitled “Enabling Next-Generation Light Source Data Analysis through Massive Parallelism,” resulted in HipGISAXS, a high-performance software package that uses massively parallel systems, such as Titan, to help solve this problem.
For the project, a device similar to an inkjet printer printed photovoltaic materials on a substrate while X-ray scattering was conducted to study the internal structure of the polymer molecules as they were laid down. This data was then sent in real time to the OLCF over the high-speed Energy Sciences Network (ESnet5), where Titan’s GPU-accelerated nodes ran the necessary data inversion and analysis and returned the results to LBNL.
“We were the unique place for this large volume of data analysis because the HipGISAXS data inversion requires Titan’s large number of GPU-accelerated nodes,” explained OLCF Director of Science Jack Wells. “That’s why it made sense to move the data from California to Tennessee, and the coupled performance of ESnet5 and Titan made it possible.”
OLCF’s involvement in this research was showcased during a data demonstration.
Job/Opportunity Fair
OLCF and ORNL staff members participated in the SC14 Job/Opportunity Fair to recruit interns, post-docs, and full-time staff. They met with more than 80 students from around the world.
Ashley Barker, group leader for the User Assistance and Outreach Group, said it was “very exciting to get to show off research opportunities and connect with promising new talent.” —Christie Thiessen
Oak Ridge National Laboratory is supported by the US Department of Energy’s Office of Science. The single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.