
Five Gordon Bell Finalists Credit Summit for Vanguard Computational Science

Ambitious supercomputers attract ambitious users. After debuting as the world’s fastest supercomputer in June, the Oak Ridge Leadership Computing Facility’s (OLCF’s) 200-petaflop Summit is already demonstrating its utility for solving complex computational challenges with unprecedented speed.

A recent announcement by the Association for Computing Machinery lists five Summit users among the finalists for the prestigious Gordon Bell Prize, one of the top annual honors in supercomputing. The teams each used the IBM AC922 system to explore a large-scale problem in their respective domains. The OLCF is a US Department of Energy (DOE) Office of Science User Facility located at DOE’s Oak Ridge National Laboratory (ORNL).

The Gordon Bell Prize is awarded each year to recognize outstanding achievements in high-performance computing (HPC), with an emphasis on rewarding innovations in science applications, engineering, and large-scale data analytics.

The science problems these finalists tackled represent the broad range of challenges suitable for a state-of-the-art GPU-accelerated supercomputer and include innovations in machine learning, data science, and traditional modeling and simulation.

“These Gordon Bell finalists are an encouraging preview of the challenges users will be able to tackle on Summit when formal allocation programs begin in 2019,” said OLCF Director of Science Jack Wells. “Of particular note is the system’s ability to handle large volumes of data at scale, whether that be processing and analyzing experimental data or training artificial intelligence software to carry out specialized tasks.”

The Gordon Bell Prize winner will be announced at the 2018 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC18) in Dallas in November. Finalists who used Summit in their research are the following:

  • An ORNL team led by computational systems biologist Dan Jacobson and OLCF computational scientist Wayne Joubert that developed a genomics algorithm capable of using mixed-precision arithmetic to attain exascale speeds. On Summit, the team’s Combinatorial Metrics application achieved a peak throughput of 2.36 exaops (2.36 billion billion calculations per second), the fastest throughput ever reported for a science application. Jacobson’s work compares genetic variations within a population to uncover hidden networks of genes that contribute to complex traits. One condition Jacobson’s team is studying is opioid addiction, which was linked to more than 49,000 deaths in the United States in 2017. (A minimal sketch of the mixed-precision idea follows this list.)
  • A team from the University of Tokyo led by associate professor Tsuyoshi Ichimura that applied artificial intelligence (AI) and mixed-precision arithmetic to accelerate the simulation of earthquake physics in urban environments. As cities continue to grow, preparedness and an improved understanding of how ground shaking affects buildings and urban infrastructure become increasingly important. On Summit, the Tokyo team expanded on its 2014 algorithm, which was also a Gordon Bell finalist, achieving a fourfold speedup and coupling the shaking of the ground and of urban structures during large earthquakes within a single simulation.
  • A Lawrence Berkeley National Laboratory-led collaboration that trained a deep neural network to identify extreme weather patterns in high-resolution climate simulations. The team, led by Berkeley data scientist Prabhat, plans to use the AI software to predict how extreme weather is likely to change in the future. By tapping into the specialized tensor cores built into Summit’s NVIDIA GPUs at scale, the Berkeley team achieved a peak performance of 1.13 exaops, the fastest deep-learning performance yet reported. Though the team applied its work to climate science, many of its innovations can be adapted to other deep-learning applications. (A tensor-core training sketch follows this list.)
  • An ORNL team led by data scientist Robert Patton that scaled a deep-learning technique on Summit to produce intelligent software that can automatically extract materials’ atomic-level structural information from electron microscopy data. With advanced microscopes capable of producing hundreds of images per day, real-time feedback supplied by AI could give scientists the ability to fabricate materials at the atomic level. Scaled across 4,200 nodes, the team’s MENNDL algorithm achieved a speed of 152.5 petaflops, with an estimated performance of 167 petaflops across the whole machine.
  • A team from Lawrence Berkeley and Lawrence Livermore National Laboratories led by physicists André Walker-Loud and Pavlos Vranas that developed improved algorithms to help scientists predict the lifetime of the neutron and answer fundamental questions about the universe. The team built upon its previous work using lattice quantum chromodynamics, a numerical method for calculating the underlying physics of the subatomic particles that make up protons and neutrons. In addition to optimized GPU software, the team developed lightweight, application-agnostic management software capable of handling hundreds of thousands of tasks. Using the GPU-accelerated Sierra system at Lawrence Livermore and the OLCF’s Summit, the team was able to start 1,056 four-node jobs on 4,224 nodes in 5 minutes, achieving machine-to-machine speedups of 10 and 15 times, respectively, over the OLCF’s previous leadership-class system, Titan. The achievement supplies nuclear physicists with the computational power needed to support the experimental search for new physics. (A task-pool sketch of this many-task pattern follows this list.)
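
A common thread in several of these results is mixed-precision arithmetic: doing the bulk of the multiplications in 16-bit floating point, the format Summit’s tensor cores consume, while accumulating sums at higher precision. The Python sketch below is a minimal, hypothetical illustration of why genomics-style comparisons tolerate this so well: when the underlying data are few-bit values (here, toy 0/1 genotype indicators), the low-precision inputs and modest integer sums are represented exactly. The data sizes and variable names are placeholders, not the team’s actual code.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy stand-in data: rows are individuals, columns are binary variant
# indicators (hypothetical; real population-scale data is far larger).
n_individuals, n_variants = 256, 1024
genotypes = rng.integers(0, 2, size=(n_individuals, n_variants))

# Reference all-pairs co-occurrence counts in full double precision.
exact = genotypes.astype(np.float64) @ genotypes.astype(np.float64).T

# Mixed precision: round the inputs to float16 (as tensor cores ingest),
# then accumulate the dot products in float32 to limit rounding error.
g16 = genotypes.astype(np.float16)
mixed = g16.astype(np.float32) @ g16.astype(np.float32).T

# For 0/1 data, float16 inputs are exact and sums up to 1,024 are exactly
# representable in float32, so the "cheap" result matches exactly.
print("max abs error:", np.abs(mixed - exact).max())
```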

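For the deep-learning entries, the tensor-core speedup comes from the same recipe that mainstream frameworks expose as automatic mixed precision. Below is a minimal, self-contained PyTorch sketch of that pattern; the tiny network, random stand-in “climate” tiles, and hyperparameters are all placeholders, not the Berkeley team’s actual model or training harness.

```python
import torch
from torch import nn
from torch.nn import functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder two-layer "segmentation" network over random image tiles.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, kernel_size=3, padding=1),
).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

images = torch.randn(8, 3, 64, 64, device=device)
labels = torch.randint(0, 2, (8, 64, 64), device=device)

for step in range(10):
    optimizer.zero_grad()
    # Ops inside autocast run in float16 where safe, engaging tensor cores.
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = F.cross_entropy(model(images), labels)
    scaler.scale(loss).backward()  # scale loss so FP16 grads don't underflow
    scaler.step(optimizer)         # unscales grads; skips step on inf/nan
    scaler.update()
```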
 
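The lattice QCD team’s job-bundling approach generalizes: rather than submitting hundreds of thousands of tiny jobs to the batch scheduler, one large allocation runs a manager that streams short tasks to idle workers. Here is a hypothetical, scheduler-free Python sketch of that pattern using a thread pool; the task list and the echoed command are placeholders for real per-task launch commands (for example, jsrun invocations inside a Summit batch allocation).

```python
import concurrent.futures as cf
import subprocess

# Hypothetical task list: thousands of short, independent measurements.
tasks = [f"--input lattice_{i}.cfg" for i in range(1000)]

def run_task(args: str) -> int:
    # Placeholder launch: `echo` stands in for the real per-task command.
    return subprocess.run(["echo", "task", args],
                          stdout=subprocess.DEVNULL).returncode

# A fixed pool of workers drains the queue, keeping nodes busy without
# paying scheduler overhead for each small job.
with cf.ThreadPoolExecutor(max_workers=32) as pool:
    codes = list(pool.map(run_task, tasks))

print(f"{codes.count(0)}/{len(tasks)} tasks succeeded")
```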

UT-Battelle manages ORNL for the Department of Energy’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit https://science.energy.gov/.

Jonathan Hines

Jonathan Hines is a science writer for the Oak Ridge Leadership Computing Facility.