People - Written by Jeremy Rumsey on October 18, 2012

Educating Future Generations in Supercomputing Basics


OLCF holds annual training course in HPC fundamentals

The HPC fundamentals course was aimed at employees and interns at ORNL interested in learning more about high-performance computing.

High-performance computing (HPC) became a little less mysterious this summer for interested interns and employees at ORNL. For the second year in a row, the OLCF offered an eight-week crash course in supercomputing.

“Jaguar, Titan, and HPC are terms that many people are seeing in news articles or hearing about,” said ORNL’s Bobby Whitten. “This course is designed to give more insight into HPC and to provide a glimpse into what the Oak Ridge Leadership Computing Facility’s mission is. The course is designed to introduce participants to basic concepts of HPC.”

Whitten assists with the education program for the OLCF and is group leader of the User Assistance Group for the National Institute for Computational Sciences.

HPC is all about speed, which is one reason why most HPC systems use Unix or a similar command-line operating system. Because it eschews the graphical interface used in consumer products such as the Mac and Windows operating systems, Unix can devote nearly all of the machine’s resources to churning through calculations.

“Most people have never used the Unix operating system before. They’re used to clicking buttons, looking at pretty pictures, and updating their Facebook status,” Whitten said.

Week one, therefore, began with a lesson in the Unix command-line interface.

The course also introduced students to parallel computing. The time is long past when an HPC system could get its power from a single processor. Instead, modern supercomputers divide a large problem into many smaller problems in order to get a solution more quickly.
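That divide-and-conquer idea can be sketched in a few lines of Python. This is a hypothetical illustration, not material from the course: a big sum is split into chunks, and a pool of worker processes handles the chunks in parallel before the partial answers are combined.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker process handles one small piece of the larger problem."""
    return sum(chunk)

def parallel_sum(numbers, workers=4):
    """Split the input into roughly one chunk per worker, sum the
    chunks in parallel, then combine the partial results."""
    size = max(1, len(numbers) // workers)
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Gives the same answer as the serial sum(range(100)), i.e. 4950.
    print(parallel_sum(list(range(100))))
```

The final combining step (`sum` over the partial results) is the serial part of the job; the next paragraphs explain why that, and the coordination it requires, keep the speedup from being perfect.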

This approach, however, has its challenges. Whitten demonstrated this point by presenting the class with a common task in which each student acted as an independent processor assigned to a specific portion of the job. He used an analogy, developed by Henry Neeman of the University of Oklahoma, of putting together a jigsaw puzzle. Theoretically, the more people working on the puzzle, the less time it should take, right? Well, kind of.
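The “well, kind of” has a classic quantitative form that the article does not name: Amdahl’s law. The sketch below, an illustration assumed here rather than anything taught in the course, shows that if some fraction of the work is inherently serial, adding more workers yields diminishing returns.

```python
def amdahl_speedup(serial_fraction, workers):
    """Speedup predicted by Amdahl's law for a program in which
    serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

# With 10% serial work, even unlimited workers cap the speedup below 10x.
for n in (1, 2, 10, 100, 1000):
    print(n, round(amdahl_speedup(0.1, n), 2))
```

With a 10 percent serial fraction, doubling from one worker to two helps a lot, but going from 100 workers to 1,000 barely moves the needle; the puzzle’s edge pieces still have to be sorted by somebody.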

The class next learned the term “contention,” which arises when multiple processors need the same data at the same time and must coordinate, and often wait, to access it. This communication takes time and reduces overall efficiency. The class also learned about load balancing, in which computing tasks are divided so that no processors sit idle while others finish their jobs, a situation that negates the benefit of all those helping hands.
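Load balancing can be made concrete with a small thought experiment, a hypothetical sketch rather than course material: compare handing out uneven tasks in naive round-robin order against a greedy scheduler that always gives the next-largest task to the least-loaded worker. The “makespan” is when the slowest worker finishes, which is when the whole job is done.

```python
import heapq

def makespan_roundrobin(task_times, workers):
    """Naive split: hand out tasks in order, ignoring their size.
    The finish time is set by the most heavily loaded worker."""
    loads = [0] * workers
    for i, t in enumerate(task_times):
        loads[i % workers] += t
    return max(loads)

def makespan_balanced(task_times, workers):
    """Greedy load balancing: assign each task, largest first,
    to whichever worker currently has the least work."""
    loads = [0] * workers
    heapq.heapify(loads)
    for t in sorted(task_times, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + t)
    return max(loads)

tasks = [8, 7, 6, 5, 4, 3, 2, 1]  # uneven job sizes, arbitrary time units
print(makespan_roundrobin(tasks, 2))  # -> 20
print(makespan_balanced(tasks, 2))    # -> 18 (the ideal: 36 units / 2 workers)
```

With two workers the greedy scheduler hits the perfect split of 18 time units, while round-robin leaves one worker idle for 4 units at the end, exactly the wasted expensive processor time Whitten warns about below.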

“If we are a very expensive processor sitting in a very expensive machine like Jaguar then we are wasting money if we can’t keep all processors as busy as possible,” Whitten said. By using effective algorithms and communication strategies to distribute work, computational scientists can use resources more efficiently.

By the end of the course, the class had built a small cluster by networking multiple Mac minis and running parallel programs the students wrote themselves. Although these machines are far less powerful than Jaguar, the principles taught translate to all things HPC.

“Getting the message out about HPC is important,” Whitten said. “Computational science has become an important way of doing science. It is in the best interests of everyone to expose as many people as possible to these ideas to maintain America’s leadership in science, technology, engineering and mathematics.”—by Jeremy Rumsey