Educating current, future HPC leaders

The OLCF hosted three summer workshops aimed at various levels of HPC experience

The OLCF offered three workshops this summer to fit anyone’s talents and interests.

The biggest draw was the “OpenACC Tutorials” workshop, cosponsored by Cray and NVIDIA. This three-day, hands-on event, organized by OLCF Training Coordinator Fernanda Foertter, let users of Titan port and optimize their own applications. There were 70 participants in total, with 35 joining Foertter via live webcast.

The OpenACC workshop drew attendees from industry (Rolls-Royce), universities (Georgia Tech, Princeton, and Stony Brook), national labs (Sandia and ORNL), and other government agencies (the National Oceanic and Atmospheric Administration).

“OpenACC allows for programmers to essentially comment or send out portions of their application’s code to utilize Titan’s GPUs,” said Foertter. “Certain aspects of a program, such as logical statements, are performed better by the CPUs, while purely arithmetic operations are done much better by the GPUs. By using OpenACC, the programmer can tell their code exactly where to do what.”

For those new to supercomputing and parallel computing, the OLCF, alongside the National Institute for Computational Sciences (NICS), held a “Crash Course in Supercomputing” June 18–19. This two-day event was also organized by Foertter.

“This was an outreach effort so that people who aren’t normally in this industry can become familiar with what we do here,” said Bobby Whitten, OLCF Education Coordinator. “These fundamental courses are meant to bring people into the world of parallel processing, but sometimes they are not even aware of what that means. So it’s really an introduction, an appetizer for them to see what is really involved when you say ‘supercomputing.’”

Approximately 40 summer students, interns, and faculty members attended the course, including 6 who were able to follow the course online. On the first day, they learned the basics, starting with an overview of the Unix environment, some common Unix commands, and the vi editor. From there, they learned some programming fundamentals, specifically concentrating on the basics of makefiles and programming with the C language. Finally, they used what they had learned by programming, compiling, and running a program of their own.

The second day was dedicated to going parallel, with participants introduced to the two leading parallel programming models, MPI and OpenMP. As on the day before, they were able to put together all the concepts by programming, compiling, and running their own parallel code on one of the National Center for Computational Sciences (NCCS) supercomputers.

For those who were moderately familiar with parallel computing, the OLCF held the workshop “Programming with Big Data in R: pbdR” on June 17. This half-day event, organized by George Ostrouchov, senior research staff member from the Scientific Data Group in the Computer Science and Mathematics division, introduced the approximately 20 attendees to a statistical programming language called R and its high-performance extension, aptly named Programming with Big Data, or pbdR for short.

The first portion of the workshop focused solely on R. Attendees learned how to use this integrated suite of software to look at data manipulation, calculation, and graphical display with in-depth examples. Once everyone was familiar with these examples, attendees quickly progressed to pbdR in the second, longer portion of the workshop. The session closed by revisiting the in-depth examples from the first session, but with some new twists.

“People are starting to hear about R, getting curious about it, and wanting to learn it, and that has only been in the last couple of years,” said Whitten, who helped organize the event. “It seems to be the best programming environment out there now for doing big data analysis. You have a lot more control, you have a lot more ability to look at data in interesting ways that you could not do before, either because of the size of the data or the complexity of the data, that another programming tool could not handle on its own. This workshop was really a way of getting people familiar with R at least to start the conversation.” —by Austin Koenig