Exascale Modeling of Complex Fluid Flows in Wind Turbines and Large-Scale Wind Farms

PI: Michael Sprague, National Renewable Energy Laboratory

In 2016, the Department of Energy’s (DOE’s) Exascale Computing Project (ECP) set out to prepare advanced software for the arrival of exascale-class supercomputers capable of 1 quintillion or more calculations per second. That meant rethinking, reinventing, and optimizing dozens of scientific applications and software tools to leverage exascale’s thousand-fold increase in computing power. That time is now, as the first DOE exascale supercomputer, the Oak Ridge Leadership Computing Facility’s (OLCF’s) Frontier, opens to users around the world. “Exascale’s New Frontier” explores the applications and software technology that will expedite scientific discoveries in the exascale era.

The Science Challenge

Wind energy is an important component of U.S. energy production and has tremendous growth potential. Wind turbines are the largest rotating machines in the world; some measure over 800 feet tall and weigh more than 200 tons. An average turbine blade is about 200 feet long, weighs 35 tons, and can bend and twist to adapt to changing conditions.

According to Department of Energy reports, wind energy provides almost 10% of the nation’s electricity. On average, a newly installed, land-based wind turbine has a capacity of 3 megawatts, meaning a large wind farm with hundreds of turbines can generate enough electricity to power hundreds of thousands of homes.
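As a rough illustration of that arithmetic (a minimal sketch: the turbine count, capacity factor, and per-home consumption below are assumed, typical values, not figures from DOE reports):

```python
# Back-of-envelope estimate of homes powered by a large wind farm.
# Only the 3 MW turbine capacity comes from the article; everything
# else is an assumed, typical value for illustration.

N_TURBINES = 300            # "hundreds of turbines" (assumed)
CAPACITY_MW = 3.0           # average new land-based turbine (article)
CAPACITY_FACTOR = 0.35      # assumed typical land-based capacity factor
HOME_KWH_PER_YEAR = 10_500  # assumed average U.S. household usage

annual_mwh = N_TURBINES * CAPACITY_MW * CAPACITY_FACTOR * 8_760  # hours/year
homes = annual_mwh * 1_000 / HOME_KWH_PER_YEAR                   # MWh -> kWh

print(f"Annual generation: {annual_mwh:,.0f} MWh")  # ~2.8 million MWh
print(f"Homes powered:     {homes:,.0f}")           # ~260,000 homes
```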

Wind energy is a rapidly growing sector in the U.S. due in large part to technological innovations and cost reductions. However, despite tremendous advances, realizing wind energy’s true potential requires radically enhanced tools for modeling these complex multiphysics, multiscale systems. Developing a virtual laboratory based on validated models used to predict airflow is key to optimizing the performance of the current wind turbine fleet and testing new turbine designs.

A rendering from an ExaWind simulation of 5-megawatt-class turbines exhibiting the complicated airflow in the system. Credit: Nicholas Brunhart-Lupo/NREL.

Why Exascale?

Such a virtual laboratory requires the ability to simulate and predict complex airflows, not just around single turbine blades but across entire wind farms. This means resolving turbulent fluid flows on extremely fine computational grids, over myriad surfaces, across miles of varying terrain and atmospheric conditions.

“Solving the fluid dynamics of wind farms is a multiscale problem. That means if we want to capture the large wakes that come off these giant turbines, then we have to capture the very small scales where everything starts — the extremely thin layers of air around the blades and the complicated deformations of the blades,” said Michael Sprague, chief wind computational scientist at the National Renewable Energy Laboratory.

“Then, we want to know how those wakes interact with turbines downwind that could be thousands of feet away. We need to know how those different dynamics work in different terrains — flat, hilly or offshore. We then need to simulate each of those terrains under different atmospheric conditions including hot, unstable air that can contain lots of turbulent mixing as well as cooler, more stable conditions with steady winds that produce longer wakes. And ocean conditions are something else entirely.”

A single one of these high-fidelity simulations can involve billions of equations and take weeks to solve, even on a modern exascale supercomputer.
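For a sense of where those billions of equations come from (a minimal sketch: the unknowns-per-point count and time-step reasoning are illustrative assumptions, not ExaWind’s actual discretization):

```python
# Illustrative problem-size estimate for a farm-scale CFD simulation.
# The grid-point count matches the Frontier runs described below; the
# number of unknowns per point is an assumption for illustration.

GRID_POINTS = 40e9       # grid points in the largest Frontier runs
UNKNOWNS_PER_POINT = 5   # assumed: 3 velocity components, pressure,
                         # and one turbulence variable per point

unknowns = GRID_POINTS * UNKNOWNS_PER_POINT
print(f"Unknowns per time step: {unknowns:.1e}")  # ~2.0e11

# The thin boundary layers around the blades also force very small time
# steps, so simulating even minutes of farm operation means repeating a
# ~200-billion-unknown solve many thousands of times -- hence the weeks
# of wall-clock time.
```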

Frontier Success

Enter ExaWind, a suite of computational fluid dynamics codes optimized to run on the world’s most powerful exascale supercomputers — machines capable of performing a quintillion, or a billion-billion, calculations per second. The goal of the ExaWind project is to develop predictive simulations of wind farms that contain dozens of megawatt-class turbines spread across 30 square miles.

Using the Frontier supercomputer, the ExaWind team performed some of the highest-fidelity wind energy simulations to date. After scaling up codes from previous leadership-class supercomputers to leverage Frontier’s AMD Instinct MI250X GPU accelerators, the team ran simulations using approximately 4,000 of Frontier’s 9,408 nodes with remarkable efficiency.

“The project was a huge success. We exceeded our challenge problem parameters by 2×. We set out to simulate 20 billion grid points. We were able to run almost 40 billion grid points,” Sprague said. “We ran on 44% of the Frontier system using the new GPUs and were able to do all the physics that we said we would use. And we’re still not done.”
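A quick consistency check on those numbers (a minimal sketch: Frontier’s node and GPU counts are published specifications, but the even spread of grid points is an assumed simplification):

```python
# Rough distribution of the ~40-billion-point run across Frontier.

TOTAL_NODES = 9_408    # Frontier's published node count
FRACTION_USED = 0.44   # "44% of the Frontier system"
GCDS_PER_NODE = 8      # 4 AMD MI250X cards, 2 compute dies each
GRID_POINTS = 40e9

nodes_used = TOTAL_NODES * FRACTION_USED
gpu_dies = nodes_used * GCDS_PER_NODE

print(f"Nodes used:          {nodes_used:,.0f}")             # ~4,140
print(f"Points per GPU die:  {GRID_POINTS / gpu_dies:,.0f}") # ~1.2 million
```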

As part of the ECP, ExaWind was also optimized for NVIDIA and Intel GPUs so that researchers can run wind simulations on more of the nation’s leading machines.

What’s Next?

With further refinement, these large, multiscale simulations will also provide a foundation for surrogate models, trained with machine learning and artificial intelligence, that account for finer, unresolved scales within the larger simulation.

To help the U.S. achieve its goal of reducing the cost of floating offshore wind energy by 70% by 2035, the team is now focusing on simulating offshore wind turbines on floating platforms in deep-sea environments as part of DOE’s Floating Offshore Wind Shot. These simulations introduce new complex physics, including wind-wave interactions, large platform motions, and the dynamics of a turbine’s mooring lines and cables.

“There’s a tremendous opportunity to capture that energy, but we have numerous science and engineering challenges that need to be addressed,” Sprague said. “High-performance computing and high-fidelity modeling are foundational steps to making this happen.”

ExaWind is a collaborative effort among Oak Ridge National Laboratory (ORNL), NREL, Sandia National Laboratories, and Lawrence Berkeley National Laboratory. ExaWind is being advanced for floating offshore wind under the FLOWMAS Energy Earthshot Research Center, which is supported by the DOE Office of Science, and the Floating Turbine High-Fidelity Modeling project, which is supported by the DOE Wind Energy Technologies Office.

Support for this research came from the ECP, the Wind Energy Technologies Office within DOE’s Office of Energy Efficiency and Renewable Energy, and the Advanced Scientific Computing Research program within the DOE Office of Science. The OLCF is a DOE Office of Science user facility.

UT-Battelle LLC manages ORNL for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

Jeremy Rumsey

Jeremy Rumsey is a senior science writer and communications specialist at Oak Ridge National Laboratory's Oak Ridge Leadership Computing Facility. He covers a wide range of science and technology topics in the field of high-performance computing.