High-pressure turbines are complex pieces of engineering and vital components of the gas turbine engines that propel jet aircraft. The more efficient these engines are, the less fuel they burn and the lower the costs for the aircraft industry. The turbine blades sit just behind the combustion chamber, and the hot gas that spews out of it drives them at high rotational speeds. Because they are crucial to powering aircraft, scientists aim to study them in extreme detail to achieve greater operating efficiency and thus cost savings.
But these large, dynamic systems are difficult to study through experiments and physical testing. Modeling them on supercomputers gives engineers an unimpeded view of what occurs during operation, but the simulations are complex: capturing every scale of the flow in 3D, from structures the size of a blade down to tiny eddies, or swirling fluid motions, only microns across, demands massive supercomputing resources.
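A back-of-envelope estimate shows why that range of scales is so demanding. Turbulence theory says the ratio of the largest eddies to the smallest grows as the Reynolds number to the 3/4 power, so a 3D grid that resolves both ends of the range needs on the order of Re^(9/4) points. The short Python sketch below works through the arithmetic; the Reynolds number is an assumed, illustrative value, not one from the team’s simulations.

```python
# Illustrative estimate of scale-resolving grid cost (assumed values).
# Kolmogorov scaling: largest/smallest eddy size ~ Re^(3/4),
# so a 3D grid resolving both ends needs on the order of Re^(9/4) points.

Re = 1.0e6                    # assumed Reynolds number for a turbine passage
scale_ratio = Re ** 0.75      # largest-to-smallest eddy size ratio
grid_points = Re ** 2.25      # grid points for a scale-resolving 3D run

print(f"scale ratio ~ {scale_ratio:,.0f}")   # ~31,600
print(f"grid points ~ {grid_points:.2e}")    # ~3e13, hence supercomputers
```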
“In a gas turbine—or any turbine—you have both moving and stationary parts, and the question is how those interact,” said Richard Sandberg, chair of computational mechanics in the Department of Mechanical Engineering at the University of Melbourne. “This is important because the flow of hot gases that’s coming off one blade of a turbine is swirling violently as it hits the next blade.” Turbines can have anywhere from 10 to 14 rows of around 50 rotating blades each, so these blade-to-blade interactions repeat throughout the machine, making them crucial to understand.
A team led by scientists at General Electric (GE) Aviation and the University of Melbourne has been using supercomputers to model these turbulent flows, tumultuous mixtures of combusted fuel and air, for the last decade to better determine the effects of turbulence on performance. The problem, though, is that the models often used for turbulence are not entirely accurate.
“In turbulence modeling, you generally don’t capture all these different scales,” Sandberg said. “You use a model that accounts for the effects of what is happening at the smaller scales, but these models have been tuned for certain situations. The problem with this is that as soon as you try something different, you won’t know whether your model is going to give you the right answer.”
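What Sandberg describes is a closure model: an inexpensive formula standing in for the unresolved small eddies. Purely as an illustration, the sketch below implements a classic Smagorinsky-type eddy-viscosity closure on a toy velocity field. It is not the model the team uses, and its constant Cs is tuned, which is exactly the situation-specific calibration Sandberg warns about.

```python
import numpy as np

# Generic Smagorinsky-type closure on a toy 3D velocity field -- an
# illustration of modeling unresolved scales, NOT the team's models.

Cs = 0.17                 # tuned model constant (situation-dependent)
delta = 1.0e-3            # grid spacing [m]: everything smaller is modeled

rng = np.random.default_rng(0)
u = rng.standard_normal((3, 16, 16, 16))   # toy velocity field u_i(x)

# Strain-rate tensor S_ij = 0.5 * (du_i/dx_j + du_j/dx_i)
grad = np.array([[np.gradient(u[i], delta, axis=j) for j in range(3)]
                 for i in range(3)])
S = 0.5 * (grad + grad.transpose(1, 0, 2, 3, 4))

# Eddy viscosity nu_t = (Cs * delta)^2 * |S|: the modeled net effect
# of the unresolved small eddies on the resolved flow
S_mag = np.sqrt(2.0 * np.einsum('ij...,ij...->...', S, S))
nu_t = (Cs * delta) ** 2 * S_mag
print("mean modeled eddy viscosity:", nu_t.mean())
```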
The team knew they needed more supercomputing power to resolve the multiple scales required to get a better grasp of turbulent flows. The researchers were awarded an allocation of computing time on the 200-petaflop Summit supercomputer, the flagship system of the Oak Ridge Leadership Computing Facility, a US Department of Energy (DOE) Office of Science User Facility at DOE’s Oak Ridge National Laboratory. The computing time was funded by the Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program.
The researchers set their sights on simulating a high-pressure turbine in a real engine condition, a feat that has been impossible—until now.
Modeling at real engine conditions
Equipped with the most powerful supercomputer in the nation, the team embarked on a journey to model the turbulent flows close to the blades of an engine’s high-pressure turbine. This knowledge will help engineers understand how heat is transferred near the blades, enabling them to design engines with longer lifespans and ensure that their components won’t fail—a measure that is important for both safety and cost savings.
“If you can understand where it is going to be hot, you can protect your blade,” Sandberg said. “You can use that information to design components that are going to last longer, which means your engine overall will last longer and your maintenance intervals will be longer.”
The team used the High-Performance Solver for Turbulence and Aeroacoustic Research (HiPSTAR) code to study the aerodynamics of the first row of blades in a high-pressure turbine at real-engine conditions on Summit.
“Our flow speeds and everything else were similar to what you would really have inside of the engine,” said Sriram Shankaran, a consulting engineer at GE Aviation. “We set up different cases with different types of conditions that mimicked the gas that comes out of the combustor, and we are analyzing the results to look at the effect of this turbulence on how the flow evolves as it goes through that first stage of the turbine.”
Using the HiPSTAR code on Summit, the team ran real-engine cases for the first time, five in total, capturing eddies ranging from the largest scales down to those just tens of microns from the blade surface. Specifically, they performed direct numerical simulations (DNS), computational fluid dynamics simulations that directly capture the full range of turbulent scales rather than relying on separate models that can only estimate turbulent effects.
“A DNS is the highest resolution simulation you can run, and these are as close to reality as you can get,” Shankaran said. “You can’t even get this kind of precision experimentally, but with Summit, we can.”
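To make the DNS idea concrete, the toy example below directly simulates the 1D viscous Burgers equation, a common stand-in for turbulence, on a grid fine enough to resolve the smallest (viscous) scales, so no turbulence model appears anywhere. A real DNS such as the team’s applies the same principle in 3D at enormously greater cost; every parameter here is an illustrative assumption.

```python
import numpy as np

# Toy "direct" simulation: 1D viscous Burgers equation on a grid fine
# enough that the viscous scales are resolved -- no turbulence model.

N, L, nu = 2048, 2 * np.pi, 5e-3        # grid points, domain length, viscosity
dx = L / N
x = np.linspace(0.0, L, N, endpoint=False)
u = np.sin(x)                           # initial velocity profile

dt = 0.2 * min(dx, dx * dx / nu)        # small step for numerical stability
for _ in range(2000):
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)        # centered d/dx
    d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (-u * dudx + nu * d2udx2)                    # explicit Euler

print("velocity range after integration:", u.min(), u.max())
```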
The cases used different Mach numbers, which express the flow’s speed relative to the speed of sound. Each simulation took 4–6 weeks to run on the mammoth Summit system, making these some of the most computationally intensive turbomachinery calculations ever performed.
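For context, the Mach number is simply the flow speed divided by the local speed of sound, and the speed of sound depends strongly on how hot the gas is, which is part of what makes real-engine conditions distinctive. The values in the sketch below are assumed for illustration and are not the team’s conditions.

```python
import math

# Illustrative Mach-number arithmetic (all values assumed, not the team's):
# M = u / a, where the speed of sound a = sqrt(gamma * R * T).

gamma = 1.3        # assumed ratio of specific heats for hot combustion gas
R = 287.0          # gas constant for air [J/(kg K)]
T = 1500.0         # assumed gas temperature at turbine inlet [K]
u = 600.0          # assumed flow speed [m/s]

a = math.sqrt(gamma * R * T)   # speed of sound in the hot gas
print(f"a = {a:.0f} m/s, Mach = {u / a:.2f}")   # ~748 m/s, Mach ~0.80
```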
Better engine design, big cost savings
From the simulations, the researchers determined which regions near a turbine blade experience a greater loss of energy. For the case with the highest Mach number, they discovered an extra loss of energy resulting from strong shock waves, or violent changes in pressure, that interact with the blade’s trailing edge and wake to cause a massive amount of turbulence.
The team’s simulations allowed them to identify this state of the flow at a distance of tens of microns from the surface of the turbine blade, closer than the team had ever probed before. The results were published in the Journal of Turbomachinery.
The simulations also revealed where most of the losses occur and showed where current design tools fail in predicting the correct level of loss.
“These simulations give us a time history of what is happening in the fluid, which reproduces what we believe is happening in nature, and we are getting these at nanosecond intervals with a spatial accuracy of tens of microns away from the blade surfaces,” Shankaran said.
The team’s simulations are helping GE better understand how to optimize the flow through the engine by minimizing turbulence, aiding the design process and leading to better engines.
“We want to make the gas turbine more efficient,” Sandberg said. “And if you understand how the flow of hot gas behaves inside of the turbine, you can adapt your design to extract more power from that flow, which is what the turbine is doing.”
More accurate prediction of real-engine conditions promises huge improvements in fuel consumption and its knock-on effects.
“A 1 percent reduction in fuel consumption across a fleet of engines is equal to about a billion dollars a year in fuel cost savings,” Shankaran said. “Reduced fuel consumption also translates into reduced emissions: because CO2 output scales directly with fuel burned, a 1 percent reduction in fuel burn cuts CO2 emissions by roughly 1 percent as well. Less fuel on board also reduces the weight of the plane, which can increase the range the plane can fly. Each of these improvements enables us to maintain our edge in a highly competitive global industry. If we don’t keep improving, the competition will catch up.”
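Shankaran’s dollar figure is easy to sanity-check with rough numbers. The fleet fuel cost and fuel mass below are assumed values for illustration, not GE figures; only the CO2-per-kilogram-of-fuel factor is fixed by combustion chemistry.

```python
# Back-of-envelope check with ASSUMED inputs (not GE figures).
fleet_fuel_spend = 100e9      # assumed annual fleet fuel cost [$]
fleet_fuel_mass = 90e9        # assumed annual fleet fuel burn [kg]
co2_per_kg_fuel = 3.16        # kg CO2 per kg jet fuel (combustion chemistry)

dollars_saved = 0.01 * fleet_fuel_spend
co2_saved_mt = 0.01 * fleet_fuel_mass * co2_per_kg_fuel / 1e9  # megatonnes

print(f"1% fuel saving ~ ${dollars_saved / 1e9:.0f}B per year")
print(f"1% fuel saving ~ {co2_saved_mt:.1f} Mt CO2 per year")
```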
For their next simulations, the team has turned to cases that include the walls at the ends of the blades, a feature that adds a new layer of complexity to the flow and realism to the simulations.
“We are performing a new set of simulations that include these end walls that generate a whole new flow pattern by themselves,” Sandberg said.
The team is also using the data generated in the simulations to build new models via machine learning. With more accurate turbulence models in hand, engineers can design the next generation of engines using new-and-improved design tools.
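The article does not spell out the team’s machine-learning method, but the general workflow can be sketched: extract local flow features and the corresponding turbulence quantities from DNS fields, then fit a model that cheaper design tools can evaluate. The example below uses synthetic data and a plain least-squares fit purely as an illustration of that workflow.

```python
import numpy as np

# Illustrative data-driven modeling step: fit a closure from "DNS" data.
# Features, targets, and the linear model form are all assumptions here;
# the team's actual machine-learning approach is more sophisticated.

rng = np.random.default_rng(1)
n = 10_000
strain = rng.uniform(0.0, 1.0, n)      # feature 1: local strain-rate measure
rotation = rng.uniform(0.0, 1.0, n)    # feature 2: local rotation measure

# Synthetic stand-in for eddy viscosity extracted from DNS fields
nu_t_dns = 0.08 * strain - 0.03 * rotation + 0.01 * rng.standard_normal(n)

X = np.column_stack([strain, rotation, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, nu_t_dns, rcond=None)
print("fitted model: nu_t ~ "
      f"{coef[0]:.3f}*strain + {coef[1]:.3f}*rotation + {coef[2]:.3f}")
```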
“Simulating these kinds of problems is extremely challenging,” Sandberg said. “Five years ago, we just couldn’t do this. We had to go to some kind of model scale. That was the only thing we were able to do. Today, we can simulate engine-like conditions on Summit.”
Related Publications: Y. Zhao and R. D. Sandberg, “High-Fidelity Simulations of a High-Pressure Turbine Vane Subject to Large Disturbances: Effect of Exit Mach Number on Losses,” Journal of Turbomachinery 143, no. 9 (2021), doi:10.1115/1.4050453.
Y. Zhao and R. D. Sandberg, “Using a New Entropy Loss Analysis to Assess the Accuracy of RANS Predictions of an HPT Vane,” Journal of Turbomachinery 142, no. 8 (2020): 081008, doi:10.1115/1.4046531.
The research was supported by DOE’s Office of Science. UT-Battelle LLC manages Oak Ridge National Laboratory for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.