It has been basic United States policy that Government should foster the opening of new frontiers. It opened the seas to clipper ships and furnished land for pioneers. Although these frontiers have more or less disappeared, the frontier of science remains.
Exascale is the next level of computing performance. By solving calculations five times faster than today’s top supercomputers, exceeding a quintillion, or 10¹⁸, calculations per second, Frontier will enable scientists to develop new technologies for energy, medicine, and materials.
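A quintillion calculations per second is hard to picture. A back-of-envelope sketch puts the scale in perspective; the laptop figure (~100 gigaflops) is an illustrative assumption, not a number from the article:

```python
# Back-of-envelope scale comparison for "a quintillion (10**18) calculations
# per second". The laptop figure below is an illustrative assumption.
QUINTILLION = 10**18            # 1 exaflop: 1e18 floating-point ops per second
laptop_flops = 1e11             # ~100 gigaflops, a rough modern-laptop estimate

# How long a laptop would need to match one second of exascale work:
laptop_seconds = QUINTILLION / laptop_flops
laptop_days = laptop_seconds / 86_400   # seconds per day

print(f"One exascale-second ≈ {laptop_days:.0f} days of laptop compute")
```

Under that assumption, a single second of exascale computation corresponds to roughly 116 days, nearly four months, of continuous laptop work.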
Fusion energy could one day be a transformative energy source because it will be clean, cheap, and nearly unlimited, with seawater supplying its basic fuel. A whole-device computer model can offer insights about the plasma processes that go on in the fusion device and predictions regarding the performance and optimization of next-step experimental facilities. Using Frontier, we will be able to add new capabilities to the whole-device model, including the effects of the plasma boundary, the effects of fusion products, the influence of sources of heating, and the superimposed engineering structure that would make a fusion reactor operate as a unit.
Studying materials at exascale could have a significant impact on our world, because materials show up everywhere in the economy. Using a combination of advanced methods and scalable codes on Frontier, we’ll be able to perform simulations with potential millionfold increases in our time scales. We’ll also be able to do one-to-one comparisons with experiments and make better predictions about the evolution of these systems.
Combustion systems are projected to dominate the energy marketplace for decades to come. One engine concept—a low-temperature, reactivity-controlled, compression ignition engine—has the potential to deliver groundbreaking efficiencies of up to 60 percent while reducing emissions. On Frontier, we anticipate using high-fidelity simulations with machine learning and AI to model the underlying processes of this promising engine.
The thing that’s really attractive about Frontier is the powerful nodes. Having fewer, more powerful nodes with a very tightly integrated set of CPUs and GPUs at the node level gives us the ability to distribute hundreds or thousands of microstructure and property calculations, each on one or a few nodes, across the machine. With Frontier, we’re going to be able to predict the microstructure and properties of an additively manufactured part at much higher fidelity and in many more locations within a part than we can with even the world’s current fastest supercomputers.
Exascale will enable cosmology simulations large enough to model the distribution of billions of galaxies but also fine-grained enough to compare to a range of ground- and satellite-based observations, such as cosmic microwave background measurements and radio, optical, and x-ray data sets. At the same time, Frontier’s AI-oriented technology will enable us to analyze data from simulations in ways we simply can’t today.
We’re really excited about having another GPU-based machine here at Oak Ridge. Over the past several years, we’ve spent a lot of effort optimizing our codes to make them run efficiently on GPU-based architectures, so we’re looking forward to continuing that trend with Frontier.
ECP Software Technology is excited to be a part of preparing the software stack for Frontier. We are already on our way, using Summit and Sierra as launching pads. Working with OLCF, Cray, and AMD, we look forward to providing the programming environments and tools, and the math, data, and visualization libraries, that will unlock the potential of Frontier for producing the countless scientific achievements we expect from such a powerful system. We are privileged to be part of the effort.
As the compute power increases, it provides new opportunities to obtain more and more details regarding the interactions that are driving organisms, ecosystems, and global climate patterns. With Frontier, we could potentially produce even higher resolution calculations to better understand the dynamics of complex systems.
Exascale computing will be essential to precisely illuminating phenomena that emerge from neutrino physics experiments and maintaining the superb cross talk that has existed between the quantitative and the qualitative sides of discoveries in particle and nuclear physics. We anticipate that Frontier will provide the compute power and, just as important, the architecture for computation we must have to do our complicated, difficult calculations.
We are approaching a revolution in how we can design and analyze materials. We can carefully characterize the electronic structure of fairly simple atoms and very simple molecules right now. But with exascale computing on Frontier, we’re trying to stretch that to molecules that consist of thousands of atoms. The more we understand about the electronic structure, the more we’re able to actually manufacture and use exotic materials for things like very small, high-tensile-strength materials, and in buildings to make them more energy efficient. At the end of the day, everything in some sense comes down to materials.
At the inception of the ECP project we asked researchers to imagine new frontiers in science and engineering enabled by exascale computing. With Frontier, we have the opportunity now to fully realize our original vision, solving grand challenge problems that lead to breakthroughs in areas of energy generation, materials design, earth and space sciences, and related fields of physics and engineering.
Free-electron X-ray laser facilities, such as the Linac Coherent Light Source (LCLS) at the SLAC National Accelerator Laboratory, produce ultrafast pulses from which scientists take stop-action pictures of moving atoms and molecules for research in physics, chemistry, and biology. For example, LCLS will be able to reconstruct biological structures in unprecedented atomic detail under physiological conditions. We foresee that access to Frontier will enable the LCLS users to achieve not only higher resolution and significantly deeper scientific insight than are possible today but also a dramatically increased image reconstruction rate for the delivery of information in minutes rather than weeks.
ExaStar is an effort to build multiphysics models of stellar explosions. We want to figure out how space and time get warped by gravitational waves, how neutrinos and other subatomic particles are formed in these explosions, and how the nuclear elements are synthesized. The large amount of very fast memory we’re going to have on Frontier is going to be a real boon to our simulations.
One of the US Department of Energy’s goals is to find ways of replacing hydrocarbons derived from fossil fuels with hydrocarbons produced from biomass. Scientists haven’t been able to model the molecular processes central to achieving this goal with the rigor needed, because these processes involve hundreds to thousands of atoms that must be modeled to chemical accuracy to pin down the mechanism for converting biomass-derived molecules into usable fuels. Combining innovations in molecular theory, modeling software, and the underlying algorithms with access to world-leading supercomputers such as Frontier is key to our project’s success.
Delivered in 2021 and open for early operations in 2022, Frontier is accelerating innovation in science and technology and maintaining US leadership in high-performance computing and artificial intelligence. Frontier users will model the entire lifespan of a nuclear reactor, uncover disease genetics, and build on recent developments in science and technology to further integrate artificial intelligence with data analytics and modeling and simulation.
The system is based on HPE Cray’s new EX architecture and Slingshot interconnect, with HPC- and AI-optimized 3rd Gen AMD EPYC CPUs and AMD Instinct MI250X accelerators.
| System Specs | Titan | Summit | Frontier |
|---|---|---|---|
| Peak Performance | 27 PF | 200 PF | 1.6 EF |
| Cabinets | 200 | 256 | 74 |
| Node | 1 AMD Opteron CPU, 1 NVIDIA K20X Kepler GPU | 2 IBM POWER9™ CPUs, 6 NVIDIA Volta GPUs | 1 HPC- and AI-optimized 3rd Gen AMD EPYC CPU, 4 purpose-built AMD Instinct MI250X GPUs |
| CPU-GPU Interconnect | PCI Gen2 | NVLink, coherent memory across the node | AMD Infinity Fabric |
| System Interconnect | Gemini | 2× Mellanox EDR 100G InfiniBand, non-blocking fat tree | Multiple Slingshot NICs providing 100 GB/s of network bandwidth, with adaptive routing, congestion management, and quality of service |
| Storage | 32 PB, 1 TB/s, Lustre filesystem | 250 PB, 2.5 TB/s, GPFS™ | 2–4× the performance and capacity of Summit’s I/O subsystem, with near-node storage like Summit’s |
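The peak-performance figures in the table give the generation-over-generation jumps directly; a minimal sketch using only the table's numbers:

```python
# Peak performance from the specs table above (PF = 1e15, EF = 1e18 FLOPS).
peak_flops = {
    "Titan": 27e15,      # 27 PF
    "Summit": 200e15,    # 200 PF
    "Frontier": 1.6e18,  # 1.6 EF
}

summit_over_titan = peak_flops["Summit"] / peak_flops["Titan"]
frontier_over_summit = peak_flops["Frontier"] / peak_flops["Summit"]

print(f"Summit vs. Titan:    {summit_over_titan:.1f}x peak")    # ~7.4x
print(f"Frontier vs. Summit: {frontier_over_summit:.1f}x peak") # 8.0x
```

Notably, the table shows Frontier delivering that roughly 8× peak jump in 74 cabinets versus Summit’s 256, a consequence of the much denser CPU–GPU nodes described above.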
Oak Ridge National Laboratory has decades of experience in delivering, operating, and conducting research on world-leading supercomputers. Since 2005, Oak Ridge National Laboratory has deployed Jaguar, Titan, and Summit, each the world’s fastest computer in its time. Frontier has leveraged ORNL’s extensive experience and expertise in GPU-accelerated computing to become the US Department of Energy’s next record-breaking supercomputer and the world’s first exascale system.
In preparation for the Frontier supercomputer, OLCF selected eight research projects to participate in the Frontier Center for Accelerated Application Readiness (CAAR) program. Through Frontier CAAR, the OLCF is partnering with application core developers, vendor partners, and OLCF staff members to optimize simulation, data-intensive, and machine learning scientific applications for exascale performance, ensuring that Frontier is able to perform large-scale science for users starting in 2022. Consisting of application core developers and staff from the OLCF, the partnership teams have received technical support from HPE Cray and AMD – Frontier’s primary vendors – and access to multiple early-generation hardware platforms prior to system deployment.