Titan is among the first major supercomputing systems to use a hybrid architecture, one that combines conventional 16-core AMD Opteron CPUs with NVIDIA Tesla K20 GPU accelerators.

The combination of CPUs and GPUs will allow Titan and future systems to overcome power and space limitations inherent in previous generations of high-performance computers.

Because they handle hundreds of calculations simultaneously, GPUs can complete many more calculations than CPUs in a given time, yet they draw only modestly more electricity. By relying on its 299,008 CPU cores to guide simulations and allowing its Tesla K20 GPUs, which are based on NVIDIA's next-generation Kepler architecture, to do the heavy lifting, Titan will be approximately ten times more powerful than its predecessor, Jaguar, while occupying the same space and drawing essentially the same level of power.
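The division of labor described above can be sketched in a few lines (the function names below are hypothetical; a real Titan code would offload the bulk arithmetic to the GPUs via CUDA, OpenACC, or a GPU library rather than plain Python):

```python
def gpu_heavy_lifting(values):
    """Stand-in for a data-parallel GPU kernel: one 'thread' per element."""
    return [v * v for v in values]

def cpu_orchestrate(timesteps, state):
    """The CPU cores guide the simulation: control flow, bookkeeping, I/O."""
    for _ in range(timesteps):
        state = gpu_heavy_lifting(state)  # the bulk arithmetic is offloaded
    return state

print(cpu_orchestrate(2, [1, 2, 3]))  # squares twice: [1, 16, 81]
```

The point of the pattern is that the serial, branchy logic stays on the CPU while the uniform, element-wise arithmetic, the vast majority of the floating-point work, runs on the accelerator.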

When complete, Titan will have a theoretical peak performance of more than 20 petaflops, or more than 20,000 trillion calculations per second. This will enable researchers across the scientific arena, from materials to climate change to astrophysics, to acquire unparalleled accuracy in their simulations and achieve research breakthroughs more rapidly than ever before.


Accelerated Computing


By pairing the CPUs and the GPU accelerators and maximizing the efficiency of applications to exploit their strengths, Titan will lead the way on the road to the exascale.

Historically, researchers moving from single-core to many-core computers had to break their calculations into smaller problems that could be parceled out separately to different processing cores, an approach referred to as parallel computing.
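That decomposition can be illustrated with a minimal sketch (this is a generic example, not code from any Titan application): a large sum is parceled out to workers and the partial results combined.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    """Each worker handles one parcel of the larger problem."""
    return sum(chunk)

def parallel_sum(data, n_workers=4):
    """Split the data into parcels, sum each in parallel, then combine."""
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

print(parallel_sum(list(range(100))))  # 4950, same answer as sum(range(100))
```

On a real system the "workers" are MPI ranks spread across compute nodes, but the shape of the problem, split, compute independently, combine, is the same.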

Titan's design recognizes that achieving even greater parallelism simply by adding hundreds of thousands of conventional computing cores has its limits, for one primary reason: CPUs are fast but very power hungry. If supercomputers are to move beyond the petascale and into the exascale, they must become more energy efficient.

Accelerators, in this case GPUs, push parallelism even further by allowing researchers to divide their larger problems into smaller parcels than is customary on systems such as Jaguar.

GPUs have many threads of execution. While each one may run slower than a traditional CPU thread, there are so many of them that much higher aggregate performance can be achieved with only a minimal increase in power consumption.
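A toy throughput model makes the trade-off concrete (the numbers below are purely illustrative, not Titan's actual specifications):

```python
# Aggregate throughput = number of threads x per-thread rate.
cpu_threads, cpu_rate = 16, 10.0     # a few fast, power-hungry threads
gpu_threads, gpu_rate = 2500, 0.5    # thousands of slower threads

cpu_total = cpu_threads * cpu_rate   # 160.0 work units per tick
gpu_total = gpu_threads * gpu_rate   # 1250.0 work units per tick

# Even though each GPU thread is 20x slower here, the aggregate wins by ~8x.
print(gpu_total / cpu_total)
```

This only pays off when the problem exposes enough independent work to keep all of those threads busy, which is exactly the application-readiness challenge discussed below.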

"You simply can't get these levels of performance, power- and cost-efficiency with conventional CPU-based architectures. Accelerated computing is the best and most realistic approach to enable exascale performance levels within the next decade."

— Steve Scott, NVIDIA chief technology officer



Titan's novel architecture alone will not change high-performance computing; it needs applications capable of exploiting its innovative computing environment.

Titan's hardware is only as good as the research that exploits it. Because of Titan's architectural evolution, preparing users is a greater challenge than it was for past upgrades to Jaguar: researchers must rethink how they structure their problems. In response, the OLCF has created the Center for Accelerated Application Readiness, or CAAR, a collaboration among application developers, Titan's manufacturer Cray, GPU manufacturer NVIDIA, and the OLCF's scientific computing experts.

CAAR has been working for nearly two years to establish best practices for code writers. The center is divided into five teams, each working with one of the OLCF's most advanced and representative applications. Essentially, these and other potential applications need to be able to keep Titan's GPUs busy.

CAAR applications include the combustion code S3D; LSMS, which studies magnetic systems; LAMMPS, a bioenergy and molecular dynamics application; Denovo, which investigates nuclear reactors; and CAM-SE, a code that explores climate change.

All of these codes will benefit greatly when able to run at 20 petaflops. For instance, S3D will move beyond modeling simple fuels to tackle complex, larger-molecule hydrocarbon fuels such as isooctane (a surrogate for gasoline) and biofuels such as ethanol and butanol, helping America to achieve greater energy efficiency through improved internal combustion engines. And CAM-SE will be able to increase the simulation speed to between one and five years per computing day. This speed increase is needed to make ultra-high-resolution, full-chemistry simulations feasible over decades and centuries and will allow researchers to quantify uncertainties by running multiple simulations.


Science On Day 1

Application Readiness


- Illuminating the role of material disorder, statistics, and fluctuations in nanoscale materials and systems.
- A molecular description of membrane fusion, one of the most common ways for molecules to enter or exit living cells.
- Understanding turbulent combustion through direct numerical simulation with complex chemistry.
- Answering questions about specific climate change adaptation and mitigation scenarios; realistically representing features such as precipitation patterns and statistics and tropical storms.
- Radiation transport, important in astrophysics, laser fusion, combustion, atmospheric dynamics, and medical imaging, computed on AMR grids.
- Discrete ordinates radiation transport calculations that can be used in a variety of nuclear energy and technology applications.

