The Eos system is available to all OLCF INCITE and ALCC projects and is prioritized as a support resource for projects running, or preparing to run, production and leadership capability jobs on Titan. Suitable uses of Eos include tool and application porting, software verification and optimization, software generation, and small-scale jobs that verify input files, geometries, and physics parameterizations in preparation for capability runs on Titan.

Eos is a 736-node Cray XC30 cluster with a total of 47.104 TB of memory. The processor is the Intel® Xeon® E5-2670. Eos uses Cray’s Aries interconnect in a network topology called Dragonfly. Aries provides a higher-bandwidth, lower-latency interconnect than Gemini. I/O on Eos is supported by (16) I/O service nodes, and the system has (2) external login nodes.

Compute Nodes

The compute nodes are organized in blades. Each blade contains (4) nodes connected to a single Aries interconnect. Every node has (64) GB of DDR3 SDRAM and (2) sockets with (8) physical cores each.

In total, the Eos compute partition contains 11,776 traditional processor cores (23,552 logical cores with Intel Hyper-Threading enabled), and 47.104 TB of memory.
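The system-wide totals above follow directly from the per-node figures. A quick sketch of the arithmetic:

```shell
# Deriving Eos's compute-partition totals from the per-node figures.
nodes=736
cores_per_node=16        # 2 sockets x 8 physical cores
mem_per_node_gb=64       # DDR3 SDRAM per node

echo "$(( nodes * cores_per_node )) physical cores"      # 11,776
echo "$(( nodes * cores_per_node * 2 )) logical cores"   # 23,552 with Hyper-Threading
echo "$(( nodes * mem_per_node_gb )) GB total memory"    # 47,104 GB = 47.104 TB
```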


Intel’s Hyper-Threading (HT) technology allows each physical core to act as two logical cores, so each node can function as if it has (32) cores. Each of the two logical cores can store a program state, but they share most of their execution resources. Each application should be tested to see how HT impacts performance. The best candidates for a performance boost with HT are codes that are heavily memory-bound. The default setting on Eos is to execute with HT enabled. Users may implicitly disable HT by launching fewer than 16 threads per node, or explicitly disable it by passing the -j1 option to aprun. More detailed information about HT is available in the Hyper-Threading section of the Running Jobs page.
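As an illustration, the two launch modes might look like the following (the executable name is a placeholder; these aprun invocations only run on the Cray compute nodes):

```shell
# HT enabled (the default): up to 32 ranks per node across both logical cores.
aprun -n 32 ./my_app      # ./my_app is a hypothetical executable

# HT explicitly disabled: one rank per physical core, 16 per node.
aprun -n 16 -j1 ./my_app
```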

File Systems

The OLCF’s center-wide Lustre® file system, named Spider, is available on Eos for computational work. With over 26,000 clients and (32) PB of disk space, it is one of the largest-scale Lustre® file systems in the world. An NFS-based file system provides User Home and Project Home storage areas. Additionally, the OLCF’s High Performance Storage System (HPSS) provides archival space.

Access and Resource Accounting

Usage is charged at a rate of (30) core-hours per node-hour.
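For example, a hypothetical 100-node job that runs for 2 hours would be charged as follows (the job size and duration are illustrative, not from the source):

```shell
# Illustrative charge calculation for Eos.
nodes=100
hours=2
charge_factor=30   # core-hours charged per node-hour

echo "$(( nodes * hours * charge_factor )) core-hours charged"   # 6,000
```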

Programming Environment

The default compiler and programming environment on Eos are Intel (PrgEnv-intel). The following programming environment modules are available:

  • PrgEnv-pgi
  • PrgEnv-gnu
  • PrgEnv-cray
  • PrgEnv-intel
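A different environment can be selected with the standard Cray module-swap idiom (these commands are site-specific and only work on Eos itself):

```shell
# Switch from the default Intel environment to GNU; the compiler
# wrappers (cc, CC, ftn) then invoke the GNU toolchain.
module swap PrgEnv-intel PrgEnv-gnu
```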

Most of the software and libraries provided on Titan for use with CPUs are also provided on Eos. One notable difference is that Eos provides Intel’s MKL rather than AMD’s ACML math libraries.