Starting March 3rd, 2014, the Eos system will be available to all OLCF projects and prioritized as a support resource for projects running or preparing to run production and leadership capability jobs on Titan. Suitable uses of Eos include tool and application porting, software verification and optimization, software generation, and small-scale jobs that verify input files, geometries, and physics parameterizations in preparation for capability jobs on Titan.
Eos is a 736-node Cray XC30 cluster with a total of 47.104 TB of memory. The processor is the Intel® Xeon® E5-2670. Eos uses Cray’s Aries interconnect in a network topology called Dragonfly. Aries provides a higher-bandwidth, lower-latency interconnect than Gemini. Support for I/O on Eos is provided by (16) I/O service nodes. The system has (2) external login nodes.
The compute nodes are organized in blades. Each blade contains (4) nodes connected to a single Aries interconnect. Every node has (64) GB of DDR3 SDRAM and (2) sockets with (8) physical cores each.
In total, the Eos compute partition contains 11,776 traditional processor cores (23,552 logical cores with Intel Hyper-Threading enabled), and 47.104 TB of memory.
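The system totals above follow directly from the per-node figures; a quick arithmetic check in Python (all counts are taken from this page):

```python
# Eos compute partition totals, derived from the per-node figures above.
nodes = 736
cores_per_node = 16          # (2) sockets x (8) physical cores
memory_per_node_gb = 64      # DDR3 SDRAM per node

physical_cores = nodes * cores_per_node
logical_cores = physical_cores * 2          # Hyper-Threading doubles the logical cores
total_memory_tb = nodes * memory_per_node_gb / 1000.0

print(physical_cores)   # 11776
print(logical_cores)    # 23552
print(total_memory_tb)  # 47.104
```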
Intel’s Hyper-Threading (HT) technology allows each physical core to work as two logical cores, so each node can function as if it has (32) cores. Each of the two logical cores can store a program state, but they share most of their execution resources. Each application should be tested to see how HT impacts performance. The best candidates for a performance boost with HT are codes that are heavily memory-bound. The default setting on Eos is to execute with HT enabled. Users may implicitly disable HT by launching (16) or fewer processes per node, or explicitly disable it by passing the -j1 option to aprun. More detailed information about HT is available on the Hyper-Threading page.
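As an illustrative sketch, the following batch-script fragments show the two ways of controlling HT described above (the executable name and rank counts are hypothetical; -n and -j are standard aprun options):

```shell
# HT in use (default): 32 ranks on one node, one per logical core
aprun -n 32 ./a.out

# HT implicitly unused: 16 ranks fit on the node's 16 physical cores
aprun -n 16 ./a.out

# HT explicitly disabled: -j1 allows only one rank per physical core
aprun -n 16 -j1 ./a.out
```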
The OLCF’s center-wide Lustre® file system, named Spider, is available on Eos for computational work. With over (32) PB of disk space, it is the largest-scale Lustre file system in the world. A separate, NFS-based file system provides $HOME storage areas, and an HPSS-based file system provides users with archival spaces.
Access and Resource Accounting
The charging factor for usage is (30) core hours per node.
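A simple sketch of this charging policy in Python (the job parameters in the example are hypothetical):

```python
# Eos charging: 30 core-hours per node per wall-clock hour, per this page.
CHARGE_FACTOR = 30

def eos_charge(nodes, walltime_hours):
    """Return the core-hours charged against a project's allocation."""
    return nodes * walltime_hours * CHARGE_FACTOR

# A hypothetical 8-node job running for 2 hours:
print(eos_charge(8, 2))  # 480 core-hours
```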
The default compiler and programming environment on Eos are Intel’s. Programming environment modules for other compilers are also available on Eos.
Most of the software and libraries provided on Titan for use with CPUs are also provided on Eos. One notable difference is that Eos provides the Intel MKL rather than the AMD ACML math libraries.