Eos Overview

Overview

Starting March 3, 2014, the Eos system is available to all OLCF projects and is prioritized as a support resource for projects running, or preparing to run, production and leadership capability jobs on Titan. Suitable uses of Eos include tool and application porting, software verification and optimization, software generation, and small-scale jobs that verify input files, geometries, and physics parameterizations in preparation for capability jobs on Titan.

Eos is a 736-node Cray XC30 cluster with a total of 47.104 TB of memory. The processor is the Intel® Xeon® E5-2670. Eos uses Cray’s Aries interconnect in a network topology called Dragonfly. Aries provides higher bandwidth and lower latency than the Gemini interconnect used on Titan. Support for I/O on Eos is provided by (16) I/O service nodes. The system has (2) external login nodes.

Compute Partition

The compute nodes are organized in blades. Each blade contains (4) nodes connected to a single Aries interconnect. Every node has (64) GB of DDR3 SDRAM and (2) sockets with (8) physical cores each.

In total, the Eos compute partition contains 11,776 traditional processor cores (23,552 logical cores with Intel Hyper-Threading enabled), and 47.104 TB of memory.
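
These totals follow directly from the per-node figures: 736 nodes × 16 physical cores = 11,776 cores (doubled to 23,552 logical cores with Hyper-Threading), and 736 nodes × 64 GB = 47,104 GB = 47.104 TB.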

Hyper-threading

Intel’s Hyper-threading (HT) technology allows each physical core to work as two logical cores, so each node can function as if it has (32) cores. Each of the two logical cores can store a program state, but they share most of their execution resources. Each application should be tested to see how HT impacts performance. The best candidates for a performance boost with HT are codes that are heavily memory-bound. The default setting on Eos is to execute with HT enabled. Users may implicitly avoid HT by launching no more than (16) processes per node, or explicitly disable it by passing the -j1 option to aprun, as in the example below. More detailed information about HT is available on the Hyper Threading page.
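
As a sketch, the aprun lines below contrast the two modes for a single-node run of a hypothetical executable named ./my_app; the actual rank counts and options depend on the job:

  # HT in use (the default): up to 32 ranks fit on the node's logical cores
  aprun -n 32 ./my_app

  # HT explicitly disabled with -j1: at most 16 ranks, one per physical core
  aprun -n 16 -j1 ./my_app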

File Systems

The OLCF’s center-wide Lustre® file system, named Spider, is available on Eos for computational work. With over (32) PB of disk space, it is the largest-scale Lustre file system in the world. A separate, NFS-based file system provides $HOME storage areas, and an HPSS-based file system provides Eos users with archival space.

Access and Resource Accounting

The charging factor for usage is (30) core-hours per node-hour; that is, each node is charged as (30) cores for every hour a job holds it.
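
For example, a job that runs on (10) nodes for (2) hours is charged 10 × 2 × 30 = 600 core-hours.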

Programming Environment

The default compiler and programming environment on Eos are Intel. The following programming environment modules are available and can be switched with module swap, as shown in the example after the list:

  • PrgEnv-pgi
  • PrgEnv-gnu
  • PrgEnv-cray
  • PrgEnv-intel
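
As a sketch, assuming the standard Cray module environment, switching from the default Intel environment to GNU looks like the following (the hello.c source name is only a placeholder):

  # Replace the default Intel programming environment with GNU;
  # module names are those listed above
  module swap PrgEnv-intel PrgEnv-gnu

  # The Cray compiler wrappers (cc, CC, ftn) then invoke the GNU compilers
  cc -o hello hello.c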

Most of the software and libraries provided on Titan for use with CPUs are also provided on Eos. One notable difference is that Eos provides Intel’s Math Kernel Library (MKL) rather than AMD’s ACML math library.
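
As a sketch, assuming the default Intel programming environment (whose compilers accept the -mkl flag), a code can be linked against MKL through the Cray compiler wrapper; the source and executable names are placeholders:

  # Under PrgEnv-intel, -mkl asks the Intel compiler to link Intel MKL
  cc -mkl -o solver solver.c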