
Rhea System Overview

Rhea is a (521)-node commodity-type Linux cluster. The primary purpose of Rhea is to provide a conduit for large-scale scientific discovery via pre/post processing and analysis of simulation data generated on Titan. Users with accounts on Titan will automatically be given an account on Rhea.

Compute Nodes

Rhea contains (521) Dell PowerEdge compute nodes. The compute nodes are separated into two partitions:

Partition        Node Count   Memory   GPU               CPU
rhea (default)   512          128GB    -                 dual Intel® Xeon® E5-2650 @ 2.0 GHz; (16) cores, (32) HT
gpu              9            1TB      (2) NVIDIA® K80   dual Intel® Xeon® E5-2695 @ 2.3 GHz; (28) cores, (56) HT

Both compute partitions are accessible through the same batch queue from Rhea’s login nodes.

Each CPU in the rhea partition features (8) physical cores, for a total of (16) physical cores per node. With Intel® Hyper-Threading Technology enabled, each node has (32) logical cores capable of executing (32) hardware threads for increased parallelism. In the gpu partition, each CPU features (14) physical cores, for a total of (28) physical cores per node; with Hyper-Threading enabled, these nodes have (56) logical cores that can execute (56) hardware threads. Each gpu-partition node also has 1TB of memory and (2) NVIDIA® K80 GPUs. Rhea also features a 4X FDR InfiniBand interconnect, with a maximum theoretical transfer rate of 56 Gb/s.
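As a quick check, the following Python sketch (standard library only, Linux-specific) reports the logical and physical core counts of the node it runs on; on a default-partition compute node with Hyper-Threading enabled it should report (32) logical and (16) physical cores.

# Minimal sketch: compare logical vs. physical core counts on a node.
# Uses only the Python standard library and reads /proc/cpuinfo (Linux only).
import os

def physical_core_count():
    """Count unique (physical id, core id) pairs in /proc/cpuinfo."""
    cores = set()
    phys_id = None
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("physical id"):
                phys_id = line.split(":")[1].strip()
            elif line.startswith("core id"):
                cores.add((phys_id, line.split(":")[1].strip()))
    return len(cores)

logical = os.cpu_count()          # logical cores (hardware threads)
physical = physical_core_count()  # physical cores
print(f"logical cores:  {logical}")
print(f"physical cores: {physical}")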

Login Nodes

Rhea features (4) login nodes, which are identical to the compute nodes but with 32GB of RAM. The login nodes provide an environment for editing, compiling, and launching codes onto the compute nodes. All Rhea users access the system through these same login nodes, and as such, any CPU- or memory-intensive tasks on these nodes could interrupt service to other users. As a courtesy, we ask that you refrain from running any analysis or visualization tasks on the login nodes.
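As a lightweight safeguard, a script can refuse to start heavy work when it detects that it is running on a login node. The sketch below assumes a hypothetical "rhea-login" hostname prefix purely for illustration; the actual login-node naming may differ, so adjust the prefix to match the real environment.

# Minimal sketch: guard heavy post-processing so it does not run on a login node.
# The "rhea-login" prefix is an assumed placeholder, not the documented hostname.
import socket
import sys

LOGIN_PREFIX = "rhea-login"  # hypothetical login-node hostname prefix

def refuse_login_node():
    host = socket.gethostname()
    if host.startswith(LOGIN_PREFIX):
        sys.exit(f"{host} looks like a login node; "
                 "submit this work through the batch queue instead.")

if __name__ == "__main__":
    refuse_login_node()
    # ... memory- or CPU-intensive analysis would go here ...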

File Systems

The OLCF’s center-wide Lustre® file system, named Spider, is available on Rhea for computational work. With over 26,000 clients and (32) PB of disk space, it is one of the largest-scale Lustre file system in the world. A separate, NFS-based file system provides $HOME storage areas, and an HPSS-based file system provides Rhea users with archival spaces.
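A common pattern for post-processing scripts is to keep small, long-lived files in the NFS-backed $HOME area and direct large, transient output to the Lustre scratch area. The Python sketch below illustrates this split; the scratch path and file names are placeholder assumptions, so consult the OLCF data-management documentation for the actual per-project directory layout.

# Minimal sketch: small config in $HOME (NFS), bulk output on Lustre (Spider).
# The scratch path below is an assumed placeholder, not a documented location.
import os
from pathlib import Path

home = Path(os.environ["HOME"])                               # NFS-backed home area
scratch = Path("/lustre/atlas/scratch") / os.environ["USER"]  # assumed Lustre path

config_file = home / "analysis.cfg"         # small, long-lived file
output_file = scratch / "timestep_0042.h5"  # large, transient output

print(f"reading configuration from {config_file}")
print(f"writing bulk output to     {output_file}")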