Rhea is a 521-node commodity-type Linux cluster. The primary purpose of Rhea is to provide a conduit for large-scale scientific discovery via pre- and post-processing and analysis of simulation data generated on Titan. Users with accounts on Titan will automatically be given access to Rhea.
Rhea contains 521 compute nodes separated into two partitions:
| Partition | Nodes | Memory per Node | GPUs per Node | CPU |
|---|---|---|---|---|
| rhea (default) | 512 | 128 GB | – | [2x] Intel® Xeon® E5-2650 @ 2.0 GHz, 8 cores, 16 HT each (16 cores, 32 HT per node) |
| gpu | 9 | 1 TB | [2x] NVIDIA® K80 | [2x] Intel® Xeon® E5-2695 @ 2.3 GHz, 14 cores, 28 HT each (28 cores, 56 HT per node) |
The first 512 nodes make up the rhea partition. Each node contains two 8-core 2.0 GHz Intel Xeon processors with Intel’s Hyper-Threading (HT) Technology and 128 GB of main memory, for a total of 16 physical cores per node. With Intel® Hyper-Threading Technology enabled, each node presents 32 logical cores capable of executing 32 hardware threads for increased parallelism.
Rhea also has nine large-memory/GPU nodes, which make up the gpu partition. Each of these nodes has 1 TB of main memory and two NVIDIA K80 GPUs in addition to two 14-core 2.3 GHz Intel Xeon processors with HT Technology, for a total of 28 physical cores per node. With Hyper-Threading enabled, these nodes present 56 logical cores that can execute 56 hardware threads for increased parallelism.
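The distinction between physical and logical cores matters when deciding how many threads or MPI ranks to place on a node. A minimal sketch of how a program can inspect what a node presents (the counts in the comments assume the node types described above; on other machines the numbers will differ):

```python
import os

# os.cpu_count() reports *logical* CPUs, so with Hyper-Threading
# enabled a rhea-partition node would report 32 and a gpu-partition
# node would report 56.
logical_cpus = os.cpu_count()
print(f"logical CPUs on this node: {logical_cpus}")

# The CPUs actually usable by this process may be a subset, e.g. if
# the batch system has restricted the process's affinity mask.
available = len(os.sched_getaffinity(0))
print(f"CPUs available to this process: {available}")
```

A common rule of thumb is to launch one compute-bound thread per physical core and reserve the extra hardware threads for lighter work, but the right choice depends on the application.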
Rhea features a 4X FDR InfiniBand interconnect: four lanes at 14 Gb/s each, for a maximum theoretical transfer rate of 56 Gb/s.
Rhea features four login nodes, which are identical to the compute nodes except that they have 64 GB of RAM. The login nodes provide an environment for editing, compiling, and launching codes onto the compute nodes. All Rhea users access the system through these same login nodes, so any CPU- or memory-intensive task run there can interrupt service to other users. As a courtesy, we ask that you refrain from running analysis or visualization tasks on the login nodes.
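In practice, work is moved off the login nodes by submitting a batch job to one of the partitions named above. The sketch below assumes a Slurm-style scheduler; the project ID (`ABC123`), walltime, and executable name are placeholders, and the actual directives for your system should be taken from its scheduler documentation:

```shell
#!/bin/bash
# Illustrative batch script only -- Slurm-style syntax assumed.
#SBATCH -A ABC123        # placeholder project allocation to charge
#SBATCH -p gpu           # request the large-memory/GPU partition
#SBATCH -N 1             # one node (1 TB RAM, two K80 GPUs)
#SBATCH -t 01:00:00      # one hour of walltime

# Compile on a login node beforehand; the batch job only runs the code
# on the compute node it is allocated.
srun ./my_analysis       # hypothetical analysis executable
```

Submitting the script (e.g. `sbatch submit.sh`) runs the analysis on a compute node, keeping the login nodes free for editing and compiling.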
The OLCF’s center-wide Lustre® file system, named Spider, is available on Rhea for computational work. With over 26,000 clients and 32 PB of disk space, it is one of the largest-scale Lustre® file systems in the world. An NFS-based file system provides User Home and Project Home storage areas. Additionally, the OLCF’s High Performance Storage System (HPSS) provides archival space.