
Login vs Compute Nodes on Commodity Clusters

Login Nodes

When you log into an OLCF cluster, you are placed on a login node. Login node resources are shared by all users of the system. Because of this, users should be mindful when performing tasks on a login node.

Login nodes should be used for basic tasks such as file editing, code compilation, data backup, and job submission. Login nodes should not be used for memory- or processor-intensive tasks. Users should also limit the number of simultaneous tasks they perform on the login resources; for example, a user should not run ten simultaneous tar processes on a login node.

Warning: Processor-intensive, memory-intensive, or otherwise disruptive processes running on login nodes may be killed without warning.
Compute Nodes

Memory- and processor-intensive tasks, as well as production work, should be performed on a cluster's compute nodes. Access to compute nodes is managed by the cluster's batch scheduling system (e.g., Torque/MOAB).
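
For illustration, a minimal Torque batch script might look like the sketch below; the project identifier (ABC123), job name, and executable are placeholders, not values taken from this guide.

    #!/bin/bash
    #PBS -A ABC123             # project allocation to charge (placeholder)
    #PBS -N example-job        # job name (placeholder)
    #PBS -l nodes=1            # request a single compute node
    #PBS -l walltime=01:00:00  # request one hour of wall time

    cd $PBS_O_WORKDIR          # run from the directory the job was submitted from
    ./my_program               # placeholder for the actual executable

The script would then be submitted from a login node with qsub, for example: qsub example-job.pbs. The scheduler, not the login node, decides when and where the job's processes actually run.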

Rhea’s compute nodes are separated into two partitions:

rhea
Jobs that do not specify a partition will run in the rhea partition.
512 nodes, each with 128 GB of memory.

gpu
To access the gpu partition, batch job submissions should request -lpartition=gpu (see the example below).
9 nodes, each with 1 TB of memory and 2 NVIDIA K80 GPUs.
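
As a sketch of how the partition request fits into a job script, the example below adds the partition resource to an otherwise ordinary batch script; again, the project identifier and executable are placeholders.

    #!/bin/bash
    #PBS -A ABC123             # project allocation to charge (placeholder)
    #PBS -l nodes=1            # one node from the requested partition
    #PBS -l walltime=01:00:00  # request one hour of wall time
    #PBS -l partition=gpu      # route the job to the gpu partition

    cd $PBS_O_WORKDIR          # run from the directory the job was submitted from
    ./my_gpu_program           # placeholder for a GPU-enabled executable

The same request can also be made on the command line instead of in the script, e.g. qsub -lpartition=gpu example-job.pbs.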