Rhea User Guide

Rhea Changes (Big Memory and GPU nodes now available)

Several changes to Rhea’s batch queue structure and hardware have been made in response to user feedback received both through the annual OLCF survey and directly.

The primary purpose of Rhea is to provide a conduit for large-scale scientific discovery via pre/post processing and analysis of simulation data generated on Titan. We want Rhea to meet your needs and fit your OLCF workflow. If Rhea’s configuration, queue structure, hardware, or any other aspect of the system hinders your work or does not fit your needs, please let us know. We can work with you to, where possible, provide exceptions, add or alter hardware, and alter the queue structure.

This section briefly describes the changes.

GPU and Big Memory Node Addition (11/20/2015)

Nine nodes each containing two NVIDIA K80 GPUs and 1TB of memory are now available on Rhea.

Access

The nodes are accessible through the queues on Rhea by specifying the ‘gpu’ partition:

#PBS -l partition=gpu
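As a hedged illustration (the project ID ABC123, job name, and executable a.out below are placeholders), a complete batch script requesting the gpu partition might look like the following:

#!/bin/bash
#PBS -A ABC123
#PBS -N gpu_test
#PBS -j oe
#PBS -l walltime=1:00:00,nodes=1
#PBS -l partition=gpu

# run from the directory the job was submitted from
cd $PBS_O_WORKDIR
./a.out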

GPU partition queue limits
Node Count   Duration     Policy
1-2 Nodes    0 – 48 hrs   max 1 job running per user

Available Memory Doubled (08/18/2015)

Memory on each of Rhea’s 512 compute nodes has been increased from 64 GB to 128 GB.

Batch Queue Changes (08/18/2015)

Rhea’s batch queue has been simplified to reduce confusion and better fit how the system has been used over the recent years. The new structure is simply:

Bin   Node Count       Duration
A     1 – 16 Nodes     0 – 48 hr
B     17 – 64 Nodes    0 – 36 hr
C     65 – 384 Nodes   0 – 3 hr

Policy: max 4 jobs running and 4 jobs eligible per user in bins A, B, and C.

The new structure was designed based on user feedback and analysis of batch jobs over the recent years. The structure will allow most users to continue running with little to no interruption. In many cases, the changes will allow a user to run more jobs through the queue. However, we understand that the structure may not meet the needs of all users. If this structure limits your use of the system, please let us know. We want Rhea to be a useful OLCF resource and will work with you providing exceptions or even changing the queue structure if necessary.

1. Rhea System Overview

(Back to Top)

Rhea is a (521)-node commodity-type Linux cluster. The primary purpose of Rhea is to provide a conduit for large-scale scientific discovery via pre/post processing and analysis of simulation data generated on Titan. Users with accounts on Titan will automatically be given an account on Rhea.

Compute Nodes
Rhea contains (521) Dell PowerEdge compute nodes. The compute nodes are separated into two partitions:
Partition        Node Count   Memory   GPU             CPU
rhea (default)   512          128GB    -               dual Intel® Xeon® E5-2650 @ 2.0 GHz, 16 cores, (32) HT
gpu              9            1TB      2 NVIDIA® K80   dual Intel® Xeon® E5-2695 @ 2.3 GHz, 28 cores, (56) HT
Both compute partitions are accessible through the same batch queue from Rhea's login nodes. Each CPU in the rhea partition features (8) physical cores, for a total of (16) physical cores per node. With Intel® Hyper-Threading Technology enabled, each node has (32) logical cores capable of executing (32) hardware threads for increased parallelism. On the gpu partition, each CPU features (14) physical cores, for a total of (28) physical cores per node. With Hyper-Threading enabled, these nodes have (56) logical cores that can execute (56) hardware threads for increased parallelism. The gpu partition also has 1TB of memory and (2) K80 GPUs per node. Rhea also features a 4X FDR Infiniband interconnect, with a maximum theoretical transfer rate of 56 Gb/s.
Login Nodes
Rhea features (4) login nodes which are identical to the compute nodes, but with 32GB of RAM. The login nodes provide an environment for editing, compiling, and launching codes onto the compute nodes. All Rhea users will access the system through these same login nodes, and as such, any CPU- or memory-intensive tasks on these nodes could interrupt service to other users. As a courtesy, we ask that you refrain from doing any analysis or visualization tasks on the login nodes.
File Systems
The OLCF's center-wide Lustre® file system, named Spider, is available on Rhea for computational work. With over 26,000 clients and (32) PB of disk space, it is one of the largest-scale Lustre file systems in the world. A separate, NFS-based file system provides $HOME storage areas, and an HPSS-based file system provides Rhea users with archival spaces.


2. Requesting Access to OLCF Resources

(Back to Top)

Access to the computational resources of the Oak Ridge Leadership Facility (OLCF) is limited to approved users via project allocations. There are different kinds of projects, and the type of project request will determine the application and review procedure. Approved projects will be granted an allocation of hours for a period of time on one or more systems. Every user account at the OLCF must be associated with at least one allocation. Once an allocation has been approved and established, users can request to be added to the project allocation so they may run jobs against it.


2.1. Project Allocation Requests

(Back to Top)

The OLCF grants (3) different types of project allocations. The type of allocation you should request depends on a few different factors. The table below outlines the types of project allocations available at the OLCF and some general policies that apply to each:

                     INCITE             Director's Discretion   ALCC
Allocations          Large              Small                   Large
Call for Proposals   Once per year      At any time             Once per year
Closeout Report      Required           Required                Required
Duration             1 year             1 year                  1 year
Job Priority         High               Medium                  High
Quarterly Reports    Required           Required                Required
                     Apply for INCITE   Apply for DD            Apply for ALCC
Project Type Details
INCITE – The Novel Computational Impact on Theory and Experiment (INCITE) program invites proposals for large-scale, computationally intensive research projects to run at the OLCF. The INCITE program awards sizeable allocations (typically, millions of processor-hours per project) on some of the world’s most powerful supercomputers to address grand challenges in science and engineering. There is an annual call for INCITE proposals and awards are made on an annual basis. For more information or to apply for an INCITE project, please visit the DOE INCITE page.

ALCC – The ASCR Leadership Computing Challenge (ALCC) is open to scientists from the research community in national laboratories, academia, and industry. The ALCC program allocates computational resources at the OLCF for special situations of interest to the Department, with an emphasis on high-risk, high-payoff simulations directly related to the Department’s energy mission, such as advancing the clean energy agenda and understanding the Earth’s climate, for national emergencies, or for broadening the community of researchers capable of using leadership computing resources. For more information or to submit a proposal, please visit the DOE ALCC page.

DD – Director’s Discretion (DD) projects are dedicated to leadership computing preparation, INCITE and ALCC scaling, and application performance to maximize scientific application efficiency and productivity on leadership computing platforms. The OLCF Resource Utilization Council, as well as independent referees, review and approve all DD requests. Applications are accepted year round via the OLCF Director's Discretion Project Application page.
After Project Approval
Once a project is approved, an OLCF Accounts Manager will notify the PI, outlining the steps (listed below) necessary to create the project. If you have any questions, please feel free to contact the OLCF Accounts Team at accounts@ccs.ornl.gov.
Steps for Activating a Project Once the Allocation is Approved
  1. A signed Principal Investigator (PI) Agreement must be submitted with the project application.
  2. Export Control: The project request will be reviewed by ORNL Export Control to determine whether sensitive or proprietary data will be generated or used. The results of this review will be forwarded to the PI. If the project request is deemed sensitive and/or proprietary, the OLCF Security Team will schedule a conference call with the PI to discuss the data protection needs.
  3. ORNL Personnel Access System (PAS): All PIs are required to be entered into the ORNL PAS system. An OLCF Accounts Manager will send the PI a PAS invitation to submit all the pertinent information. Please note that processing a PAS request may take 15 or more days.
  4. User Agreement/Appendix A or Subcontract: A User Agreement/Appendix A or Subcontract must be executed between UT-Battelle and the PI’s institution. If our records indicate this requirement has not been met, all necessary documents will be provided to the applicant by an OLCF Accounts Manager.
Upon completion of the above steps, the PI will be notified that the project has been created and provided with the Project ID and system allocation. At this time, project participants may apply for an account via the OLCF User Account Application page.


2.2. User Account Requests

(Back to Top)

Users can apply for an account on existing projects. There are several steps in applying for an account; OLCF User Assistance can help you through the process. If you have any questions, please feel free to contact the Accounts Team at accounts@ccs.ornl.gov.

Steps to Obtain a User Account
  1. Apply for an account using the Account Request Form.
  2. The principal investigator (PI) of the project must approve your account and system access. The Accounts Team will contact the PI for this approval.
  3. If you have or will receive an RSA SecurID from our facility, additional paperwork will be sent to you via email to complete for identity proofing.
  4. Foreign national participants will be sent an Oak Ridge National Lab (ORNL) Personnel Access System (PAS) request specific for the facility and cyber-only access. After receiving your response, it takes between (2) to (5) weeks for approval.
  5. A fully executed User Agreement is required with each institution that has participants on the project. If our records indicate your institution needs to sign a User Agreement and/or Appendix A, the form(s), along with instructions, will be sent via email.
  6. If you are processing sensitive or proprietary data, additional paperwork is required and will be sent to you.
Your account will be created and you will be notified via email when all of the steps above are complete. To begin the process, visit the OLCF User Account Application page.


3. OLCF Help and Policies

(Back to Top)

The OLCF provides many tools to assist users, including direct hands-on assistance by trained consultants. Means of assistance at the OLCF include:

  • The OLCF User Assistance Center (UAC), where consultants answer your questions directly via email or phone.
  • Various OLCF communications, which provide status updates of relevance to end-users.
  • The My OLCF site, which provides a mechanism for viewing project allocation reports.
  • The OLCF Policy Guide, which details accepted use of our computational resources.
  • Upcoming and historical OLCF Training Events, both in-person and web-based, that cover topics of interest to end-users.


3.1. User Assistance Center

(Back to Top)

The OLCF User Assistance Center (UAC) provides direct support to users of our computational resources.

Hours
The center’s normal support hours are 9 a.m. to 5 p.m. (Eastern time), Monday through Friday, exclusive of holidays.
Contact Us
Email help@olcf.ornl.gov
Phone: 865-241-6536
Fax: 865-241-4011
Address: 1 Bethel Valley Road, Oak Ridge, TN 37831
The OLCF UAC is located at the Oak Ridge National Laboratory (ORNL) in Building 5600, Room C103.
After Hours
Outside of normal business hours, calls are directed to the ORNL Computer Operations staff. If you require immediate assistance, you may contact them at the phone number listed above. If your request is not urgent, you may send an email to help@olcf.ornl.gov, where it will be answered by an OLCF User Assistance member the next business day.
Ticket Submission Webform
In lieu of sending email, you can also use the Ticket Submission Web Form to submit a request directly to OLCF User Assistance.


3.2. Communications to Users

(Back to Top)

The OLCF provides users with several ways of staying informed.

OLCF Announcements Mailing Lists
These mailing lists provide users with email messages of general interest (system upgrades, long-term outages, etc.). Since the mailing frequency is low and the information sent is important to all users, users are automatically subscribed to these lists as applicable when an account is set up.
OLCF "Notice" Mailing Lists
The OLCF also utilizes high volume mail lists to automatically announce system state changes as well as other notable system events. Users who are actively using a system are automatically added to a system's mail list. When a system changes state (up to down or down to up), an automated email is sent to members of the system's notice list. We also send additional notable issues and time sensitive events to the list.
Available Lists
titan-notice, rhea-notice, eos-notice, spider-notice
Users can request to be permanently added or removed from a list by contacting the OLCF User Assistance Center.
Weekly Update
Each week, typically on Friday afternoon, an email announcing the next week’s scheduled outages is sent to all users. This message also includes meeting announcements and other items of interest to all OLCF users. If you are an OLCF user but are not receiving this weekly message, please contact the OLCF User Assistance Center.
System Status Pages
The OLCF Main Support page shows the current up/down status of selected OLCF systems at the top.
Twitter
The OLCF posts messages of interest on the OLCF Twitter Feed. We also post tweets specific to system outages on the OLCF Status Twitter Feed.
Message of the Day
In addition to other methods of notification, the system "Message of the Day" (MOTD) that is echoed upon login shows recent system outages. Important announcements are also posted to the MOTD. Users are encouraged to take a look at the MOTD upon login to see if there are any important notices.


3.3. My OLCF Site

(Back to Top)

To assist users in managing project allocations, we provide end-users with My OLCF, a web application with valuable information about OLCF projects and allocations on a per-user basis. Users must log in to the site with their OLCF username and SecurID fob at https://users.nccs.gov. Detailed metrics for users and projects can be found in each project's usage section:

  • YTD usage by system, subproject, and project member
  • Monthly usage by system, subproject, and project member
  • YTD usage by job size groupings for each system, subproject, and project member
  • Weekly usage by job size groupings for each system, and subproject
  • Batch system priorities by project and subproject
  • Project members


3.4. Special Requests and Policy Exemptions

(Back to Top)

Users can request policy exemptions by submitting the appropriate web form available on the OLCF Documents and Forms page. Special request forms allow a user to:

  • Request Software installations
  • Request relaxed queue limits for a job
  • Request a system reservation
  • Request a disk quota increase
  • Request a User Work area purge exemption
Special requests are reviewed weekly and approved or denied by management via the OLCF Resource Utilization Council.


3.5. OLCF Acknowledgement

(Back to Top)

Users should acknowledge the OLCF in all publications and presentations that speak to work performed on OLCF resources:

This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.


4. Accessing OLCF Systems

(Back to Top)

This section covers the basic procedures for accessing OLCF computational resources. To avoid risks associated with using plain-text communication, the only supported remote client on OLCF systems is a secure shell (SSH) client, which encrypts the entire session between OLCF systems and the client system.

Note: To access OLCF systems, your SSH client must support SSH protocol version 2 (this is common) and allow keyboard-interactive authentication.
For UNIX-based SSH clients, the following line should be in either the default ssh_config file or your $HOME/.ssh/config file:
PreferredAuthentications keyboard-interactive,password
The line may also contain other authentication methods, but keyboard-interactive must be included. SSH clients are also available for Windows-based systems, such as SecureCRT published by Van Dyke Software. For recent SecureCRT versions, the preferred authentications change above can be made through the "connection properties" menu.
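For example, a minimal $HOME/.ssh/config entry along these lines applies the setting to all OLCF hosts (the Host pattern is illustrative; adjust it as needed):

Host *.ccs.ornl.gov
    PreferredAuthentications keyboard-interactive,password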


4.1. OLCF System Hostnames

(Back to Top)

Each OLCF system has a single, designated hostname for general user-initiated connections. Sometimes this is a load-balancing mechanism that will send users to other hosts as needed. In any case, the designated OLCF hostnames for general user connections are as follows:

System Name           Hostname                RSA Key Fingerprint
Titan                 titan.ccs.ornl.gov      77:dd:c9:2c:65:2f:c3:89:d6:24:a6:57:26:b5:9b:b7
Rhea                  rhea.ccs.ornl.gov       9a:72:79:cf:9e:47:33:d1:91:dd:4d:4e:e4:de:25:33
Eos                   eos.ccs.ornl.gov        e3:ae:eb:12:0d:b1:4c:0b:6e:53:40:5c:e7:8a:0d:19
Everest               everest.ccs.ornl.gov    cc:6e:ef:84:7e:7c:dc:72:71:7b:76:7f:f3:46:57:2b
Sith                  sith.ccs.ornl.gov       28:63:5e:41:32:39:c2:ec:9b:63:e0:86:16:2f:e4:bd
Data Transfer Nodes   dtn.ccs.ornl.gov        b3:31:ac:44:83:2b:ce:37:cc:23:f4:be:7a:40:83:85
Home (machine)        home.ccs.ornl.gov       ba:12:46:8d:23:e7:4d:37:92:39:94:82:91:ea:3d:e9
For example, to connect to Titan from a UNIX-based system, use the following:
$ ssh userid@titan.ccs.ornl.gov


4.2. General-Purpose Systems

(Back to Top)

After a user account has been approved and created, the requesting user will be sent an email listing the system(s) to which the user requested and has been granted access. In addition to the system(s) listed in the email, all users also have access to the following general-purpose systems:

home.ccs.ornl.gov
Home is a general purpose system that can be used to log into other OLCF systems that are not directly accessible from outside the OLCF network. For example, running the screen or tmux utility is one common use of Home. Compiling, data transfer, or executing long-running or memory-intensive tasks should never be performed on Home. More information can be found on The Home Login Host page.
dtn.ccs.ornl.gov
The Data Transfer Nodes are hosts specifically designed to provide optimized data transfer between OLCF systems and systems outside of the OLCF network. More information can be found on the Employing Data Transfer Nodes page.
HPSS
The High Performance Storage System (HPSS) provides tape storage for large amounts of data created on OLCF systems. The HPSS can be accessed from any OLCF system through the hsi utility. More information can be found on the HPSS page.


4.3. X11 Forwarding

(Back to Top)

Automatic forwarding of the X11 display to a remote computer is possible with the use of SSH and a local X server. To set up automatic X11 forwarding within SSH, you can do (1) of the following:

  • Invoke ssh on the command line with:
    $ ssh -X hostname
    Note that use of the -x option (lowercase) will disable X11 forwarding.
  • Edit (or create) your $HOME/.ssh/config file to include the following line:
    ForwardX11 yes
All X11 data will go through an encrypted channel. The $DISPLAY environment variable set by SSH will point to the remote machine with a port number greater than zero. This is normal, and happens because SSH creates a proxy X server on the remote machine for forwarding the connections over an encrypted channel. The connection to the real X server will be made from the local machine.
Warning: Users should not manually set the $DISPLAY environment variable for X11 forwarding; a non-encrypted channel may be used in this case.
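As a quick check after logging in with -X (a sketch; it assumes a simple X client such as xclock is installed on the remote system, and the exact $DISPLAY value will vary):

$ echo $DISPLAY
localhost:10.0
$ xclock &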


4.4. RSA Key Fingerprints

(Back to Top)

Occasionally, you may receive an error message upon logging in to a system such as the following:

@@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
This can be a result of normal system maintenance that results in a changed RSA public key, or could be an actual security incident. If the RSA fingerprint displayed by your SSH client does not match the OLCF-authorized RSA fingerprint for the machine you are accessing, do not continue authentication; instead, contact help@olcf.ornl.gov.


4.5. Authenticating to OLCF Systems

(Back to Top)

All OLCF systems currently employ two-factor authentication only. To login to OLCF systems, an RSA SecurID® key fob is required.

Activating a new SecurID® fob
  1. Initiate an SSH connection to username@home.ccs.ornl.gov.
  2. When prompted for a PASSCODE, enter the 6-digit code shown on the fob.
  3. You will be asked if you are ready to set your PIN. Answer with "Y".
  4. You will be prompted to enter a PIN. Enter a (4) to (6) digit number you can remember. You will then be prompted to re-enter your PIN.
  5. You will then be prompted to wait until the next code appears on your fob and to enter your PASSCODE. When the (6) digits on your fob change, enter your PIN digits followed immediately by the new (6) digits displayed on your fob. Note that any set of (6) digits on the fob can only be "used" once.
  6. Your PIN is now set, and your fob is activated and ready for use.
Using a SecurID® fob
When prompted for your PASSCODE, enter your PIN digits followed immediately by the (6) digits shown on your SecurID® fob. For example, if your PIN is 1234 and the (6) digits on the fob are 000987, enter 1234000987 when you are prompted for a PASSCODE.
Warning: The 6-digit code displayed on the SecurID fob can only be used once. If prompted for multiple PASSCODE entries, always allow the 6-digit code to change between entries. Re-using the 6-digit code can cause your account to be automatically disabled.


5. Data Management

(Back to Top)

OLCF users have many options for data storage. Each user has a series of user-affiliated storage spaces, and each project has a series of project-affiliated storage spaces where data can be shared for collaboration. The storage areas are mounted across all OLCF systems, making your data available to you from multiple locations.

A Storage Area for Every Activity
The storage area to use in any given situation depends upon the activity you wish to carry out. Each User has a User Home area on a Network File System (NFS) and a User Archive area on the archival High Performance Storage System (HPSS). User storage areas are intended to house user-specific files. Individual Projects have a Project Home area on NFS, multiple Project Work areas on Lustre, and a Project Archive area on HPSS. Project storage areas are intended to house project-centric files.
Simple Guidelines
The following sections contain a description of all available storage areas and relevant details for each. If you're the impatient type, you can probably get right to work by adhering to the following simple guidelines:
  • Long-term data for routine access that is unrelated to a project: use User Home, at $HOME
  • Long-term data for archival access that is unrelated to a project: use User Archive, at /home/$USER
  • Long-term project data for routine access that's shared with other project members: use Project Home, at /ccs/proj/[projid]
  • Short-term project data for fast, batch-job access that you don't want to share: use Member Work, at $MEMBERWORK/[projid]
  • Short-term project data for fast, batch-job access that's shared with other project members: use Project Work, at $PROJWORK/[projid]
  • Short-term project data for fast, batch-job access that's shared with those outside your project: use World Work, at $WORLDWORK/[projid]
  • Long-term project data for archival access that's shared with other project members: use Project Archive, at /proj/[projid]


5.1. User-Centric Data Storage

(Back to Top)

Users are provided with several storage areas, each of which serve different purposes. These areas are intended for storage of data for a particular user and not for storage of project data. The following table summarizes user-centric storage areas available on OLCF resources and lists relevant polices.

User-Centric Storage Areas
Area           Path          Type   Permissions       Quota      Backups   Purged   Retention
User Home      $HOME         NFS    User-controlled   10 GB      Yes       No       90 days
User Archive   /home/$USER   HPSS   User-controlled   2 TB [1]   No        No       90 days
[1] In addition, there is a quota/limit of 2,000 files on this directory.


5.1.1. User Home Directories (NFS)

(Back to Top)

Each user is provided a home directory to store frequently used items such as source code, binaries, and scripts.

User Home Path
Home directories are located in a Network File Service (NFS) that is accessible from all OLCF resources as /ccs/home/$USER. The environment variable $HOME will always point to your current home directory. It is recommended, where possible, that you use this variable to reference your home directory. In cases in which using $HOME is not feasible, it is recommended that you use /ccs/home/$USER. Users should note that since this is an NFS-mounted filesystem, its performance will not be as high as other filesystems.
User Home Quotas
Quotas are enforced on user home directories. To request an increased quota, contact the OLCF User Assistance Center. To view your current quota and usage, use the quota command:
$ quota -Qs
Disk quotas for user usrid (uid 12345):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
nccsfiler1a.ccs.ornl.gov:/vol/home
                  4858M   5000M   5000M           29379   4295m   4295m
User Home Backups
If you accidentally delete files from your home directory, you may be able to retrieve them. Online backups are performed at regular intervals. Hourly backups for the past 24 hours, daily backups for the last 7 days, and 1 weekly backup are available. It is possible that the deleted files are available in one of those backups. The backup directories are named hourly.*, daily.* , and weekly.* where * is the date/time stamp of the backup. For example, hourly.2016-12-01-0905 is an hourly backup made on December 1, 2016 at 9:05 AM. The backups are accessed via the .snapshot subdirectory. Note that if you do an ls (even with the -a option) of any directory you won’t see a .snapshot subdirectory, but you’ll be able to do “ls .snapshot” nonetheless. This will show you the hourly/daily/weekly backups available. The .snapshot feature is available in any subdirectory of your home directory and will show the online backup of that subdirectory. In other words, you don’t have to start at /ccs/home/$USER and navigate the full directory structure; if you’re in a /ccs/home subdirectory several “levels” deep, an “ls .snapshot” will access the available backups of that subdirectory.
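For example, to restore a deleted file from one of these backups (the snapshot name is the one used above; the directory and file names are placeholders):

$ cd /ccs/home/$USER/mydir
$ ls .snapshot
$ cp .snapshot/hourly.2016-12-01-0905/lost_file.c .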
User Home Permissions
The default permissions for user home directories are 0750 (full access to the user, read and execute for the group). Users have the ability to change permissions on their home directories, although it is recommended that permissions be set to as restrictive as possible (without interfering with your work).
Special User Website Directory
User Home spaces may contain a directory named /www. If this directory exists, and if appropriate permissions exist, files in that directory will be accessible via the World Wide Web at http://users.nccs.gov/~user (where user is your userid).


5.1.2. User Archive Directories (HPSS)

(Back to Top)

Users are also provided with user-centric archival space on the High Performance Storage System (HPSS). User archive areas on HPSS are intended for storage of data not immediately needed in either User Home directories (NFS) or User Work directories (Lustre®). User Archive areas also serve as a location for users to store backup copies of user files. User Archive directories should not be used to store project-related data. Rather, Project Archive directories should be used for project data.

User Archive Path
User archive directories are located at /home/$USER.
User Archive Access
User archive directories may be accessed only via specialized tools called HSI and HTAR. For more information on using HSI or HTAR, see the HSI and HTAR page.
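A brief sketch of common usage (file names are placeholders; see the HSI and HTAR page for full syntax):

$ hsi put backup.tar : /home/$USER/backup.tar    # store a local file in User Archive
$ hsi get /home/$USER/backup.tar                 # retrieve it into the current directory
$ htar -cvf /home/$USER/src_backup.tar ./src     # bundle a directory into a tar archive on HPSS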
User Archive Accounting
Each file and directory on HPSS is associated with an HPSS storage allocation. For information on storage allocation, please visit the Understanding HPSS Storage Allocations page.


5.2. Project-Centric Data Storage

(Back to Top)

Projects are provided with several storage areas for the data they need. Project directories provide members of a project with a common place to store code, data files, documentation, and other files related to their project. While this information could be stored in one or more user directories, storing in a project directory provides a common location to gather all files. The following table summarizes project-centric storage areas available on OLCF resources and lists relevant policies.

Project-Centric Storage Areas
Area              Path                   Type      Permissions   Quota        Backups   Purged    Retention
Project Home      /ccs/proj/[projid]     NFS       770           50 GB        Yes       No        90 days
Member Work       $MEMBERWORK/[projid]   Lustre®   700 [1]       10 TB        No        14 days   14 days
Project Work      $PROJWORK/[projid]     Lustre®   770           100 TB       No        90 days   90 days
World Work        $WORLDWORK/[projid]    Lustre®   775           10 TB        No        90 days   90 days
Project Archive   /proj/[projid]         HPSS      770           100 TB [2]   No        No        90 days
Important! Files within "Work" directories (i.e., Member Work, Project Work, World Work) are not backed up and are purged on a regular basis according to the timeframes listed above.

[1] Permissions on Member Work directories can be controlled to an extent by project members. By default, only the project member has any accesses, but accesses can be granted to other project members by setting group permissions accordingly on the Member Work directory. The parent directory of the Member Work directory prevents accesses by "UNIX-others" and cannot be changed (security measures).

[2] In addition, there is a quota/limit of 100,000 files on this directory.


5.2.1. Project Home Directories (NFS)

(Back to Top)

Projects are provided with a Project Home storage area in the Network File Service (NFS) mounted filesystem. This area is intended for storage of data, code, and other files that are of interest to all members of a project. Since Project Home is an NFS-mounted filesystem, its performance will not be as high as other filesystems.

Project Home Path
Project Home area is accessible at /ccs/proj/abc123 (where abc123 is your project ID).
Project Home Quotas
To check your project's current usage, run df -h /ccs/proj/abc123 (where abc123 is your project ID). Quotas are enforced on project home directories. The current limit is shown on the Storage Policy page. To request an increased quota, contact the User Assistance Center.
Project Home Backups
If you accidentally delete files from your project home directory, you may be able to retrieve them. Online backups are performed at regular intervals. Hourly backups for the past 24 hours, daily backups for the last 7 days, and 1 weekly backup are available. It is possible that the deleted files are available in one of those backups. The backup directories are named hourly.*, daily.* , and weekly.* where * is the date/time stamp of the backup. For example, hourly.2016-12-01-0905 is an hourly backup made on December 1, 2016 at 9:05 AM. The backups are accessed via the .snapshot subdirectory. Note that if you do an ls (even with the -a option) of any directory you won’t see a .snapshot subdirectory, but you’ll be able to do “ls .snapshot” nonetheless. This will show you the hourly/daily/weekly backups available. The .snapshot feature is available in any subdirectory of your project home directory and will show the online backup of that subdirectory. In other words, you don’t have to start at /ccs/proj/abc123 and navigate the full directory structure; if you’re in a /ccs/proj subdirectory several “levels” deep, an “ls .snapshot” will access the available backups of that subdirectory.
Project Home Permissions
The default permissions for project home directories are 0770 (full access to the user and group). The directory is owned by root and the group is the project's group. All members of a project should also be members of that group-specific project. For example, all members of project "ABC123" should be members of the "abc123" UNIX group.


5.2.2. Project-Centric Work Directories

(Back to Top)

To provide projects and project members with high-performance storage areas that are accessible to batch jobs, projects are given (3) distinct project-centric work (i.e., scratch) storage areas within Spider, the OLCF's center-wide Lustre® filesystem.

Three Project Work Areas to Facilitate Collaboration
To facilitate collaboration among researchers, the OLCF provides (3) distinct types of project-centric work storage areas: Member Work directories, Project Work directories, and World Work directories. Each directory should be used for storing files generated by computationally-intensive HPC jobs related to a project. The difference between the three lies in the accessibility of the data to project members and to researchers outside of the project. Member Work directories are accessible only by an individual project member by default. Project Work directories are accessible by all project members. World Work directories are readable by any user on the system.
Paths
Paths to the various project-centric work storage areas are simplified by the use of environment variables that point to the proper directory on a per-user basis:
  • Member Work Directory: $MEMBERWORK/[projid]
  • Project Work Directory: $PROJWORK/[projid]
  • World Work Directory: $WORLDWORK/[projid]
Environment variables provide operational staff (aka "us") flexibility in the exact implementation of underlying directory paths, and provide researchers (aka "you") with consistency over the long-term. For these reasons, we highly recommend the use of these environment variables for all scripted commands involving work directories.
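For example (abc123 is a placeholder project ID):

$ cd $MEMBERWORK/abc123
$ cp $PROJWORK/abc123/shared_input.dat .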
Permissions
UNIX Permissions on each project-centric work storage area differ according to the area's intended collaborative use. Under this setup, the process of sharing data with other researchers amounts to simply ensuring that the data resides in the proper work directory.
  • Member Work Directory: 700
  • Project Work Directory: 770
  • World Work Directory: 775
For example, if you have data that must be restricted only to yourself, keep them in your Member Work directory for that project (and leave the default permissions unchanged). If you have data that you intend to share with researchers within your project, keep them in the project's Project Work directory. If you have data that you intend to share with researchers outside of a project, keep them in the project's World Work directory.
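As a sketch (the subdirectory name is illustrative), group access can be opened on a Member Work directory so that other project members can reach shared files:

$ chmod g+rX $MEMBERWORK/abc123
$ chmod -R g+rX $MEMBERWORK/abc123/shared_results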
Quotas
Soft quotas are enforced on project-centric work directories. The current limit is shown on the Storage Policy page. To request an increased quota, contact the User Assistance Center.
Backups
Member Work, Project Work, and World Work directories are not backed up. Project members are responsible for backing up these files, either to Project Archive areas (HPSS) or to an off-site location.


5.2.3. Project Archive Directories (HPSS)

(Back to Top)

Projects are also allocated project-specific archival space on the High Performance Storage System (HPSS). The default quota is shown on the Storage Policy page. If a higher quota is needed, contact the User Assistance Center. The Project Archive space on HPSS is intended for storage of data not immediately needed in either Project Home (NFS) areas or Project Work (Lustre®) areas, and to serve as a location to store backup copies of project-related files.

Project Archive Path

The project archive directories are located at /proj/pjt000 (where pjt000 is your Project ID).

Project Archive Access

Project Archive directories may only be accessed via utilities called HSI and HTAR. For more information on using HSI or HTAR, see the HSI and HTAR page.

Project Archive Accounting

Each file and directory on HPSS is associated with an HPSS storage allocation. For information on HPSS storage allocations, please visit the Understanding HPSS Storage Allocations page.


5.3. Data Transfer with Rhea

(Back to Top)

Data Transfer Methods for Rhea

Rhea mounts the User Home and Spider (Atlas) center-wide shared filesystems. Data transferred to these filesystems by any means will be available on Rhea. For large transfers, it is strongly recommended that you use Globus GridFTP, scp, or bbcp from a Data Transfer Node.

Using SCP

The scp utility is useful when transferring small amounts of data. The following examples show how to transfer a file named data.tar to a Rhea-mounted filesystem.

The following scp command was executed from a non-OLCF resource to copy a file to the $MEMBERWORK directory on Atlas:

$ scp /path/to/data.tar [userid]@dtn.ccs.ornl.gov:/lustre/atlas/scratch/[userid]/[projid]/data-on-Atlas.tar
where [userid] and [projid] are replaced with your OLCF username and project. The directories under $PROJWORK and $WORLDWORK can be written to using their fully-expanded paths /lustre/atlas/proj-shared and /lustre/atlas/world-shared respectively. More information on the scp utility can be found at the SFTP/SCP page.
Using CP

For User Home areas, Rhea mounts the same $HOME directories that the other OLCF systems mount, which means that the files you see in /ccs/home/$USER on Titan or the Data Transfer Nodes are also present on Rhea at the same location.

For your work directories, a change is needed to run on Rhea. Let's assume that your workflow for running your application involves copying files from your User Home area (i.e. /ccs/home/$USER) to your Atlas Work area (i.e. $MEMBERWORK). To do this on Rhea, you would need to copy those files to, for instance, your Member Work area at $MEMBERWORK/[projid]/subpath, where [projid] is the identifier for one of the projects associated with your OLCF account.
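For example (abc123, the subdirectory, and the file name are placeholders):

$ mkdir -p $MEMBERWORK/abc123/run01
$ cp /ccs/home/$USER/input_deck.in $MEMBERWORK/abc123/run01/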

Parallel File Transfer

Large data transfers will achieve the best performance under a parallel file copy utility such as Globus GridFTP or bbcp.

Using HSI

The HPSS archival filesystem can be used to get and put files on Atlas. This is only an efficient option if the file you want to transfer is already on the HPSS. To move the file Initial_conditions.tar from HPSS to Rhea, assuming that the current working directory is your Member Work area:

$ hsi get Initial_conditions.tar

More information on using hsi can be found on the HSI page.


5.4. Data Management Policy Summary

(Back to Top)

Users must agree to the full Data Management Policy as part of their account application. The "Data Retention, Purge, & Quotas" section is useful and is summarized below.

Data Retention, Purge, & Quota Summary
User-Centric Storage Areas
Area           Path          Type   Permissions       Quota      Backups   Purged   Retention
User Home      $HOME         NFS    User-controlled   10 GB      Yes       No       90 days
User Archive   /home/$USER   HPSS   User-controlled   2 TB [1]   No        No       90 days
Project-Centric Storage Areas
Area              Path                   Type      Permissions   Quota        Backups   Purged    Retention
Project Home      /ccs/proj/[projid]     NFS       770           50 GB        Yes       No        90 days
Member Work       $MEMBERWORK/[projid]   Lustre®   700 [2]       10 TB        No        14 days   14 days
Project Work      $PROJWORK/[projid]     Lustre®   770           100 TB       No        90 days   90 days
World Work        $WORLDWORK/[projid]    Lustre®   775           10 TB        No        90 days   90 days
Project Archive   /proj/[projid]         HPSS      770           100 TB [3]   No        No        90 days
Area: The general name of the storage area.
Path: The path (symlink) to the storage area's directory.
Type: The underlying software technology supporting the storage area.
Permissions: UNIX permissions enforced on the storage area's top-level directory.
Quota: The limits placed on the total number of bytes and/or files in the storage area.
Backups: States if the data is automatically duplicated for disaster recovery purposes.
Purged: Period of time, post-file-creation, after which a file will be marked as eligible for permanent deletion.
Retention: Period of time, post-account-deactivation or post-project-end, after which data will be marked as eligible for permanent deletion.
Important! Files within "Work" directories (i.e., Member Work, Project Work, World Work) are not backed up and are purged on a regular basis according to the timeframes listed above.

[1] In addition, there is a quota/limit of 2,000 files on this directory.

[2] Permissions on Member Work directories can be controlled to an extent by project members. By default, only the project member has any accesses, but accesses can be granted to other project members by setting group permissions accordingly on the Member Work directory. The parent directory of the Member Work directory prevents accesses by "UNIX-others" and cannot be changed (security measures).

[3] In addition, there is a quota/limit of 100,000 files on this directory.


6. Software and Shell Environments

(Back to Top)

The OLCF provides hundreds of pre-installed software packages and scientific libraries for your use, in addition to taking software requests. Due to the large number of software packages and versions on OLCF resources, environment management tools are needed to handle changes to your shell environment. This chapter discusses how to manage your shell and software environment on OLCF systems.


6.1. Default Shell

(Back to Top)

Users request their preferred shell on their initial user account request form. The default shell is enforced across all OLCF resources. The OLCF currently supports the following shells:

  • bash
  • tcsh
  • csh
  • ksh
Please contact the OLCF User Assistance Center to request a different default shell.


6.2. Using Modules

(Back to Top)

The modules software package allows you to dynamically modify your user environment by using pre-written modulefiles.

Modules Overview
Each modulefile contains the information needed to configure the shell for an application. After the modules software package is initialized, the environment can be modified on a per-module basis using the module command, which interprets a modulefile. Typically, a modulefile instructs the module command to alter or set shell environment variables such as PATH or MANPATH. Modulefiles can be shared by many users on a system, and users can have their own personal collection to supplement and/or replace the shared modulefiles. As a user, you can add and remove modulefiles from your current shell environment. The environment changes performed by a modulefile can be viewed by using the module command as well. More information on modules can be found by running man module on OLCF systems.
Summary of Module Commands
Command Description
module list Lists modules currently loaded in a user’s environment
module avail Lists all available modules on a system in condensed format
module avail -l Lists all available modules on a system in long format
module display Shows environment changes that will be made by loading a given module
module load Loads a module
module unload Unloads a module
module help Shows help for a module
module swap Swaps a currently loaded module for an unloaded module
Re-initializing the Module Command
Modules software functionality is highly dependent upon the shell environment being used. Sometimes when switching between shells, modules must be re-initialized. For example, you might see an error such as the following:
$ module list
-bash: module: command not found
To fix this, just re-initialize your modules environment:
$ source $MODULESHOME/init/myshell
Where myshell is the name of the shell you are using and need to re-initialize.
Examples of Module Use
To show all available modules on a system:
$ module avail   
------------ /opt/cray/modulefiles ------------
atp/1.3.0                          netcdf/4.1.3                       tpsl/1.0.01
atp/1.4.0(default)                 netcdf-hdf5parallel/4.1.2(default) tpsl/1.1.01(default)
atp/1.4.1                          netcdf-hdf5parallel/4.1.3          trilinos/10.6.4.0(default)
...
To search for availability of a module by name:
$ module avail -l netcdf
- Package -----------------------------+- Versions -+- Last mod. ------
/opt/modulefiles:
netcdf/3.6.2                                         2009/09/29 16:38:25
/sw/xk6/modulefiles:
netcdf/3.6.2                                         2011/12/09 18:07:31
netcdf/4.1.3                              default    2011/12/12 20:43:37
...
To show the modulefiles currently in use (loaded) by the user:
$ module list
Currently Loaded Modulefiles:
  1) modules/3.2.6.6                           12) pmi/3.0.0-1.0000.8661.28.2807.gem
  2) xe-sysroot/4.0.30.securitypatch.20110928  13) ugni/2.3-1.0400.3912.4.29.gem
  3) xtpe-network-gemini                       14) udreg/2.3.1-1.0400.3911.5.6.gem
To show detailed help info on a modulefile:
$ module help netcdf/4.1.3 
------------ Module Specific Help for 'netcdf/4.1.3' ------------
Purpose:
  New version of hdf5 1.8.7 and netcdf 4.1.3
Product and OS Dependencies:
  hdf5_netcdf 2.1 requires SLES 11 systems and was tested on Cray XE and
...
To show what a modulefile will do to the shell environment if loaded:
$ module display netcdf/4.1.3
------------
/opt/cray/modulefiles/netcdf/4.1.3:
setenv           CRAY_NETCDF_VERSION 4.1.3 
prepend-path     PATH /opt/cray/netcdf/4.1.3/gnu/45/bin 
...
To load or unload a modulefile
$ module load netcdf/4.1.3
$ module unload netcdf/4.1.3
To unload a modulefile and load a different one:
$ module swap netcdf/4.1.3 netcdf/4.1.2 


6.3. Installed Software

(Back to Top)

The OLCF provides hundreds of pre-installed software packages and scientific libraries for your use, in addition to taking software installation requests. See the software section for complete details on existing installs. To request a new software install, use the software installation request form.


7. Compiling on Rhea

(Back to Top)

Compiling code on Rhea is typical of commodity or Beowulf-style HPC Linux clusters.

Available Compilers
The following compilers are available on Rhea:
  • Intel, Intel Composer XE (default)
  • PGI, the Portland Group Compiler Suite
  • GCC, the GNU Compiler Collection


7.1. Controlling the Programming Environment on Commodity Clusters

(Back to Top)

Upon login, default versions of the Intel compiler and OpenMPI (Message Passing Interface) libraries are added to each user's environment through a programming environment (PE) module. Users do not need to make any environment changes to use the default version of Intel and OpenMPI.

Changing Compilers
If a different compiler is required, it is important to use the correct environment for each compiler. To aid users in pairing the correct compiler and environment, programming environment modules are provided. The programming environment modules will load the correct pairing of compiler version, message passing libraries, and other items required to build and run code. We highly recommend that the programming environment modules be used when changing compiler vendors. The following programming environment modules are available on OLCF commodity clusters:
  • PE-intel
  • PE-pgi
  • PE-gnu
To change the default loaded Intel environment to the GCC environment use:
$ module unload PE-intel 
$ module load PE-gnu
Or alternatively:
$ module swap PE-intel PE-gnu
Changing Versions of the Same Compiler
To use a specific compiler version, you must first ensure the compiler's PE module is loaded, and then swap to the correct compiler version. For example, the following will configure the environment to use the GCC compilers, then load a non-default GCC compiler version:
$ module swap PE-intel PE-gnu
$ module swap gcc gcc/4.6.1
General Programming Environment Guidelines
We recommend the following general guidelines for using the programming environment modules:
  • Do not purge all modules; rather, use the default module environment provided at the time of login, and modify it.
  • Do not swap moab, torque, or MySQL modules after loading a programming environment modulefile.


7.2. Compilers on Commodity Clusters

(Back to Top)

Compilers on commodity clusters at the OLCF can be accessed via the following wrapper programs:

  • mpicc to invoke the C compiler
  • mpiCC, mpicxx, or mpic++ to invoke the C++ compiler
  • mpif77 or mpif90 to invoke appropriate versions of the Fortran compiler
These wrapper programs are cognizant of your currently loaded modules, and will ensure that your code links against our OpenMPI installation. More information about using OpenMPI at our center can be found in our Software Documentation.
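For example, to build simple MPI codes with the wrappers (source file names are placeholders; the first line is needed only if you want GCC rather than the default Intel environment):

$ module swap PE-intel PE-gnu
$ mpicc -O2 -o hello_c hello.c
$ mpif90 -O2 -o hello_f hello.f90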


8. Running Jobs on Commodity Clusters

(Back to Top)

In High Performance Computing (HPC), computational work is performed by jobs. Individual jobs produce data that lend relevant insight into grand challenges in science and engineering. As such, the timely, efficient execution of jobs is the primary concern in the operation of any HPC system. A job on a commodity cluster typically comprises a few different components:

  • A batch submission script.
  • A binary executable.
  • A set of input files for the executable.
  • A set of output files created by the executable.
And the process for running a job, in general, is to:
  1. Prepare executables and input files.
  2. Write a batch script.
  3. Submit the batch script to the batch scheduler.
  4. Optionally monitor the job before and during execution.
The following sections describe in detail how to create, submit, and manage jobs for execution on commodity clusters.


8.1. Login vs Compute Nodes on Commodity Clusters

(Back to Top)

Login Nodes
When you log into an OLCF cluster, you are placed on a login node. Login node resources are shared by all users of the system. Because of this, users should be mindful when performing tasks on a login node. Login nodes should be used for basic tasks such as file editing, code compilation, data backup, and job submission. Login nodes should not be used for memory or processing intensive tasks. Users should also limit the number of simultaneous tasks performed on the login resources. For example, a user should not run (10) simultaneous tar processes on a login node.
Warning: Processor-intensive, memory-intensive, or otherwise disruptive processes running on login nodes may be killed without warning.
Compute Nodes
Memory and processor intensive tasks as well as production work should be performed on a cluster's compute nodes. Access to compute nodes is managed by the cluster's batch scheduling system (e.g., Torque/MOAB). Rhea's compute nodes are separated into two partitions:
  • rhea (default): 512 nodes, each with 128GB of memory. Jobs that do not specify a partition will run in the rhea partition.
  • gpu: 9 nodes, each with 1TB of memory and (2) K80 GPUs. To access the gpu partition, batch job submissions should request -l partition=gpu.


8.2. Writing Batch Scripts for Commodity Clusters

(Back to Top)

Batch scripts are used to run a set of commands on a cluster's compute partition. The batch script is simply a shell script containing options to the batch scheduler software (e.g., PBS) followed by commands to be interpreted by a shell. The batch script is submitted to the batch scheduler software, PBS, where it is parsed. Based on the parsed data, PBS places the script in the queue as a batch job. Once the batch job makes its way through the queue, the script will be executed on the primary compute node of the allocated resources.

Components of a Batch Script
Batch scripts are parsed into the following (3) sections:
Interpreter Line
The first line of a script can be used to specify the script’s interpreter; this line is optional. If not used, the submitter’s default shell will be used. The line uses the hash-bang syntax, i.e., #!/path/to/shell.
PBS Submission Options
The PBS submission options are preceded by the string #PBS, making them appear as comments to a shell. PBS will look for #PBS options in a batch script from the script’s first line through the first non-comment line. A comment line begins with #. #PBS options entered after the first non-comment line will not be read by PBS.
Shell Commands
The shell commands follow the last #PBS option and represent the executable content of the batch job. If any #PBS lines follow executable statements, they will be treated as comments only. The exception to this rule is shell specification on the first line of the script. The execution section of a script will be interpreted by a shell and can contain multiple lines of executables, shell commands, and comments. Commands within this section will be executed on the batch job's primary compute node after the job has been allocated. During normal execution, the batch script will end and exit the queue after the last line of the script.
Example Batch Script
  1: #!/bin/bash
  2: #PBS -A XXXYYY
  3: #PBS -N test
  4: #PBS -j oe
  5: #PBS -l walltime=1:00:00,nodes=2
  6:
  7: cd $PBS_O_WORKDIR
  8: date
  9: mpirun -n 8 ./a.out
This batch script can be broken down into the following sections:
Interpreter Line
1: This line is optional and can be used to specify a shell to interpret the script.
PBS Options
2: The job will be charged to the “XXXYYY” project. 3: The job will be named test. 4: The job's standard output and error will be combined into one file. 5: The job will request (2) nodes for (1) hour.
Shell Commands
6: This line is left blank, so it will be ignored. 7: This command will change the current directory to the directory from where the script was submitted. 8: This command will run the date command. 9: This command will run the executable a.out on (8) cores via MPI. Batch scripts can be submitted for execution using the qsub command. For example, the following will submit the batch script named test.pbs:
  qsub test.pbs
If successfully submitted, a PBS job ID will be returned. This ID can be used to track the job. It is also helpful in troubleshooting a failed job; make a note of the job ID for each of your jobs in case you must contact the OLCF User Assistance Center for support.
Note: For more batch script examples, please see the Batch Script Examples page.


8.3. Interactive Batch Jobs on Commodity Clusters

(Back to Top)

Batch scripts are useful when one has a pre-determined group of commands to execute, the results of which can be viewed at a later time. However, it is often necessary to run tasks on compute resources interactively. Users are not allowed to access cluster compute nodes directly from a login node. Instead, users must use an interactive batch job to allocate and gain access to compute resources. This is done by using the -I option to qsub. Other PBS options are passed to qsub on the command line as well:

  $ qsub -I -A abc123 -q qname -V -l nodes=4 -l walltime=00:30:00
This request will:
-I Start an interactive session
-A Charge to the abc123 project
-q qname Run in the qname queue
-V Export the user's shell environment to the job's environment
-l nodes=4 Request (4) nodes...
-l walltime=00:30:00 ...for (30) minutes
After running this command, the job will wait until enough compute nodes are available, just as any other batch job must. However, once the job starts, the user will be given an interactive prompt on the primary compute node within the allocated resource pool. Commands may then be executed directly (instead of through a batch script).
Using Interactive Batch Jobs to Debug
A common use of interactive batch is to aid in debugging efforts. Interactive access to compute resources allows the ability to run a process to the point of failure; however, unlike a batch job, the process can be restarted after brief changes are made without losing the compute resource pool; thus speeding up the debugging effort.
Choosing a Job Size
Because interactive jobs must sit in the queue until enough resources become available to allocate, it is useful to base core selection on the number of currently unallocated cores (to shorten the queue wait time). Use the showbf command (i.e. "show backfill") to see resource limits that would allow your job to be immediately backfilled (and thus started) by the scheduler. For example, the snapshot below shows that (8) nodes are currently free.
  $ showbf

  Partition   Tasks  Nodes  StartOffset   Duration   StartDate
  ---------   -----  -----  ------------  ---------  --------------
  lens        4744   8      INFINITY      00:00:00   HH:MM:SS_MM/DD
See the output of the showbf --help command for additional options.


8.4. Common Batch Options to PBS

(Back to Top)

The following table summarizes frequently-used options to PBS:

Option Use Description
-A #PBS -A <account> Causes the job time to be charged to <account>. The account string, e.g. pjt000, is typically composed of three letters followed by three digits and optionally followed by a subproject identifier. The utility showproj can be used to list your valid assigned project ID(s). This option is required by all jobs.
-l #PBS -l nodes=<value> Maximum number of compute nodes. Jobs cannot request partial nodes.
#PBS -l walltime=<time> Maximum wall-clock time. <time> is in the format HH:MM:SS.
#PBS -l partition=<partition_name> Allocates resources on specified partition.
-o #PBS -o <filename> Writes standard output to <filename> instead of <job script>.o$PBS_JOBID. $PBS_JOBID is an environment variable created by PBS that contains the PBS job identifier.
-e #PBS -e <filename> Writes standard error to <filename> instead of <job script>.e$PBS_JOBID.
-j #PBS -j {oe,eo} Combines standard output and standard error into the standard error file (eo) or the standard out file (oe).
-m #PBS -m a Sends email to the submitter when the job aborts.
#PBS -m b Sends email to the submitter when the job begins.
#PBS -m e Sends email to the submitter when the job ends.
-M #PBS -M <address> Specifies email address to use for -m options.
-N #PBS -N <name> Sets the job name to <name> instead of the name of the job script.
-S #PBS -S <shell> Sets the shell to interpret the job script.
-q #PBS -q <queue> Directs the job to the specified queue. This option is not required to run in the default queue on any given system.
-V #PBS -V Exports all environment variables from the submitting shell into the batch job shell. Not recommended; see the note below.
-X #PBS -X Enables X11 forwarding. The -X PBS option should be used to tunnel a GUI from an interactive batch job.
Note: Because the login nodes differ from the service nodes, using the '-V' option is not recommended. Users should create the needed environment within the batch job.
Further details and other PBS options may be found through the qsub man page.
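For reference, the sketch below combines several of these options in a single script header; the project ID, email address, filenames, and resource amounts are placeholders:
  #!/bin/bash
  #PBS -A ABC123                 # project to charge (required)
  #PBS -N post-process           # job name
  #PBS -l nodes=4,walltime=02:00:00
  #PBS -j oe                     # combine stdout and stderr
  #PBS -o post-process.out       # name of the combined output file
  #PBS -m e                      # email when the job ends
  #PBS -M user@example.com       # address for -m notifications

  cd $PBS_O_WORKDIR
  mpirun ./a.out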


8.5. Batch Environment Variables

(Back to Top)

PBS sets multiple environment variables at submission time. The following PBS variables are useful within batch scripts:

Variable Description
$PBS_O_WORKDIR The directory from which the batch job was submitted. By default, a new job starts in your home directory. You can get back to the directory of job submission with cd $PBS_O_WORKDIR. Note that this is not necessarily the same directory in which the batch script resides.
$PBS_JOBID The job’s full identifier. A common use for PBS_JOBID is to append the job’s ID to the standard output and error files.
$PBS_NUM_NODES The number of nodes requested.
$PBS_JOBNAME The job name supplied by the user.
$PBS_NODEFILE The name of the file containing the list of nodes assigned to the job. Used sometimes on non-Cray clusters.
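A short sketch of how these variables are typically used inside a batch script (the executable and log file names are placeholders):
  cd $PBS_O_WORKDIR                          # return to the submission directory
  echo "Job $PBS_JOBID ($PBS_JOBNAME) is using $PBS_NUM_NODES node(s)"
  cat $PBS_NODEFILE                          # list the nodes assigned to this job
  mpirun ./a.out > run.$PBS_JOBID.log 2>&1   # tag output files with the job ID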


8.6. Modifying Batch Jobs

(Back to Top)

The batch scheduler provides a number of utility commands for managing submitted jobs. See each utility's man page for more information.

Removing and Holding Jobs
qdel
Jobs in any state can be stopped and removed from the queue using the qdel command.
$ qdel 1234
qhold
Jobs in the queue in a non-running state may be placed on hold using the qhold command. Jobs placed on hold will not be removed from the queue, but they will not be eligible for execution.
$ qhold 1234
qrls
Once on hold, the job will not be eligible to run until it is released back to a queued state. The qrls command can be used to remove a job from the held state.
$ qrls 1234
Modifying Job Attributes
qalter
Non-running jobs in the queue can be modified with the PBS qalter command. Among other things, the qalter utility can be used to:
Modify the job’s name:
$ qalter -N newname 130494
Modify the number of requested cores:
$ qalter -l nodes=12 130494
Modify the job’s walltime:
$ qalter -l walltime=01:00:00 130494
Note: Once a batch job moves into a running state, the job's walltime cannot be increased.


8.7. Monitoring Batch Jobs

(Back to Top)

PBS and Moab provide multiple tools to view queue, system, and job status. Below are the most common and useful of these tools.

Job Monitoring Commands
showq
The Moab utility showq can be used to view a more detailed description of the queue. The utility will display the queue in the following states:
Active These jobs are currently running.
Eligible These jobs are currently queued awaiting resources. Eligible jobs are shown in the order in which the scheduler will consider them for allocation.
Blocked These jobs are currently queued but are not eligible to run. A job may be in this state because the user has more jobs that are "eligible to run" than the system's queue policy allows.
To see all jobs currently in the queue:
$ showq
To see all jobs owned by userA currently in the queue:
$ showq -u userA
To see all jobs submitted to partitionA:
$ showq -p partitionA
To see all completed jobs:
$ showq -c
Note: To improve response time, the Moab utilities (showstart, checkjob) display a cached result. The cache updates every 30 seconds. Because a cached result is displayed, you may see the following message:
--------------------------------------------------------------------
NOTE: The following information has been cached by the remote server
      and may be slightly out of date.
--------------------------------------------------------------------
checkjob
The Moab utility checkjob can be used to view details of a job in the queue. For example, if job 736 is a job currently in the queue in a blocked state, the following can be used to view why the job is in a blocked state:
$ checkjob 736
The return may contain a line similar to the following:
BlockMsg: job 736 violates idle HARD MAXJOB limit of X for user (Req: 1 InUse: X)
This line indicates the job is in the blocked state because the owning user has reached the limit for jobs in the "eligible to run" state.
qstat
The PBS utility qstat will poll PBS (Torque) for job information. However, qstat does not know of Moab's blocked and eligible states. Because of this, the showq Moab utility (see above) will provide a more accurate batch queue state. To show all queued jobs:
$ qstat -a
To show details about job 1234:
$ qstat -f 1234
To show all currently queued jobs owned by userA:
$ qstat -u userA


8.8. Batch Queues on Rhea

(Back to Top)

Rhea's compute nodes are separated into two partitions:

Default Rhea Partition
Jobs that do not specify a partition will run in the 512 node rhea partition.
Bin   Node Count        Duration    Policy
A     1 - 16 Nodes      0 - 48 hr   max 4 jobs running and 4 jobs eligible per user in bins A, B, and C
B     17 - 64 Nodes     0 - 36 hr
C     65 - 384 Nodes    0 - 3 hr
GPU Partition
To access the 9-node gpu partition, batch jobs should request the partition with #PBS -l partition=gpu (see the example script following the table below).
Node Count    Duration     Policy
1 - 2 Nodes   0 - 48 hrs   max 1 job running per user
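For reference, a minimal batch script requesting the gpu partition might look like the sketch below; the project ID, resource amounts, and executable are placeholders:
  #!/bin/bash
  #PBS -A ABC123
  #PBS -N gpu-job
  #PBS -l nodes=1,walltime=01:00:00
  #PBS -l partition=gpu          # route the job to the GPU partition

  cd $PBS_O_WORKDIR
  mpirun -n 2 ./gpu-task.exe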
The queue structure was designed based on user feedback and analysis of batch jobs over the recent years. However, we understand that the structure may not meet the needs of all users. If this structure limits your use of the system, please let us know. We want Rhea to be a useful OLCF resource and will work with you providing exceptions or even changing the queue structure if necessary.
Users wishing to submit jobs that fall outside the queue structure are encouraged to request a reservation via the Special Request Form.
Allocation Overuse Policy
Projects that overrun their allocation are still allowed to run on OLCF systems, although at a reduced priority. The reduction is implemented as an adjustment to the apparent submit time of the job, which makes jobs from over-allocated projects appear much younger than jobs submitted under projects that have not exceeded their allocation. For example, if job1 is submitted at the same time as job2, and the project associated with job1 is over its allocation while the project for job2 is not, the batch system will consider job2 to have been waiting longer than job1. In addition to the priority change, these jobs are also limited in the amount of wall time that can be used, and projects that are beyond 125% of their allocated time will be limited to only one running job at a time. The adjustment to the apparent submit time depends upon the percentage that the project is over its allocation, as shown in the table below:
% of Allocation Used   Priority Reduction   Jobs Eligible-to-Run   Jobs Running
< 100%                 0 days               4 jobs                 unlimited
100% to 125%           30 days              4 jobs                 unlimited
> 125%                 365 days             4 jobs                 1 job


8.9. Job Execution on Commodity Clusters

(Back to Top)

Once resources have been allocated through the batch system, users have the option of running commands on the allocated resources' primary compute node (a serial job) and/or running an MPI/OpenMP executable across all the resources in the allocated resource pool simultaneously (a parallel job).


8.9.1. Serial Job Execution on Commodity Clusters

(Back to Top)

The executable portion of batch scripts is interpreted by the shell specified on the first line of the script. If a shell is not specified, the submitting user’s default shell will be used. The serial portion of the batch script may contain comments, shell commands, executable scripts, and compiled executables. These can be used in combination to, for example, navigate file systems, set up job execution, run serial executables, and even submit other batch jobs.
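For example, a purely serial batch script might look like the following sketch (the project ID, input files, and executables are placeholders):
  #!/bin/bash
  #PBS -A ABC123
  #PBS -l nodes=1,walltime=00:30:00

  cd $PBS_O_WORKDIR
  ./pre-process input.dat           # a serial executable runs on the job's primary compute node
  tar -czf results.tar.gz output/   # ordinary shell commands may also be used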


8.9.2. Parallel Job Execution on Commodity Clusters

(Back to Top)

Using mpirun
By default, commands will be executed on the job's primary compute node, sometimes referred to as the job's head node. The mpirun command is used to execute an MPI executable on one or more compute nodes in parallel. mpirun accepts the following common options:
--npernode Number of ranks per node
-n Total number of MPI ranks
--bind-to none Allow code to control thread affinity
--map-by ppr:N:node:pe=T Place N tasks per node leaving space for T threads
--map-by ppr:N:socket:pe=T Place N tasks per socket leaving space for T threads
--map-by ppr:N:socket Assign tasks by socket placing N tasks on each socket
--report-bindings Have MPI explain which ranks have been assigned to which nodes / physical cores
Note: If you do not specify the number of MPI tasks to mpirun via -n, the system will default to all available cores allocated to the job.
MPI Task Layout
Each compute node on Rhea contains two sockets each with 8 cores. Depending on your job, it may be useful to control task layout within and across nodes.
Default Layout: Sequential
The following will run a.out on 2 cores of the same node:
$ mpirun -np 2 ./a.out
Resulting layout: ranks 0 and 1 are placed on cores 0 and 1 of socket 0, on a single compute node.
4 cores, 2 cores per socket, 1 node
The following will run a.out on 4 cores, 2 cores per socket, 1 node:
$ mpirun -np 4 --map-by ppr:2:socket ./a.out
Resulting layout: ranks 0 and 1 on cores 0-1 of socket 0, and ranks 2 and 3 on cores 0-1 of socket 1, all on a single compute node.
4 cores, 1 core per socket, 2 nodes
The following will run a.out on 4 cores, 1 core per socket, 2 nodes. This can be useful if you need to spread your batch job over multiple nodes to allow each task access to more memory.
$ mpirun -np 4 --map-by ppr:1:socket ./a.out
Resulting layout: ranks 0 and 1 on compute node 0 (one rank per socket), and ranks 2 and 3 on compute node 1 (one rank per socket).
The --report-bindings flag can be used to report task layout:
$ mpirun -np 4 --map-by ppr:1:socket --report-bindings hostname
[rhea2:47176] MCW rank 0 bound to socket 0[core 0[hwt 0-1]]: [BB/../../../../../../..][../../../../../../../..]
[rhea2:47176] MCW rank 1 bound to socket 1[core 8[hwt 0-1]]: [../../../../../../../..][BB/../../../../../../..]
[rhea4:104150] MCW rank 2 bound to socket 0[core 0[hwt 0-1]]: [BB/../../../../../../..][../../../../../../../..]
[rhea4:104150] MCW rank 3 bound to socket 1[core 8[hwt 0-1]]: [../../../../../../../..][BB/../../../../../../..]
$
Thread Layout
Warning: Without controlling affinity, threads may be placed on the same core.
2 MPI tasks, 1 task per node, 16 threads per task, 2 nodes
$ setenv OMP_NUM_THREADS 16
$ mpirun -np 2 --map-by ppr:1:node:pe=16 ./a.out
Resulting layout: task 0's 16 threads occupy all 16 cores (8 per socket) of compute node 0, and task 1's 16 threads occupy all 16 cores of compute node 1.
2 MPI tasks, 1 task per socket, 4 threads per task, 1 node
$ setenv OMP_NUM_THREADS 4
$ mpirun -np 2 --map-by ppr:1:socket:pe=4 ./a.out
Resulting layout: task 0's 4 threads are placed on cores 0-3 of socket 0, and task 1's 4 threads on cores 0-3 of socket 1, on a single compute node.
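Putting the pieces together, a batch script for the first thread-layout example above might look like the sketch below (bash syntax, so export replaces setenv; the project ID and executable are placeholders):
  #!/bin/bash
  #PBS -A ABC123
  #PBS -l nodes=2,walltime=01:00:00

  cd $PBS_O_WORKDIR
  export OMP_NUM_THREADS=16                        # 16 OpenMP threads per MPI task
  mpirun -np 2 --map-by ppr:1:node:pe=16 ./a.out   # 1 task per node, 16 cores reserved per task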


8.9.3. Resource Sharing on Commodity Clusters

(Back to Top)

Jobs on OLCF clusters are scheduled in full node increments; a node's cores cannot be allocated to multiple jobs. Because the OLCF charges based on what a job makes unavailable to other users, a job is charged for an entire node even if it uses only one core on a node. To simplify the process, users are allocated multiples of entire nodes through PBS.

Note: Users are allocated multiples of entire nodes through PBS, and associated allocations are reduced by the number of nodes requested, regardless of actual CPU utilization.


8.9.4. Task-Core Affinity on Commodity Clusters

(Back to Top)

In general, the cluster may move MPI tasks between cores within a node. To help prevent a job’s tasks from being moved between cores each idle cycle, the OpenMPI option mpi_yield_when_idle may be used. For example:

  $ mpirun -n 8 -mca mpi_yield_when_idle 0 a.out
This will help prevent the core from being given to other waiting tasks. It only affects MPI processes when they are blocking in MPI library calls. By default, OpenMPI sets this variable based on whether it believes the node is over-allocated or under-allocated. If over-allocated, mpi_yield_when_idle will be set to a nonzero value, allowing the core to be given to other waiting tasks when idle. If under-allocated, mpi_yield_when_idle will be set to (0). If more tasks are running on a node than there are cores, the OS will swap tasks between cores on the node; the mpi_yield_when_idle option only helps to slow this down and will not fully prevent the swaps.
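A complementary approach, not specific to the option above, is to bind each task to a core explicitly and verify the placement with OpenMPI's binding flags; a minimal sketch:
  $ mpirun -n 8 --bind-to core --report-bindings ./a.out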


8.10. Job Accounting on Rhea

(Back to Top)

Jobs on Rhea are scheduled in full node increments; a node's cores cannot be allocated to multiple jobs. Because the OLCF charges based on what a job makes unavailable to other users, a job is charged for an entire node even if it uses only one core on a node. To simplify the process, users are allocated multiples of entire nodes through PBS.

Viewing Allocation Utilization
Projects are allocated time on Rhea in units of node-hours. This is separate from a project's Titan or Eos allocation, and usage of Rhea does not count against that allocation. This section describes how such units are calculated, and how users can access more detailed information on their relevant allocations.
Node-Hour Calculation
The node-hour charge for each batch job will be calculated as follows:
node-hours = nodes requested * ( batch job endtime - batch job starttime )
Where batch job starttime is the time the job moves into a running state, and batch job endtime is the time the job exits a running state. A batch job's usage is calculated solely from the requested nodes and the batch job's start and end time; the number of cores actually used within any particular node is not part of the calculation. For example, if a job requests (6) nodes through the batch script, runs for (1) hour, and uses only (2) CPU cores per node, the job will still be charged for 6 nodes * 1 hour = 6 node-hours.

Viewing Usage
Utilization is calculated daily using batch jobs which complete between 00:00 and 23:59 of the previous day. For example, if a job moves into a run state on Tuesday and completes Wednesday, the job's utilization will be recorded Thursday. Only batch jobs which write an end record are used to calculate utilization. Batch jobs which do not write end records due to system failure or other reasons are not used when calculating utilization. Each user may view usage for projects of which they are a member using the command-line tool showusage or the My OLCF site.
On the Command Line via showusage
The showusage utility can be used to view your usage from January 01 through midnight of the previous day. For example:
  $ showusage
    Usage:
                             Project Totals                      
    Project             Allocation      Usage      Remaining     Usage
    _________________|______________|___________|____________|______________
    abc123           |  20000       |   126.3   |  19873.7   |   1560.80
The -h option will list more usage details.
On the Web via My OLCF
More detailed metrics may be found on each project's usage section of the My OLCF site. The following information is available for each project:
  • YTD usage by system, subproject, and project member
  • Monthly usage by system, subproject, and project member
  • YTD usage by job size groupings for each system, subproject, and project member
  • Weekly usage by job size groupings for each system, and subproject
  • Batch system priorities by project and subproject
  • Project members
The My OLCF site is provided to aid in the utilization and management of OLCF allocations. If you have any questions or have a request for additional data, please contact the OLCF User Assistance Center.


8.11. Enabling Workflows through Cross-System Batch Submission

(Back to Top)

The OLCF now supports submitting jobs between OLCF systems via batch scripts. This can be useful for automatically triggering analysis and storage of large data sets after a successful simulation job has ended, or for launching a simulation job automatically once the input deck has been retrieved from HPSS and pre-processed.

Cross-Submission allows jobs on one OLCF resource to submit new jobs to other OLCF resources.


The key to remote job submission is the command qsub -q host script.pbs which will submit the file script.pbs to the batch queue on the specified host. This command can be inserted at the end of an existing batch script in order to automatically trigger work on another OLCF resource. This feature is supported on the following hosts:
Host Remote Submission Command
Rhea qsub -q rhea visualization.pbs
Eos qsub -q eos visualization.pbs
Titan qsub -q titan compute.pbs
Data Transfer Nodes (DTNs) qsub -q dtn retrieve_data.pbs
Example Workflow 1: Automatic Post-Processing
The simplest example of a remote submission workflow is automatically triggering follow-on work when a compute job completes. The example below uses three batch scripts: the first, submitted to the DTNs, retrieves the input data from HPSS and submits the compute job to Titan; the second runs the compute task on Titan and submits an archival job; the third, run on the DTNs, archives the output to HPSS. Visually, this workflow may look something like the following:
Post-processing Workflow
The batch scripts for such a workflow could be implemented as follows:
Batch-script-1.pbs
#PBS -l walltime=0:30:00
#PBS -l nodes=1
#PBS -A PRJ123
#PBS -l gres=atlas1%atlas2

# Retrieve data from HPSS
cd $MEMBERWORK/prj123
htar -xf /proj/prj123/compute_data.htar compute_data/

# Submit compute job to Titan
qsub -q titan Batch-script-2.pbs
Batch-script-2.pbs
#PBS -l walltime=2:00:00
#PBS -l nodes=4096
#PBS -A PRJ123
#PBS -l gres=atlas1%atlas2

# Launch executable
cd $MEMBERWORK/prj123
aprun -n 65536 ./analysis-task.exe

# Submit data archival job to DTNs
qsub -q dtn Batch-script-3.pbs
Batch-script-3.pbs
#PBS -l walltime=0:30:00
#PBS -l nodes=1
#PBS -A PRJ123
#PBS -l gres=atlas1%atlas2

# Archive output data in HPSS
cd $MEMBERWORK/prj123
htar -cf /proj/prj123/viz_output.htar viz_output/
htar -cf /proj/prj123/compute_data.htar compute_data/
The key to this workflow is the qsub -q titan Batch-script-2.pbs command at the end of Batch-script-1.pbs, which tells qsub to submit the file Batch-script-2.pbs to the batch queue on Titan.
Initializing the Workflow
We can initialize this workflow in one of two ways:
  • Log into dtn.ccs.ornl.gov and run qsub Batch-script-1.pbs OR
  • From Titan or Rhea, run qsub -q dtn Batch-script-1.pbs
Example Workflow 2: Data Staging, Compute, and Archival
Now we give another example of a linear workflow. This example shows how to use the Data Transfer Nodes (DTNs) to retrieve data from HPSS and stage it to your project's scratch area before the computation begins. Once the computation is done, the output is automatically archived.
Data Staging, Compute, and Archival Workflow
Batch-script-1.pbs
#PBS -l walltime=0:30:00
#PBS -l nodes=1
#PBS -A PRJ123
#PBS -l gres=atlas1%atlas2

# Retrieve Data from HPSS
cd $MEMBERWORK/prj123
htar -xf /proj/prj123/input_data.htar input_data/

# Launch compute job
qsub -q titan Batch-script-2.pbs
Batch-script-2.pbs
#PBS -l walltime=6:00:00
#PBS -l nodes=4096
#PBS -A PRJ123
#PBS -l gres=atlas1%atlas2

# Launch executable
cd $MEMBERWORK/prj123
aprun -n 65536 ./analysis-task.exe

# Submit data archival job to DTNs
qsub -q dtn Batch-script-3.pbs
Batch-script-3.pbs
#PBS -l walltime=0:30:00
#PBS -l nodes=1
#PBS -A PRJ123
#PBS -l gres=atlas1%atlas2

# Archive output data in HPSS
cd $MEMBERWORK/prj123
htar -cf /proj/prj123/viz_output.htar viz_output/
htar -cf /proj/prj123/compute_data.htar compute_data/
Initializing the Workflow
We can initialize this workflow in one of two ways:
  • Log into dtn.ccs.ornl.gov and run qsub Batch-script-1.pbs OR
  • From Titan or Rhea, run qsub -q dtn Batch-script-1.pbs
Example Workflow 3: Data Staging, Compute, Visualization, and Archival
This is an example of a "branching" workflow. What we will do is first use Rhea to prepare a mesh for our simulation on Titan. We will then launch the compute task on Titan, and once this has completed, our workflow will branch into two separate paths: one to archive the simulation output data, and one to visualize it. After the visualizations have finished, we will transfer them to a remote institution.
Branching Workflow
Step-1.prepare-data.pbs
#PBS -l walltime=0:30:00
#PBS -l nodes=10
#PBS -A PRJ123
#PBS -l gres=atlas1%atlas2

# Prepare Mesh for Simulation
mpirun -n 160 ./prepare-mesh.exe

# Launch compute job
qsub -q titan Step-2.compute.pbs
Step-2.compute.pbs
#PBS -l walltime=6:00:00
#PBS -l nodes=4096
#PBS -A PRJ123
#PBS -l gres=atlas1%atlas2

# Launch executable
cd $MEMBERWORK/prj123
aprun -n 65536 ./analysis-task.exe

# Workflow branches at this stage, launching 2 separate jobs

# - Launch Archival task on DTNs
qsub -q dtn Step-3.archive-compute-data.pbs

# - Launch Visualization task on Rhea
qsub -q rhea Step-4.visualize-compute-data.pbs
Step-3.archive-compute-data.pbs
#PBS -l walltime=0:30:00
#PBS -l nodes=1
#PBS -A PRJ123
#PBS -l gres=atlas1%atlas2

# Archive compute data in HPSS
cd $MEMBERWORK/prj123
htar -cf /proj/prj123/compute_data.htar compute_data/
Step-4.visualize-compute-data.pbs
#PBS -l walltime=2:00:00
#PBS -l nodes=64
#PBS -A PRJ123
#PBS -l gres=atlas1%atlas2

# Visualize Compute data
cd $MEMBERWORK/prj123
mpirun -n 768 ./visualization-task.py

# Launch transfer task
qsub -q dtn Step-5.transfer-visualizations-to-campus.pbs
Step-5.transfer-visualizations-to-campus.pbs
#PBS -l walltime=2:00:00
#PBS -l nodes=1
#PBS -A PRJ123
#PBS -l gres=atlas1%atlas2

# Transfer visualizations to storage area at home institution
cd $MEMBERWORK/prj123
SOURCE=gsiftp://dtn03.ccs.ornl.gov/$MEMBERWORK/visualization.mpg
DEST=gsiftp://dtn.university-name.edu/userid/visualization.mpg
globus-url-copy -tcp-bs 12M -bs 12M -p 4 $SOURCE $DEST
Initializing the Workflow
We can initialize this workflow in one of two ways:
  • Log into rhea.ccs.ornl.gov and run qsub Step-1.prepare-data.pbs OR
  • From Titan or the DTNs, run qsub -q rhea Step-1.prepare-data.pbs
Checking Job Status
Host Remote qstat Remote showq
Rhea qstat -a @rhea-batch showq --host=rhea-batch
Eos qstat -a @eos-batch showq --host=eos-batch
Titan qstat -a @titan-batch showq --host=titan-batch
Data Transfer Nodes (DTNs) qstat -a @dtn-batch showq --host=dtn-batch
Deleting Remote Jobs
In order to delete a job (say, job number 18688) from a remote queue, you can do the following:
Host Remote qdel
Rhea qdel 18688@rhea-batch
Eos qdel 18688@eos-batch
Titan qdel 18688@titan-batch
Data Transfer Nodes (DTNs) qdel 18688@dtn-batch
Potential Pitfalls
The OLCF advises users to keep their remote submission workflows simple, short, and mostly linear. Workflows that contain many layers of branches, or that trigger many jobs at once, may prove difficult to maintain and debug. Workflows that contain loops or recursion (jobs that can submit themselves again) may inadvertently waste allocation hours if a suitable exit condition is not reached.
Recursive workflows which do not exit will drain your project's allocation. Refunds will not be granted. Please be extremely cautious when designing workflows that cause jobs to re-submit themselves.
Circular Workflow
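If a self-resubmitting workflow is unavoidable, build an explicit exit condition into the script. The sketch below uses a simple restart counter kept in a file; the script name, counter file, and executable are hypothetical:
  #!/bin/bash
  #PBS -A ABC123
  #PBS -l nodes=1,walltime=01:00:00

  cd $PBS_O_WORKDIR
  MAX_RESTARTS=5
  COUNT=$(cat restart.count 2>/dev/null || echo 0)   # passes completed so far

  ./simulation-step.exe                              # placeholder for the real work

  if [ "$COUNT" -lt "$MAX_RESTARTS" ]; then
      echo $((COUNT + 1)) > restart.count            # record this pass
      qsub self-resubmit.pbs                         # resubmit this same script (hypothetical name)
  fi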
As always, users on multiple projects are strongly advised to double check that the #PBS -A <PROJECTID> field is set to the correct project prior to submission. This will ensure that resource usage is associated with the intended project.