
Filesystems Available to Compute Nodes


The compute nodes do not mount all filesystems available from the login and service nodes. The Lustre® areas ($MEMBERWORK, $PROJWORK, and $WORLDWORK) as well as the /ccs/proj areas are available to compute nodes on OLCF Cray systems. User home directories are not mounted on compute nodes.

Warning: Home directories (/ccs/home/$USER) are not available from the compute nodes.
As a result, job executable binaries and job input files must reside within a Lustre or /ccs/proj work space, e.g., $MEMBERWORK/[projid].
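
For example, an executable built in the home directory can be staged into a Lustre work space before submission. A minimal sketch, assuming the usual [projid] placeholder and a batch script named job.pbs:

  cp $HOME/a.out $MEMBERWORK/[projid]    # stage the binary on Lustre
  cd $MEMBERWORK/[projid]                # submit from the Lustre work space
  qsub $HOME/job.pbs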

Overview of Filesystems Available to Compute Nodes

  Type    Mount                                 Access      Suggested Use
  Lustre  $MEMBERWORK, $PROJWORK, $WORLDWORK    Read/Write  Batch job input and output
  NFS     /ccs/proj                             Read-only   Binaries, shared object libraries, Python scripts

Notice: Due to the metadata overhead on Lustre, the NFS areas are the preferred storage locations for shared object libraries and Python scripts.

Shared Object Libraries and Python Scripts

Because the /ccs/proj areas are backed up daily and are accessible by all members of a project, they are well suited for sharing code with other project members. Due to the metadata overhead on Lustre, the NFS areas are the preferred storage locations for shared object libraries and Python modules.

The Lustre $MEMBERWORK, $PROJWORK, and $WORLDWORK areas are much larger than the NFS areas and are configured for large data I/O.
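
A common pattern, sketched below with illustrative paths, keeps shared object libraries and Python modules under /ccs/proj and points the standard search-path variables at them from the batch script:

  export LD_LIBRARY_PATH=/ccs/proj/[projid]/lib:$LD_LIBRARY_PATH
  export PYTHONPATH=/ccs/proj/[projid]/python:$PYTHONPATH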

/ccs/proj Update Delay and Read-Only Access

The /ccs/proj areas are mounted read-only on the compute nodes. The areas are mounted read/write on the login and service nodes, but it may take up to 30 minutes for changes to propagate to the compute nodes. Because the /ccs/proj areas are mounted read-only on the compute nodes, job output must be sent to a Lustre work space.
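
For example, a run can read its binary from the read-only NFS area while writing all output to Lustre. A sketch with placeholder paths and process count:

  cd $MEMBERWORK/[projid]                        # output lands on Lustre
  aprun -n 16 /ccs/proj/[projid]/bin/a.out > run.out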

Home Directory Access Error

Batch jobs can be submitted from a User Home directory, but additional steps are required to ensure the job runs successfully. Jobs submitted from Home areas should cd into a Lustre work space prior to invoking aprun; see the minimal batch script after the error message below. An error like the following may be returned if this is not done:

aprun: [NID 94]Exec /lustre/atlas/scratch/userid/a.out failed: chdir /autofs/na1_home/userid
No such file or directory
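
A minimal batch script that follows this advice is sketched below; the account ABC123, resource request, and executable name are placeholders:

  #!/bin/bash
  #PBS -A ABC123
  #PBS -l nodes=1,walltime=00:10:00

  # Change into a Lustre work space before launching; the compute
  # nodes cannot chdir into the home directory the job was submitted from.
  cd $MEMBERWORK/[projid]
  aprun -n 16 ./a.out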