Filesystems Available to Compute Nodes
The compute nodes do not mount all filesystems available from the login and service nodes. The Lustre® areas (e.g., $PROJWORK) as well as the /ccs/proj areas are available to compute nodes on OLCF Cray systems. User home directories (/ccs/home/$USER) are not mounted on the compute nodes. As a result, job executable binaries and job input files must reside within a Lustre or /ccs/proj work space.
Overview of filesystems available to compute nodes

|Filesystem|Area|Compute-Node Access|Suggested Use|
|Lustre|$MEMBERWORK, $PROJWORK, $WORLDWORK|Read/Write|Batch job input and output|
|NFS|/ccs/proj|Read-only|Binaries, shared object libraries, Python scripts|
Because the /ccs/proj areas are backed up daily and are accessible by all members of a project, they are very useful for sharing code with other project members. Because of the metadata overhead involved in loading many small files, the NFS areas are the preferred storage locations for shared object libraries and Python modules.
The Lustre $MEMBERWORK, $PROJWORK, and $WORLDWORK areas are much larger than the NFS areas and are configured for large data I/O.
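The division of labor above can be sketched in a few environment settings. This is only an illustrative sketch: the project ID abc123 and the subdirectory names are hypothetical placeholders, not real OLCF paths.

```shell
# Hypothetical example; "abc123" and the subdirectory layout are
# placeholders for a real project ID and its directories.

# Shared object libraries and Python modules: keep them in the
# NFS-backed /ccs/proj area, which compute nodes mount read-only.
export LD_LIBRARY_PATH=/ccs/proj/abc123/lib:$LD_LIBRARY_PATH
export PYTHONPATH=/ccs/proj/abc123/python:$PYTHONPATH

# Large input and output data: keep it in a Lustre work space,
# which is configured for large data I/O and is writable from
# the compute nodes.
export RUN_DIR=$MEMBERWORK/abc123/run_001
```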
/ccs/proj Update Delay and Read-Only Access
The /ccs/proj areas are mounted read-only on the compute nodes. The areas are mounted read/write on the login and service nodes, but it may take up to 30 minutes for changes to propagate to the compute nodes. Because the /ccs/proj areas are mounted read-only on the compute nodes, job output must be sent to a Lustre work space.
Home Directory Access Error
Batch jobs can be submitted from User Home, but additional steps are required to ensure the job runs successfully. Jobs submitted from Home areas should cd into a Lustre work directory prior to invoking aprun. If this is not done, an error like the following may be returned:

aprun: [NID 94]Exec /lustre/atlas/scratch/userid/a.out failed: chdir /autofs/na1_home/userid No such file or directory
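A minimal batch-script sketch of the fix is shown below. The project ID abc123, the node and task counts, and the executable name a.out are hypothetical placeholders; adapt them to your own project and scheduler settings.

```shell
#!/bin/bash
# Hypothetical PBS batch script; "abc123", the resource requests,
# and "a.out" are illustrative placeholders.
#PBS -A abc123
#PBS -l walltime=01:00:00,nodes=1

# Even if this job was submitted from /ccs/home/$USER, change into
# a Lustre work space before launching, so that aprun's chdir
# succeeds and job output lands on a writable filesystem.
cd $MEMBERWORK/abc123

# Launch the executable from its Lustre location and direct
# output into the same (writable) work space.
aprun -n 16 ./a.out > run.out 2> run.err
```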