Spider – the Center-Wide Lustre® File System
The OLCF’s center-wide Lustre® file system, called Spider, is the operational work file system for most OLCF computational resources. An extremely high-performance system, Spider serves over 26,000 clients, provides 32 petabytes of disk space, and can move data at more than 1 TB/s.
Spider is Center-Wide
Spider is currently accessible from nearly all of the OLCF’s computational resources, including Titan and its 18,000+ compute nodes. The file system is available from the following OLCF systems:
- Titan
- Data Transfer Nodes
Spider is for Temporary Storage
Spider provides a location to temporarily store large amounts of data needed and produced by batch jobs. Due to the size of the file system, the area is not backed up. In most cases, a regularly running purge removes data not recently accessed to help ensure available space for all users. Needed data should be copied to more permanent locations.
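For example, a minimal sketch of copying job output from a Spider scratch area to a home directory before the purge can remove it (the abc123 project ID and run01 directory are hypothetical):

# Copy results from the purged scratch area to a more permanent location:
$ cp -r $MEMBERWORK/abc123/run01 $HOME/run01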
Spider Comprises Multiple File Systems
Spider comprises (2) file systems:
- atlas1
- atlas2
Why two filesystems?
There are two main reasons why having multiple file systems within Spider is advantageous.
More Metadata Servers – Currently, each Lustre file system can utilize only one Metadata Server (MDS). Interaction with the MDS is expensive, and heavy MDS access will impact interactive performance. Providing (2) file systems allows the load to be spread over (2) MDSs (see the example after this list).
Higher Availability – The existence of multiple filesystems increases our ability to keep at least one filesystem available at all times.
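As an illustration of the one-MDS-per-file-system point above, the standard Lustre lfs utility lists each file system’s single metadata target (MDT) alongside its OSTs; the /lustre/atlas1 mount point shown here is an assumption:

# List the MDT and all OSTs backing a file system, with usage (mount point assumed):
$ lfs df -h /lustre/atlas1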
Associating a Batch Job with a File System
Through the PBS gres option, users can specify the scratch area used by their batch jobs so that the job will not start if that file system becomes degraded or unavailable.
Creating a Dependency on a Single File System
Line (5) in the following example will associate a batch job with the atlas2 file system. If atlas2 becomes unavailable prior to execution of the batch job, the job will be placed on hold until atlas2 returns to service.
1: #!/bin/csh
2: #PBS -A ABC123
3: #PBS -l nodes=16000
4: #PBS -l walltime=08:00:00
5: #PBS -l gres=atlas2
6:
7: cd $MEMBERWORK/abc123
8: aprun -n 256000 ./a.out
Creating a Dependency on Multiple File Systems
The following example will associate a batch job with the atlas1 and atlas2 file systems. If either atlas1 or atlas2 becomes unavailable prior to execution of the batch job, the job will be placed on hold until both atlas1 and atlas2 are in service.
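The script below mirrors the single-file-system example above; only line (5) changes, using the % separator (shown in the next subsection) to list both file systems. The surrounding lines are carried over from the earlier example for illustration.

1: #!/bin/csh
2: #PBS -A ABC123
3: #PBS -l nodes=16000
4: #PBS -l walltime=08:00:00
5: #PBS -l gres=atlas1%atlas2
6:
7: cd $MEMBERWORK/abc123
8: aprun -n 256000 ./a.out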
Default is Dependency on All Spider File Systems
If the gres option is not used, batch jobs will be associated with all Spider file systems by default, as though -l gres=atlas1%atlas2 had been applied to the batch submission.
Why Explicitly Associate a Batch Job with a File System?
- Associating a batch job with a file system will prevent the job from running if the file system becomes degraded or unavailable.
- If a batch job uses only (1) Spider file system, specifying that file system explicitly instead of taking the default of all (2) will prevent the job from being held if a file system not used by the job becomes degraded or unavailable.
Verifying a Batch Job’s File System Association
The checkjob utility can be used to view a batch job’s file system associations. For example:
$ qsub -lgres=atlas2 batchscript.pbs
851694.nid00004
$ checkjob 851694 | grep "Dedicated Resources Per Task:"
Dedicated Resources Per Task:  atlas2: 1
Available Directories on Spider
Every project is assigned a directory in the Spider filesystem, and all storage in the Spider filesystem is therefore associated with a project. This directory is further divided into user-specific areas, an area shared among all members of the project, and an area shared among all users of the system. For more details, see the article on Project-Centric Work Directories.
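As an illustration, these areas are reached through per-project paths. $MEMBERWORK appears in the batch examples above, while the $PROJWORK and $WORLDWORK names below are assumptions following common OLCF convention, and abc123 is a hypothetical project ID:

# User-specific area within the project directory:
$ ls $MEMBERWORK/abc123
# Area shared among all members of the project (variable name assumed):
$ ls $PROJWORK/abc123
# Area shared among all users of the system (variable name assumed):
$ ls $WORLDWORK/abc123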
Current Configuration of Spider
|                      | atlas1         | atlas2         |
| Total disk space     | 14 PB (usable) | 14 PB (usable) |
| Number of OSTs       | 1008           | 1008           |
| Default stripe count | 4              | 4              |
| Default stripe size  | 1 MB           | 1 MB           |
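The default stripe settings above can be inspected or overridden per directory with the standard Lustre lfs commands; a minimal sketch (the directory path is hypothetical):

# Show the stripe settings that new files in a directory will inherit:
$ lfs getstripe -d $MEMBERWORK/abc123
# Stripe new files in the directory across 8 OSTs with a 1 MB stripe size:
$ lfs setstripe -c 8 -S 1m $MEMBERWORK/abc123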