Spider – the Center-Wide Lustre® File System
The OLCF’s center-wide Lustre® file system, called Spider, is the operational work file system for most OLCF computational resources. An extremely high-performance system, Spider serves over 26,000 clients, provides 32 petabytes of disk space, and can move data at more than 1 TB/s.
Spider is Center-Wide
Spider is currently accessible from nearly all of the OLCF’s computational resources, including Titan and its 18,000+ compute nodes. The file system is available from the following OLCF systems:
- Data Transfer Nodes
Spider is for Temporary Storage
Spider provides a location to temporarily store large amounts of data needed and produced by batch jobs. Due to the size of the file system, the area is not backed up. In most cases, a regularly running purge removes data not recently accessed to help ensure available space for all users. Needed data should be copied to more permanent locations.
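To plan around the purge, it can help to list files that have not been accessed recently and are therefore candidates for archiving. The sketch below uses find with a hypothetical 14-day window (the actual purge policy is set by the OLCF) and runs against a temporary directory rather than a real scratch area, so it is self-contained:

```shell
# Sketch: list files not accessed within a hypothetical 14-day purge window.
# The window and paths are illustrative; check OLCF policy for actual values.
SCRATCH=$(mktemp -d)                          # stand-in for a scratch area
touch "$SCRATCH/new.dat"                      # recently accessed file
touch -a -d "30 days ago" "$SCRATCH/old.dat"  # last accessed 30 days ago
# Anything not accessed in 14 days should be copied to permanent storage.
find "$SCRATCH" -type f -atime +14            # prints only old.dat
rm -rf "$SCRATCH"
```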
Spider Comprises Multiple File Systems
Spider comprises two file systems: atlas1 and atlas2.
Why two filesystems?
There are a few reasons why having multiple file systems within Spider is advantageous.
More Metadata Servers – Currently each Lustre file system can utilize only one Metadata Server (MDS). Interaction with the MDS is expensive, and heavy MDS access will impact interactive performance. Providing two file systems allows the load to be spread over two MDSs.
Higher Availability – Having multiple file systems increases our ability to keep at least one file system available at all times.
Associating a Batch Job with a File System
Through the PBS gres option, users can specify the scratch area used by their batch jobs so that the job will not start if that file system becomes degraded or unavailable.
Creating a Dependency on a Single File System
Line (5) in the following example will associate a batch job with the atlas2 file system. If atlas2 becomes unavailable prior to execution of the batch job, the job will be placed on hold until atlas2 returns to service.
1: #!/bin/csh
2: #PBS -A ABC123
3: #PBS -l nodes=16000
4: #PBS -l walltime=08:00:00
5: #PBS -l gres=atlas2
6:
7: cd $MEMBERWORK/abc123
8: aprun -n 256000 ./a.out
Creating a Dependency on Multiple File Systems
The following example will associate a batch job with both the atlas1 and atlas2 file systems by adding

-l gres=atlas1%atlas2

to the batch submission. If either atlas1 or atlas2 becomes unavailable prior to execution of the batch job, the job will be placed on hold until both atlas1 and atlas2 are in service.
Default is Dependency on All Spider File Systems
If a batch job is not associated with a file system, i.e., if the gres option is not used, the batch job will be given a dependency on all Spider file systems.
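Putting the pieces together, a complete submission script that depends on both file systems might look like the following. This is a sketch modeled on the single-file-system example earlier in this article; the project ID, node count, and paths are illustrative:

```shell
#!/bin/csh
#PBS -A ABC123
#PBS -l nodes=16000
#PBS -l walltime=08:00:00
#PBS -l gres=atlas1%atlas2

cd $MEMBERWORK/abc123
aprun -n 256000 ./a.out
```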
Why Explicitly Associate a Batch Job with a File System?
- Associating a batch job with a file system will prevent the job from running if the file system becomes degraded or unavailable.
- If a batch job uses only one Spider file system, specifying that file system explicitly instead of taking the default dependency on both prevents the job from being held when a file system it does not use becomes degraded or unavailable.
Viewing a Batch Job’s File System Association
The checkjob utility can be used to view a batch job’s file system associations. For example:
$ qsub -lgres=atlas2 batchscript.pbs
851694.nid00004
$ checkjob 851694 | grep "Dedicated Resources Per Task:"
Dedicated Resources Per Task:  atlas2: 1
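For scripting around this check, the file system name can be pulled out of the relevant checkjob line with standard text tools. A minimal sketch, using the sample output above as canned input (on a live system you would pipe checkjob $JOBID instead):

```shell
# Sketch: extract the file-system name from a checkjob output line.
# The input line is canned from the example above; the pattern assumes
# the atlas1/atlas2 naming used on Spider.
line='Dedicated Resources Per Task:  atlas2: 1'
echo "$line" | grep -o 'atlas[0-9]*'   # prints: atlas2
```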
Available Directories on Spider
A temporary User Work scratch directory is available for each user in each Spider file system.
By default, User Work directories are owned by the user, and the group is set to the owning user’s userid-named group. Permissions are set to 700. Changes to the default permissions by the owning user will be reset hourly for security purposes. Long-term changes to the directory permissions can be requested by contacting the OLCF User Assistance Center.
Files in the user scratch directories are subject to the standard purge.
A default file system has been chosen for each user, and the /tmp/work/$USER link points to the user’s default scratch directory. A user’s default file system was chosen based on the user’s initial project membership; for example, all users whose initial project is climate-centric are placed on the same file system, and all climate-centric Project Work areas are placed on that file system as well. Using the default file system is recommended: it helps spread load over the file systems and eases data sharing and access between project members.
A temporary Project Work directory is available for each project on one of the two Spider file systems. The directory can be accessed through a link under /tmp/proj.
By default, Project Work directories are owned by root, the group is set to the project’s group, and permissions are set to 770. Changes to the directory permissions can be requested by contacting the OLCF User Assistance Center.
Files in the Project Work directories are not currently subject to the standard purge. However, this is subject to change, and users should always treat Spider as temporary storage.
How do I Determine the Default File System for my User Work/Project Work directory?
ls – The following ls command can be used to determine where a link points. The target location’s path will specify the file system on which the directory exists:
ls -ld /tmp/work/$USER
ls -ld /tmp/proj/
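Where available, readlink resolves a link to its target directly. The sketch below uses a stand-in link in a temporary directory so it is self-contained; on Spider the real invocation would simply be readlink -f /tmp/work/$USER:

```shell
# Sketch: resolving a work-directory link to its target path, whose
# components reveal the file system. The link and target here are
# stand-ins built in a temp dir; the atlas2 path is illustrative.
demo=$(mktemp -d)
mkdir -p "$demo/lustre/atlas2/scratch/joe"   # pretend scratch directory
ln -s "$demo/lustre/atlas2/scratch/joe" "$demo/work-link"
readlink -f "$demo/work-link"                # prints the resolved target path
rm -rf "$demo"
```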
spiderinfo – The
spiderinfo utility will list each file system’s status as well as the file systems holding the calling user’s /tmp/work and /tmp/proj directories:
$ spiderinfo
Current lustre status (Tue Jan 25 14:32:27 2011):
  widow1 (up), widow2 (up), widow3 (up)
Lustre directory information for user 'joe'
  /tmp/work/joe:    widow2 (up)
  /tmp/proj/abc123: widow2 (up)
Current Configuration of Spider
Each Spider file system is configured as follows:
- Total disk space: 2.5 PB
- Number of OSTs: 336
- Default stripe count: 4
- Default stripe size: 1 MB
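The default striping can be inspected and, where a job’s I/O pattern warrants it, overridden per directory with the lfs utility. A brief sketch; this requires running on a Lustre client node, and the paths and stripe count are illustrative:

```shell
# Sketch: inspect and change Lustre striping (defaults: 4 stripes, 1 MB).
# Requires the lfs utility on a Lustre client; paths are illustrative.
lfs getstripe $MEMBERWORK/abc123            # show current stripe settings
lfs setstripe -c 8 $MEMBERWORK/abc123/big   # stripe new files in big/ across 8 OSTs
```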