The OLCF’s center-wide IBM Spectrum Scale parallel file system, called Alpine, entered production in November 2018 as the global scratch file system for the OLCF’s HPC computational resources, including Summit, Andes, Slate, and the data transfer nodes. Alpine serves tens of thousands of clients, provides 250 PB of disk capacity, and can move data at up to 2.5 TB per second. It comprises 77 IBM Elastic Storage Server (ESS) GL4 building blocks that use General Parallel File System (GPFS) Native RAID to provide a performant and resilient declustered RAID scheme.

Specifications and Features

  • System: 77 IBM ESS 5146-GL4 nodes
  • Disk drives: 32,494 10 TB NL-SAS drives
  • Storage capacity: 250 PB
  • Peak sequential performance: 2.5 TB/s
  • Peak random performance: 2.2 TB/s
  • File system software: IBM Spectrum Scale 5.0.x
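As a quick sanity check, the drive count and usable capacity listed above imply roughly 77% usable-to-raw efficiency, which is consistent with a declustered 8+2 erasure code (a common GPFS Native RAID layout; the exact code used on Alpine is an assumption here) minus spare and metadata overhead. A minimal sketch of the arithmetic:

```python
# Sanity-check Alpine's usable-to-raw capacity ratio from the published specs.
# Drive count, drive size, and usable capacity come from the list above;
# the 8+2 parity layout mentioned in the output comment is an assumption.

DRIVES = 32_494      # NL-SAS drive count
DRIVE_TB = 10        # capacity per drive, TB
USABLE_PB = 250      # published usable capacity, PB

raw_pb = DRIVES * DRIVE_TB / 1000      # raw capacity in PB (1 PB = 1000 TB)
efficiency = USABLE_PB / raw_pb        # fraction of raw capacity usable

print(f"raw: {raw_pb:.1f} PB, usable/raw: {efficiency:.0%}")
# → raw: 324.9 PB, usable/raw: 77%
# An 8+2 code alone would give 80%; the gap is plausibly spare capacity
# and file system metadata.
```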


Alpine is mounted on Summit, Andes, Slate, and the Peak test system, as well as on several dedicated data transfer nodes (DTNs) and EVEREST nodes.

System Support

The System User Guide is the definitive source of information about Alpine and covers everything from the storage areas to tips for best performance. Please direct questions about Alpine and its usage to the OLCF User Assistance Center.
