
The High-Performance Storage System (HPSS) is the archival mass-storage resource at ORNL and consists of robotic tape and disk storage components, Linux servers, and associated software. Incoming data is written to disk and later migrated to tape for long-term archival. As storage, network, and computing technologies continue to change, ORNL’s storage system evolves to take advantage of new equipment that is both more capable and more cost-effective.

HPSS Support

The System User Guide is the definitive source of information about HPSS and details everything from using Globus for data transfers to executing HSI and HPSS Tape Archiver (HTAR) commands. Please direct questions about HPSS and its usage to the OLCF User Assistance Center by emailing [email protected].

System Specifications

Tape

  • Tape library: 19-frame Spectra® Tfinity® ExaScale
  • Cartridge capacity: 13,221 IBM 3592 JD tape cartridges in total, with 11,704 slots currently in use
  • Tape storage: 132 PB
  • Model of tape drives: IBM® TS1155
  • Individual drive bandwidth: 360 MBps

Disk

  • Front-end disk cache: 3 DDN SFA14KX storage subsystems, 1 DDN SFA400NV storage subsystem
  • Disk storage: 23.3 PB
  • Bandwidth at the block level: 50 GBps

Access

HPSS can be accessed through dedicated data transfer nodes (DTNs) for Globus transfers or through shared DTNs for the hierarchical storage interface (HSI), both of which use 40-gigabit Ethernet connections.
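For readers unfamiliar with the command-line interfaces, typical HSI and HTAR invocations look like the following sketch. The file and directory names are hypothetical, and these commands only work from a system (such as an OLCF DTN) configured to reach HPSS; consult the System User Guide for the authoritative syntax and options.

```shell
# Store a single local file into your HPSS home directory with HSI
# (local path on the left, HPSS path on the right of the colon)
hsi put largefile.dat : largefile.dat

# List the contents of your HPSS home directory
hsi ls -l

# Bundle a local directory into a tar-style archive written directly
# to HPSS with HTAR (archive path is on HPSS, member path is local)
htar -cvf results.tar ./results/

# Retrieve and extract that archive back to the local filesystem
htar -xvf results.tar
```

Because HTAR aggregates many small files into one archive before writing to HPSS, it is generally preferred over transferring small files individually, which performs poorly on tape-backed storage.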

HPSS History

ORNL’s work in mass storage began in the early 1990s to support the Atmospheric Radiation Measurement project and to provide storage for simulation results generated on the NCCS’s Paragon supercomputers. To support those projects, ORNL acquired and ran the NSL UniTree storage management product.

In 1993, a follow-on to NSL UniTree known as HPSS was being designed by IBM in collaboration with Department of Energy (DOE) national laboratories (Sandia, Lawrence Livermore, and Los Alamos). ORNL joined that collaboration and took on responsibility for the storage system management (SSM) portion of the product, for which the ORNL HPSS development team remains responsible.

ORNL continued with NSL UniTree production use until 1997, at which time the conversion to HPSS was completed. In 1997, HPSS won an R&D 100 Award, based on the work of five DOE labs—Oak Ridge, Lawrence Berkeley, Lawrence Livermore, Los Alamos, and Sandia National Laboratories—and IBM Houston Global Services.

In 1998, the OLCF was producing 300 GB of data per month and providing 1 TB of disk storage space. More than 20 years later, the OLCF archives petabytes (PB) of data per month and provides 23.3 PB of disk storage space and 132 PB of tape storage space.

Latest HPSS Highlights


New Data Transfer Tool Developed at ORNL to Be Made Available for Public Use

Image caption: Jake Wynne, an HPC storage systems engineer at ORNL, created the hsi_xfer tool to simplify the process of transferring large quantities of data.

A new data transfer tool created at the Oak Ridge Leadership Computing Facility could be available to facilities nationwide after making its debut at the Department of…
Angela Gosnell, December 4, 2024, 5 min read

2024 Notable System Changes: Summit and HPSS

HPSS Decommission and Kronos Availability: After decades in service, during which hundreds of users archived over 160 petabytes, HPSS is reaching the end of its life and will be decommissioned early in 2025. Please pay attention to the following key dates as you migrate workloads from the center’s…
Katie Bethea, August 22, 2024, 2 min read

Compelling Evidence of Neutrino Process Opens Physics Possibilities

Image caption: SCGSR Awardee Jacob Zettlemoyer, Indiana University Bloomington, led data analysis and worked with ORNL’s Mike Febbraro on coatings, shown under blue light, to shift argon light to visible wavelengths to boost detection. Image Credit: Rex Tayloe/Indiana University

The COHERENT particle physics experiment at the Department of Energy’s Oak Ridge National…
Dawn Levy, January 26, 2021, 7 min read