High Performance Storage System

Incremental Scalability
Based on storage needs and deployment schedules, HPSS scales incrementally by adding computer, network and storage resources. A single HPSS namespace can scale from petabytes of data to exabytes of data, from millions of files to billions of files, and from a few file-creates per second to thousands of file-creates per second.
HPSS Tape Library Efficiency on Versions 7.5.2 and 7.5.3

HPSS versions 7.5.2 and 7.5.3 include new features that directly improve tape library mount rates across multiple tape library types. Improved tape library efficiency reduces the access latency for data on tape. In these versions, HPSS streamlines the way cartridge move requests are processed to maximize efficiency.

7.5.2 Tape Library Efficiency Improvements

HPSS version 7.5.2 includes tape improvements that significantly raise the mount rate efficiency of tape libraries. The SCSI PVR improvements behind this increase include:

  • Expediting command processing by minimizing communications with the library before sending each command.
  • Further reducing individual move request overhead by minimizing waits during retries.
  • Allowing a single control path to queue up to 16 commands by implementing SCSI command queuing.
  • Implementing a move scheduler to interlace tape mounts and tape dismounts so the robot is always holding a tape when moving (see the sketch after this list).
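
To make the last idea concrete, here is a minimal sketch (in Python, with hypothetical mount and dismount queues; an illustration, not HPSS code) of a scheduler that interlaces mounts and dismounts so the robot is holding a cartridge on nearly every trip:

    from collections import deque

    def interlace_moves(mounts, dismounts):
        """Alternate dismounts and mounts so the robot exchanges a
        cartridge at each drive visit instead of traveling empty."""
        mounts, dismounts = deque(mounts), deque(dismounts)
        while mounts or dismounts:
            if dismounts:
                yield dismounts.popleft()  # take a cartridge out of a drive...
            if mounts:
                yield mounts.popleft()     # ...then place the next one

With SCSI command queuing, up to 16 of the resulting moves can be outstanding on a single control path, so the library can begin the next move before the previous one completes.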

These changes drive HPSS efficiency with IBM TS4500 tape libraries from 78% on version 7.5.1 to 99% on version 7.5.2. The mount rate has risen from 702 mounts per hour on 7.5.1 to 894 mounts per hour on 7.5.2.
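
As a back-of-the-envelope check (assuming mount rate efficiency is defined as the observed mount rate divided by the library's native hardware mount rate), both published figures point to a hardware limit of roughly 900 mounts per hour:

    # Implied hardware-limited mount rate = observed rate / efficiency.
    print(702 / 0.78)  # ~900 mounts/hour implied by the 7.5.1 figures
    print(894 / 0.99)  # ~903 mounts/hour implied by the 7.5.2 figures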

HPSS 7.5.2 nearly doubles mount rate efficiency with Spectra Logic libraries, from 42% on 7.5.1 to 80% on 7.5.2.* The ability to submit multiple outstanding move requests in HPSS 7.5.2 allows HPSS to take advantage of the Spectra Logic library's capability to efficiently order multiple requests.

The following graph illustrates the increased mount rate performance with HPSS 7.5.2:

7.5.3 Tape Library Efficiency Improvements

HPSS 7.5.3 improves tape library efficiency further. These improvements include the following new features:

  • Detection of Spectra Logic library zones so that both robot arms are kept busy.
  • Detection of Spectra Logic TeraPack boundaries in the robot and grouping of moves by TeraPack so that multiple cartridges can be mounted and dismounted with each TeraPack move (see the sketch below).

The combination of these features allows HPSS 7.5.3 to drive mount rate performance to the hardware rate.
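
As an illustration of the TeraPack feature (a minimal Python sketch with hypothetical names, not HPSS code), grouping the pending move queue by TeraPack lets a single TeraPack trip service several cartridge mounts or dismounts:

    from collections import defaultdict

    def batch_moves_by_terapack(pending_moves, terapack_of):
        """Group pending cartridge moves by the TeraPack holding each
        cartridge; each batch is then executed as one TeraPack trip."""
        batches = defaultdict(list)
        for move in pending_moves:
            batches[terapack_of(move)].append(move)
        return batches

Zone detection plays a similar role across the library: by recognizing which zone each move targets, HPSS can keep work queued for both robot arms at once.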

On version 7.5.3, HPSS efficiency with the IBM TS4500 holds at 99% (matching version 7.5.2), and mount rate efficiency with Spectra Logic tape libraries has improved from 80% on 7.5.2 to 99%.*

The following graph illustrates the mount rate performance with HPSS 7.5.3:

Future HPSS Versions

Maintaining high mount rate efficiency remains a priority for future versions of HPSS, and the HPSS roadmap includes further library performance improvements. The HPSS goal is to sustain 99% library efficiency with both IBM TS4500 and Spectra Logic libraries as those improvements arrive.


*Please contact Spectra Logic about tape library mount rate expectations.


Come meet with us!
HPSS @ STS 2020
The 2nd Annual Storage Technology Showcase will be in Albuquerque, New Mexico from March 2nd through March 5th, 2020 - Learn More. Please contact us if you would like to schedule a meeting in Albuquerque.

HPSS @ 2020 IBM FoT/NA
The first IBM Future of Tape / North America conference will be held on April 21st and April 22nd, 2020, in Tucson, AZ - Learn More. Please contact us if you plan to attend and would like to schedule a meeting in Tucson.

HPSS @ MSST 2020
The 36th International Conference on Massive Storage Systems and Technology will be in Santa Clara, California from May 4th through May 8th, 2020 - Learn More. Please contact us if you would like to meet with us in Santa Clara.

HPSS @ ISC20
The 2020 international conference for high performance computing, networking, and storage will be in Frankfurt, Germany from June 21st through 25th, 2020 - Learn More. Come visit the HPSS folks at the IBM booth and contact us if you would like to schedule a face-to-face meeting with us in Frankfurt.

2020 HUF
The 2020 HPSS User Forum (HUF) is being hosted by Karlsruhe Institute of Technology (KIT) in Karlsruhe, Germany from September 7th through September 10th, 2020. This is a great place to meet HPSS users, collaboration developers, testers, support folks and leadership (from IBM and DOE Labs) - Learn More. Please contact us if you are not a customer but would like to attend.

HPSS @ SC20
The 2020 international conference for high performance computing, networking, storage and analysis will be in Atlanta, Georgia from November 16th through 19th, 2020 - Learn More. Come visit the HPSS folks at the IBM booth and contact us if you would like to meet with the IBM business and technical leaders of HPSS in Atlanta.

What's New?
HPSS 8.2 Release - HPSS 8.2 was released on December 6th, 2019 and introduces a few new features.

New Globus DSI - Version 2.9 of the HPSS DSI is now available from the GitHub release page. It provides the capability to resume interrupted Globus transfers.

Lots Of Data - In November 2019 IBM/HPSS delivered a system to Shared Services Canada (SSC) for Environment Canada and demonstrated a sustained tape ingest rate of 11,574 MB/sec (1 PB/day peak tape ingest) while simultaneously demonstrating a sustained tape recall rate of 8,832 MB/sec (791 TB/day peak tape recall). HPSS pushed four 13-frame IBM TS4500 tape libraries (scheduled to house over 500 PB of tape media) to 2,168 mounts/hour.
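
For scale, a rough unit conversion (using decimal units, 1 PB = 10^9 MB) shows how the sustained rates map onto the per-day figures; the quoted 791 TB/day recall number is a peak, slightly above what the sustained rate alone implies:

    SECONDS_PER_DAY = 86_400
    print(11_574 * SECONDS_PER_DAY / 1e9)  # ~1.00 PB/day of tape ingest
    print(8_832 * SECONDS_PER_DAY / 1e9)   # ~0.76 PB/day (~763 TB) of tape recall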

HPSS 8.1 Release - HPSS 8.1 was released on October 1st, 2019 and introduces a few new features.

July 2019 - Argonne Team Breaks Record for Globus Data Movement from the Summit supercomputer at Oak Ridge National Laboratory to HPSS tape.

Capacity Leader - ECMWF (European Centre for Medium-Range Weather Forecasts) has a single HPSS namespace with 451 PB spanning 312 million files.

File-Count Leader - LLNL (Lawrence Livermore National Laboratory) has a single HPSS namespace with 57 PB spanning 1.414 billion files.

Explosive data growth - HPSS Collaboration leadership from Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) helped author the "NERSC Storage 2020" report, and NERSC trusts HPSS to meet their immediate and long-term data storage challenges.

Older News - Want to read more?