High Performance Storage System

Incremental Scalability
Based on storage needs and deployment schedules, HPSS scales incrementally by adding computer, network and storage resources. A single HPSS namespace can scale from petabytes of data to exabytes of data, from millions of files to billions of files, and from a few file-creates per second to thousands of file-creates per second.
About HPSS: HPSS Version 8.2.0

The latest version of HPSS introduces a number of new features. The following is an overview of three of them.

New to HPSS 8.2.0

Rumbler API Support

HPSS provides an API to allow unified storage namespace (USN) applications to incorporate snapshots of the HPSS namespace.

USN applications provide a higher-level view of storage in data centers, including parallel file systems, project space, community and campaign stores, and archive namespaces. Such applications provide high-speed search and allow data sets to be annotated with custom metadata for purposes including, but not limited to, the following (a rough sketch of the annotation pattern appears after the list):

  • Assign provenance, e.g. Digital Object Identifiers (DOIs)
  • Describe access frequency (hot, warm, cold) to drive data migration policies
  • Classify data with arbitrary tags to assist with workflow automation
  • Calculate/apply hashes for purposes of data integrity checking, de-duplication, etc.
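
The Rumbler API itself ships with the HPSS release and is not reproduced here. As a rough illustration of the annotation pattern a USN application might apply to a namespace snapshot, the sketch below assumes a hypothetical newline-delimited JSON export with path, size, and last_access fields; the field names and the hot/warm/cold thresholds are illustrative assumptions, not part of the HPSS API.

    import json
    import time

    # Hypothetical input: one JSON object per line, exported from an HPSS
    # namespace snapshot. Field names are assumptions for illustration only.
    SNAPSHOT_EXPORT = "hpss_namespace_snapshot.jsonl"

    def temperature(last_access_epoch, now=None):
        """Classify access frequency as hot/warm/cold (illustrative cutoffs)."""
        now = now if now is not None else time.time()
        age_days = (now - last_access_epoch) / 86400
        if age_days < 30:
            return "hot"
        if age_days < 365:
            return "warm"
        return "cold"

    def annotate(export_path):
        """Yield (path, annotations) pairs a USN application could index."""
        with open(export_path) as export:
            for line in export:
                record = json.loads(line)
                yield record["path"], {
                    "temperature": temperature(record["last_access"]),
                    "size_bytes": record["size"],
                    # Custom tags (e.g. a DOI) could be merged in here.
                }

    if __name__ == "__main__":
        for path, tags in annotate(SNAPSHOT_EXPORT):
            print(path, tags)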

Parallel FTP Update: Support for MLST

MLST functionality has been added to PFTP. The MLST command, defined in RFC 3659, returns standardized facts about exactly the object named on its command line and no others.
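
Because MLST is the standard command from RFC 3659, any client that can issue a raw FTP command can exercise it. The sketch below uses Python's standard ftplib rather than the HPSS PFTP client, and the host and path are placeholders, not real HPSS endpoints.

    from ftplib import FTP

    # Placeholders; substitute a real (P)FTP server, credentials, and path.
    HOST = "ftp.example.org"
    PATH = "pub/somefile.dat"

    with FTP(HOST) as ftp:
        ftp.login()  # anonymous login; use login(user, passwd) otherwise
        # MLST (RFC 3659) returns facts about exactly the named object in a
        # single multi-line 250 reply (e.g. size, modification time, type).
        reply = ftp.sendcmd("MLST " + PATH)
        print(reply)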

Parallel FTP Update: Checksum Support

The PFTP client and server now support setting and getting HPSS file hash information. This information is the end-to-end data integrity (E2EDI) file hash metadata stored along with the bitfile; it is not the legacy file hash kept in User-defined Attributes.
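
The PFTP commands for storing and retrieving this hash are documented with the HPSS release and are not reproduced here. As a client-side illustration only, the sketch below computes a digest of a local copy in fixed-size chunks, the kind of value an end-to-end integrity check compares against the hash recorded with the bitfile; the algorithm (SHA-256) and chunk size are assumptions.

    import hashlib

    def file_digest(path, algorithm="sha256", chunk_size=1 << 20):
        """Hash a local file in chunks for comparison with the stored hash."""
        digest = hashlib.new(algorithm)
        with open(path, "rb") as source:
            for chunk in iter(lambda: source.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Example: verify a copy retrieved via PFTP against the recorded hash.
    # local_hash = file_digest("retrieved_copy.dat")
    # print(local_hash)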


Come meet with us!
HPSS @ MSST 2020
The 35th International Conference on Massive Storage Systems and Technology will be in Santa Clara, California in May of 2020. Please contact us if you would like to meet with the IBM business and technical leaders of HPSS at Santa Clara University.

HPSS @ ISC20
The 2020 international conference for high performance computing, networking, and storage will be in Frankfurt, Germany from June 21st through 25th, 2020. Come visit the HPSS folks at the IBM booth and contact us if you would like to meet with the IBM business and technical leaders of HPSS in Frankfurt.

2020 HUF
The 2020 HPSS User Forum (HUF) is being hosted by Karlsruhe Institute of Technology (KIT) in Karlsruhe, Germany from September 7th through September 11th, 2020. This is a great place to meet HPSS users, collaboration developers and testers (from IBM and DOE Labs), support folks, and leadership. More details coming soon.

HPSS @ SC20
The 2020 international conference for high performance computing, networking, storage and analysis will be in Atlanta, Georgia from November 16th through 19th, 2020. Come visit the HPSS folks at the IBM booth and contact us if you would like to meet with the IBM business and technical leaders of HPSS in Atlanta.

What's New?
HPSS 8.2 Release - HPSS 8.2 was released on December 6th, 2019 and introduces a few new features.

New Globus DSI - Version 2.9 of the HPSS DSI is now available from the GitHub release page. It provides the capability to resume interrupted Globus transfers.

Lots Of Data - In November 2019 IBM/HPSS delivered a system to Shared Services Canada (SSC) for Environment Canada and demonstrated a sustained tape ingest rate of 11,574 MB/sec (1 PB/day peak tape ingest) while simultaneously demonstrating a sustained tape recall rate of 8,832 MB/sec (791 TB/day peak tape recall). HPSS pushed four 13-frame IBM TS4500 tape libraries (scheduled to house over 500 PB of tape media) to 2,168 mounts/hour.

HPSS 8.1 Release - HPSS 8.1 was released on October 1st, 2019 and introduces a few new features.

July 2019 - Argonne Team Breaks Record for Globus Data Movement from the Summit supercomputer at Oak Ridge National Laboratory to HPSS tape.

Capacity Leader - ECMWF (European Center for Medium-Range Weather Forecasts) has a single HPSS namespace with 451 PB spanning 312 million files.

File-Count Leader - LLNL (Lawrence Livermore National Laboratory) has a single HPSS namespace with 57 PB spanning 1.414 billion files.

Explosive data growth - HPSS Collaboration leadership from Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) helped author the "NERSC Storage 2020" report, and NERSC trusts HPSS to meet their immediate and long-term data storage challenges.

Older News - Want to read more?