High Performance Storage System

Incremental Scalability
HPSS scales incrementally to match storage needs and deployment schedules: sites add compute, network, and storage resources as demand grows. A single HPSS namespace can scale from petabytes of data to exabytes of data, from millions of files to billions of files, and from a few file-creates per second to thousands of file-creates per second.
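For a rough feel for those file-create rates, here is a minimal back-of-envelope sketch in Python; the rates and file counts are illustrative assumptions, not figures from any particular deployment.

    # Back-of-envelope arithmetic for namespace growth (illustrative numbers only).

    def days_to_create(total_files: float, creates_per_second: float) -> float:
        """Days of sustained ingest needed to create total_files files."""
        return total_files / creates_per_second / 86_400  # 86,400 seconds per day

    # A billion-file namespace at 1,000 sustained file-creates per second:
    print(f"{days_to_create(1e9, 1_000):.1f} days")  # ~11.6 days
    # The same namespace at 100 file-creates per second:
    print(f"{days_to_create(1e9, 100):.1f} days")    # ~115.7 days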

These are the publicly disclosed HPSS deployments that have accumulated a petabyte or more of data. Each entry represents a single instance of HPSS; the capacity numbers are for a single HPSS namespace.

Site (2Q 2019) | Petabytes | Millions of Files | Since
(ECMWF) European Centre for Medium-Range Weather Forecasts | 437.49 | 392.41 | 2002
(UKMO) United Kingdom Met Office | 319.96 | 426.12 | 2009
(NOAA-RD) National Oceanic and Atmospheric Administration Research & Development | 171.97 | 105.00 | 2002
(LBNL-User) Lawrence Berkeley National Laboratory - User | 168.46 | 230.35 | 1998
(BNL) Brookhaven National Laboratory | 162.36 | 179.33 | 1998
(SSC) Shared Services Canada | 154.20 | 22.27 | 2017
(Meteo-France) Meteo France - French Weather and Climate | 152.15 | 487.04 | 2015
(CEA TERA) Commissariat a l'Energie Atomique - Tera Project | 132.08 | 25.43 | 1999
(MPCDF) Max Planck Computing and Data Facility | 123.35 | 256.50 | 2011
(DKRZ) Deutsches Klimarechenzentrum | 107.96 | 22.40 | 2009
(ANL) Argonne National Laboratory | 95.00 | 440.84 | 2008
(NCAR) National Center for Atmospheric Research | 91.99 | 281.63 | 2011
(ORNL) Oak Ridge National Laboratory | 87.97 | 92.87 | 1997
(LLNL-Secure) Lawrence Livermore National Laboratory - Secure | 83.94 | 1239.03 | 1998
(LANL-Secure) Los Alamos National Laboratory - Secure | 74.15 | 838.40 | 1997
(IN2P3) Institut National de Physique Nucleaire et de Physique des Particules | 69.81 | 78.27 | 1999
(IU) Indiana University | 69.52 | 268.52 | 1999
(DWD) Deutscher Wetterdienst | 63.25 | 49.80 | 2011
(CEA TGCC) Commissariat a l'Energie Atomique - Très Grand Centre de calcul | 60.26 | 28.35 | 2010
(LLNL-Open) Lawrence Livermore National Laboratory - Open | 53.96 | 1372.90 | 1997
(SLAC) Stanford Linear Accelerator Center | 45.38 | 14.44 | 1999
(HLRS) High Performance Computing Center Stuttgart | 42.74 | 11.50 | 2008
(NCSA) National Center for Supercomputing Applications | 39.49 | 230.29 | 2012
(JAXA) Japan Aerospace Exploration Agency | 34.32 | 136.41 | 2009
(Purdue) Purdue University | 32.79 | 224.94 | 2011
(LBNL-Backup) Lawrence Berkeley National Laboratory - Backup | 25.85 | 19.84 | 1997
(KEK) High Energy Accelerator Research Organization | 23.07 | 79.20 | 2001
(SciNet) SciNet HPC Consortium | 16.50 | 46.77 | 2011
(LaRC) NASA Langley Research Center | 14.89 | 53.07 | 1998
(CNES) Centre National d'Etudes Spatiales | 14.60 | 16.59 | 2015
(PNNL) Pacific Northwest National Laboratory | 13.00 | 123.53 | 2010
(NOAA-C Boulder) National Oceanic and Atmospheric Administration - Comprehensive Large Array-data Stewardship System - Boulder | 12.78 | 465.72 | 2014
(NOAA-C Asheville) National Oceanic and Atmospheric Administration - Comprehensive Large Array-data Stewardship System - Asheville | 12.61 | 465.73 | 2014
(ASDC) NASA Atmospheric Science Data Center | 9.12 | 43.70 | 2018
(KIT) Karlsruhe Institute of Technology | 8.43 | 90.22 | 2015
(LANL-Open) Los Alamos National Laboratory - Open | 8.27 | 145.53 | 1997
(NCEI) National Center for Environmental Information | 4.64 | 101.31 | 1998
(SNL-Secure) Sandia National Laboratory - Secure | 3.90 | 55.01 | 1996
(SNL-Open) Sandia National Laboratory - Open | 3.65 | 77.71 | 1996
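To work with these figures programmatically, a small Python sketch like the following can total them; only the first few rows are transcribed here for brevity, and the list can be extended with the rest of the table.

    # Aggregate the publicly disclosed deployment figures above.
    # Rows are (site, petabytes, millions_of_files); only the first few
    # table entries are transcribed here -- extend with the full table.
    rows = [
        ("ECMWF", 437.49, 392.41),
        ("UKMO", 319.96, 426.12),
        ("NOAA-RD", 171.97, 105.00),
        ("LBNL-User", 168.46, 230.35),
        ("BNL", 162.36, 179.33),
    ]

    total_pb = sum(pb for _, pb, _ in rows)
    total_mfiles = sum(mf for _, _, mf in rows)
    print(f"{total_pb:.2f} PB across {total_mfiles:.2f} million files")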


Come meet with us!
2019 HUF
The 2019 HPSS User Forum (HUF) is being hosted by Indiana University (IU) in Bloomington, Indiana, from October 15th through October 18th, 2019 - Learn More. Early-bird registration runs through August 21st, 2019. Considering HPSS? The HUF is a great place to meet HPSS users, collaboration developers and testers (from IBM and the DOE labs), support staff, and leadership.

HPSS @ SC19
SC19, the 2019 international conference for high performance computing, networking, storage and analysis, will be in Denver, Colorado from November 18th through 21st, 2019 - Learn More. Come visit the HPSS folks at the IBM booth, and contact us if you would like to meet with the IBM business and technical leaders of HPSS in Denver.

HPSS @ MSST 2020
The 35th International Conference on Massive Storage Systems and Technology will be in Santa Clara, California in May of 2020 - Learn More. Please contact us if you would like to meet with the IBM business and technical leaders of HPSS at Santa Clara University.

HPSS @ ISC 2020
ISC High Performance 2020, the international conference for high performance computing, networking, and storage, will be in Frankfurt, Germany from June 21st through 25th, 2020 - Learn More. Come visit the HPSS folks at the IBM booth, and contact us if you would like to meet with the IBM business and technical leaders of HPSS in Frankfurt.

What's New?
July 2019 - An Argonne team broke the record for Globus data movement, transferring data from the Summit supercomputer at Oak Ridge National Laboratory to HPSS tape.

Capacity Leader - ECMWF (European Centre for Medium-Range Weather Forecasts) has a single HPSS namespace with 437 PB spanning 392 million files.

File-Count Leader - LLNL (Lawrence Livermore National Laboratory) has a single HPSS namespace with 54 PB spanning 1.37 billion files.

HPSS 7.5.3 Release - HPSS 7.5.3 was released in December 2018 and introduces many new and exciting features.

HPSS Training - IBM Houston is considering hosting an HPSS System Administration course from October 1st through October 4th, 2019. Interested in attending? Learn more.

IBM TS1160 - On November 20, 2018, IBM announced its new enterprise tape technology, supporting 20 TB of native capacity and 400 MB/s of native bandwidth. Learn more.
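As a quick sense of scale for those numbers, the arithmetic below (plain Python, assuming the nominal capacity and bandwidth from the announcement and no compression) estimates how long one drive takes to fill one cartridge.

    # Time to fill a TS1160 cartridge at native speed (nominal figures, no compression).
    capacity_bytes = 20e12   # 20 TB native capacity
    bandwidth_bps = 400e6    # 400 MB/s native bandwidth
    hours = capacity_bytes / bandwidth_bps / 3600
    print(f"~{hours:.1f} hours per cartridge")  # ~13.9 hours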

Best of Breed for Tape - HPSS 7.5.2 and 7.5.3 improvements raise HPSS tape library efficiency to 99% on both IBM and Spectra Logic tape libraries.

Explosive data growth - HPSS Collaboration leadership from Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) helped author the "NERSC Storage 2020" report, and NERSC trusts HPSS to meet its immediate and long-term data storage challenges.

HPSS Vendor Partnership Grows - HPSS began Quantum Scalar i6000 tape library testing in 2018. Other HPSS tape vendor partners include IBM, Oracle, and Spectra Logic.

Swift On HPSS - Leverage OpenStack Swift to provide an object interface to data in HPSS. Directories of files and containers of objects can be accessed and shared across ALL interfaces with this OpenStack Swift Object Server implementation - Contact Us for more information, or Download Now.
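As a hedged illustration of what the object interface looks like from the client side, the sketch below uses the standard python-swiftclient library; the auth endpoint, credentials, and container name are placeholders for a site-specific Swift On HPSS deployment, not values from any real installation.

    # Minimal Swift client sketch (python-swiftclient). The endpoint and
    # credentials are placeholders -- substitute your site's values.
    from swiftclient.client import Connection

    conn = Connection(
        authurl="https://swift.example.org/auth/v1.0",  # hypothetical endpoint
        user="account:user",                            # placeholder credentials
        key="secret",
    )

    # Create a container, store an object, and read it back. With Swift On
    # HPSS, objects stored this way land in HPSS storage.
    conn.put_container("climate-data")
    conn.put_object("climate-data", "run-001/output.nc", contents=b"netCDF bytes here")
    headers, body = conn.get_object("climate-data", "run-001/output.nc")
    print(headers.get("content-length"), "bytes retrieved")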

RAIT - Oak Ridge National Laboratory cut its redundant-tape cost estimates by 75% with 4+P HPSS RAIT (tape striping with rotating parity) and enjoys large-file tape transfers beyond 1 GB/s.
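The rotating-parity idea behind RAIT can be illustrated with a toy XOR sketch in Python (an illustration of the general technique, not HPSS code): each stripe writes four data blocks plus one XOR parity block across five tapes, the parity position rotates from stripe to stripe, and any single lost tape can be rebuilt from the surviving four.

    # Toy 4+P rotating-parity striping (illustrative, not HPSS code).
    from functools import reduce

    def xor(blocks):
        """XOR equal-length byte strings together, column by column."""
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    def stripe(data_blocks, stripe_index, width=5):
        """Lay out 4 data blocks plus 1 parity block across `width` tapes,
        rotating the parity position with the stripe index."""
        tapes = list(data_blocks)
        tapes.insert(stripe_index % width, xor(data_blocks))
        return tapes

    # Any one lost tape is rebuilt by XORing the surviving four.
    tapes = stripe([b"AAAA", b"BBBB", b"CCCC", b"DDDD"], stripe_index=0)
    lost = 2                              # pretend tape 2 is unreadable
    survivors = [t for i, t in enumerate(tapes) if i != lost]
    assert xor(survivors) == tapes[lost]  # reconstruction succeeds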