HPSS uses raw SCSI block storage devices for disk storage. HPSS movers manage disk transfers and are deployed with SAN-attached, direct-attached, or on-board solid-state and spinning-disk storage. HPSS disk storage capacity and bandwidth scale by deploying more mover computers and more storage hardware.
Our software delivery model makes HPSS disk storage economical at scale, and HPSS disk transfers scale to saturate modern storage units for extreme-scale workloads.
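As a back-of-envelope sketch of that scaling claim (the per-mover figures below are illustrative assumptions, not HPSS measurements), aggregate capacity and bandwidth grow with the number of movers deployed:

    # Back-of-envelope sketch: aggregate capacity and bandwidth grow with
    # the number of mover computers, assuming each mover and its attached
    # storage delivers a fixed rate (illustrative numbers, not HPSS data).
    PER_MOVER_GBPS = 10      # assumed per-mover transfer rate (GB/s)
    PER_MOVER_CAP_PB = 2     # assumed usable disk capacity per mover (PB)

    for movers in (4, 8, 16, 32):
        print(f"{movers:2d} movers: {movers * PER_MOVER_GBPS:4d} GB/s aggregate, "
              f"{movers * PER_MOVER_CAP_PB:3d} PB capacity")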
HPSS Disk Storage
Economical at Scale
The annual licensing cost of HPSS software is based solely on IBM Consulting service fees. The licensing model for HPSS is not tied to data volume or hardware (number of processors, cores, nodes, etc.). There are no configuration-based attributes (e.g., number of files, capacity managed, number of server cores) that affect the cost of HPSS. HPSS software is highly scalable, and customers are encouraged to evolve their systems over time to accommodate growth forecasts and new requirements while the cost of HPSS remains independent of those changes.
The cost of HPSS software is the same for 10 petabytes of disk or 100 petabytes of disk. This makes HPSS extremely economical at scale compared to storage software that is priced and licensed by capacity.

High-Performance Transfers
HPSS delivers high-performance disk transfers. Per-stream transfer performance is maximized by leveraging multiple threads for parallel transfer.
Modern storage units spread Distributed RAID (D-RAID) devices across a pool of disks to maximize data transfer performance and minimize RAID-rebuild time. Single-threaded transfers are incapable of saturating the available bandwidth of these modern D-RAID devices.
In one example, a single-threaded transfer delivers only 1,305 MB/s, and HPSS must be configured to automatically use 8 threads to fully saturate the bandwidth of that 4,000 MB/s device.
When organizations purchase a 20 GB/s storage unit, they expect to see 20 GB/s data transfers, and this is only possible with multi-threaded parallel transfers.
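As a rough sketch of the underlying technique (plain Python using only the standard library, not HPSS mover code; the file path and thread counts below are placeholders), the fragment reads separate byte ranges of one file on concurrent threads so that the device services several I/O streams at once:

    import os
    import sys
    import time
    from concurrent.futures import ThreadPoolExecutor

    CHUNK = 8 * 1024 * 1024  # 8 MiB per read request

    def read_range(path, offset, length):
        # Read bytes [offset, offset + length) of the file; return bytes read.
        fd = os.open(path, os.O_RDONLY)
        done = 0
        try:
            while done < length:
                buf = os.pread(fd, min(CHUNK, length - done), offset + done)
                if not buf:
                    break
                done += len(buf)
        finally:
            os.close(fd)
        return done

    def timed_read(path, threads):
        # Split the file into `threads` contiguous byte ranges and read them
        # concurrently; os.pread releases the GIL, so the reads overlap.
        size = os.path.getsize(path)
        stripe = (size + threads - 1) // threads
        start = time.monotonic()
        with ThreadPoolExecutor(max_workers=threads) as pool:
            jobs = [pool.submit(read_range, path, i * stripe,
                                min(stripe, size - i * stripe))
                    for i in range(threads)]
            total = sum(j.result() for j in jobs)
        rate = total / (time.monotonic() - start) / 1e6
        print(f"{threads} thread(s): {rate:,.0f} MB/s")

    if __name__ == "__main__":
        path = sys.argv[1]        # a large test file on the target device
        for n in (1, 2, 4, 8):    # compare single- and multi-threaded reads
            timed_read(path, n)

A quick run like this is dominated by the operating system's page cache unless the file is much larger than memory or direct I/O is used; the point is only that a fast device's bandwidth is reached by summing several concurrent streams, which is what HPSS multi-threaded parallel transfers do at much larger scale.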