The table below summarizes multi-platform support for the HPSS User Interface Clients, which are designed to optimize performance.
| Platform | Parallel FTP¹ | Client API¹ | PIO API² | VFS³ | FTP Clients⁴ |
|---|---|---|---|---|---|
| IBM AIX 6 | X | X | X | | X |
| Oracle Solaris 10 & 11 | X | X | X | | X |
| RHEL 5 & 6 (x86) | X | X | X | X | X |
| RHEL 5 & 6 (PowerPC) | X | X | X | X | X |
1. HPSS User Interface Client support for operating systems not listed in the table above may be provided by special bid.
2. The PIO API requires the Client API (see the sketch following these notes).
3. VFS servers are available on Red Hat Enterprise Linux 32-bit and 64-bit kernels.
4. GUI-based clients may not function correctly for some commands.
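To make note 2 concrete, here is a minimal sketch of a C client that reads a file through the Client API. The PIO calls (hpss_PIOStart, hpss_PIOExecute, and so on) operate on a file descriptor obtained the same way, which is why PIO cannot be used without the Client API. Function and header names follow the HPSS Programmer's Reference, but exact prototypes vary by HPSS release, so treat this as an illustrative sketch rather than a definitive example.

```c
/* Minimal sketch, assuming the HPSS client SDK is installed.
 * Function and header names follow the HPSS Programmer's Reference;
 * exact prototypes vary by HPSS release. */
#include <fcntl.h>
#include <stddef.h>
#include <hpss_api.h>   /* Client API: hpss_Open, hpss_Read, hpss_Close */

int read_first_block(const char *hpss_path, char *buf, size_t buflen)
{
    /* hpss_Open mirrors POSIX open() but adds Class-of-Service hints;
     * passing NULL for the hint arguments accepts the defaults. */
    int fd = hpss_Open((char *)hpss_path, O_RDONLY, 0,
                       NULL,   /* COS hints in        */
                       NULL,   /* COS hint priorities */
                       NULL);  /* COS hints out       */
    if (fd < 0)
        return fd;

    /* A plain Client API read. A PIO transfer would instead hand this
     * same descriptor to hpss_PIOExecute() after hpss_PIOStart() and
     * hpss_PIORegister() -- the dependency described in note 2. */
    int n = hpss_Read(fd, buf, buflen);

    hpss_Close(fd);   /* Client API again */
    return n;
}
```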
- **2017 HUF** - The 2017 HPSS User Forum will be hosted by the High Energy Accelerator Research Organization (Kō Enerugī Kasokuki Kenkyū Kikō), known as KEK, in Tsukuba, Japan, from October 16th through October 20th, 2017.
- **HPSS @ SC16** - SC16 is the 2016 international conference for high performance computing, networking, storage and analysis. SC16 will be held in Salt Lake City, Utah, from November 14th through 17th. Come visit the HPSS folks at the IBM booth and schedule an HPSS briefing at the IBM Executive Briefing Center.
- **2016 HUF** - The 2016 HPSS User Forum will be hosted by Brookhaven National Laboratory in New York City, New York, from August 29th through September 2nd.
- **HPSS @ ISC16** - ISC16 is the 2016 International Supercomputing Conference for high performance computing, networking, storage and analysis. ISC16 will be held in Frankfurt, Germany, from June 20th through 22nd. Come visit the HPSS folks at the IBM booth and schedule an HPSS briefing at the IBM Executive Briefing Center.
- **Swift On HPSS** - Leverage OpenStack Swift to provide an object interface to data in HPSS. Directories of files and containers of objects can be accessed and shared across ALL interfaces with this OpenStack Swift Object Server implementation (a client-side sketch follows this list). Contact Us for more information, or Download Now.
- **Capacity Leader** - ECMWF (European Centre for Medium-Range Weather Forecasts) has a single HPSS namespace with 216 PB spanning 257 million files.
- **File-Count Leader** - LLNL (Lawrence Livermore National Laboratory) has a single HPSS namespace with 62 PB spanning 940 million files.
- **ORNL** - Oak Ridge National Laboratory cut its redundant tape cost estimate by 75% with 4+P HPSS RAIT and enjoys large-file tape transfers reaching 872 MB/s.
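Because Swift On HPSS implements the standard OpenStack Swift REST API, any Swift-capable client can create and fetch objects. The sketch below uploads one object with libcurl; the endpoint URL, account, container, object name, and token are placeholders, not values from this page.

```c
/* Upload one object to a Swift-on-HPSS endpoint over the standard
 * OpenStack Swift REST API (PUT /v1/<account>/<container>/<object>).
 * The URL and token below are placeholders for site-specific values. */
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    FILE *src = fopen("data.bin", "rb");        /* hypothetical local file */
    if (!src)
        return 1;
    fseek(src, 0, SEEK_END);
    long size = ftell(src);
    fseek(src, 0, SEEK_SET);

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    curl_easy_setopt(curl, CURLOPT_URL,
        "https://swift.example.org:8080/v1/AUTH_acct/mycontainer/data.bin");
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);       /* HTTP PUT        */
    curl_easy_setopt(curl, CURLOPT_READDATA, src);    /* body from file  */
    curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t)size);

    /* Swift authenticates each request with a token from its auth service. */
    struct curl_slist *hdrs =
        curl_slist_append(NULL, "X-Auth-Token: <token>");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "upload failed: %s\n", curl_easy_strerror(rc));

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    fclose(src);
    return rc == CURLE_OK ? 0 : 1;
}
```

Per the Swift On HPSS description above, data stored this way can then be accessed and shared across the other HPSS interfaces listed in the table.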