Do you require rapid access to shared data between multiple servers within a Linux® high-performance computing (HPC) cluster on a Storage Area Network (SAN)? The HPE Clustered Extents File System is designed to provide simultaneous, high-speed shared access to data between clustered Linux servers connected to a SAN, where each server in the cluster has direct high-speed data channels to a shared set of disks. The servers share a single namespace within the cluster, so each server can see all files and can access them at local to near-local speeds. HPE Clustered Extents File System scales for bandwidth or I/O through additional storage or network connections, and provides high availability (HA) of data within a design that detects and automatically recovers from server or network failures.
Are your HPC and AI compute workloads bottlenecked by slow storage performance? WekaIO Matrix is a high-performance, scalable, parallel file system that is ideal for AI, technical computing, and mixed workloads. The flash-native, highly resilient, POSIX file system delivers the high IOPS and low-latency throughput needed for demanding compute requirements. WekaIO Matrix provides integrated policy-based tiering, so data can span from NVMe flash to object storage in a single namespace for easy management and cost-effective economics. Native support for the industry-standard S3 interface allows integration with both on-premises and cloud environments. Your organization made a significant investment in compute infrastructure to support your analytics workloads; don't let data accessibility become the bottleneck to your solution's overall productivity. WekaIO Matrix delivers the performance you need, so your data analysis pipelines will never be stalled waiting for data.
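The idea behind policy-based tiering is that placement decisions are driven by rules (for example, file age) rather than manual data movement. The following toy sketch illustrates an age-based placement policy in Python; the tier names, threshold, and helper function are illustrative assumptions for this sketch, not WekaIO Matrix's actual implementation or API.

```python
import time
from dataclasses import dataclass

# Illustrative tier labels (assumptions for this sketch).
FLASH, OBJECT = "nvme-flash", "s3-object"

@dataclass
class FileRecord:
    path: str
    last_access: float  # epoch seconds of last access

def choose_tier(record: FileRecord, now: float,
                cold_after_s: float = 7 * 86400) -> str:
    """Return the tier a file should live on under a simple age-based policy:
    files untouched longer than the threshold go to the object tier."""
    return OBJECT if (now - record.last_access) > cold_after_s else FLASH

now = time.time()
hot = FileRecord("/data/model.ckpt", now - 3600)            # accessed an hour ago
cold = FileRecord("/data/2019_logs.tar", now - 30 * 86400)  # a month old

print(choose_tier(hot, now))   # nvme-flash
print(choose_tier(cold, now))  # s3-object
```

A real tiering engine applies such policies continuously and transparently, so applications see one namespace regardless of where the bytes currently reside.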
Do you need to improve data management in your HPC and AI Linux® storage environment? The HPE Data Management Framework (DMF) provides more efficient utilization of storage infrastructure, reduced time to insight, and petabyte-scale backup with point-in-time restoration of data. A new architecture supports extensible metadata, enabling data to be tagged with attributes that can be queried to simplify the creation of data sets. Combined with data-set labeling, job-scheduler integration, and the built-in policy engine, data-intensive workflows can be automated and streamlined through automatic data-set creation, staging of data, and data movement for processing. This automated data management allows efficient utilization of storage infrastructure by removing stale data from defined data tiers, and provides a virtual storage space that appears unlimited in size. Data is retrieved automatically when needed, making storage look "bigger on the inside."
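The extensible-metadata concept above amounts to attaching queryable attributes to files and selecting data sets by attribute match. This minimal Python sketch shows the pattern with an in-memory catalog; the paths, attribute names, and `tag`/`query` helpers are hypothetical illustrations, not DMF's actual interface.

```python
from typing import Dict, List

# Hypothetical in-memory metadata catalog: path -> attribute dictionary.
catalog: Dict[str, Dict[str, str]] = {
    "/proj/run-001/out.h5": {"project": "climate", "stage": "raw"},
    "/proj/run-002/out.h5": {"project": "climate", "stage": "processed"},
    "/proj/run-003/out.h5": {"project": "genomics", "stage": "raw"},
}

def tag(path: str, **attrs: str) -> None:
    """Attach (or update) queryable attributes on a file's metadata entry."""
    catalog.setdefault(path, {}).update(attrs)

def query(**attrs: str) -> List[str]:
    """Return all paths whose metadata matches every given attribute --
    effectively defining a data set by query."""
    return [p for p, meta in catalog.items()
            if all(meta.get(k) == v for k, v in attrs.items())]

tag("/proj/run-003/out.h5", stage="processed")
print(query(project="climate"))   # both climate runs
print(query(stage="processed"))   # run-002 and the newly tagged run-003
```

In a framework like DMF, a policy engine would act on such query results, for example staging a matched data set to fast storage before a scheduled job runs.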