HPE Message Passing Interface (MPI) is an MPI development environment designed to enable the development and optimization of high performance computing (HPC) applications. It leverages optimized software libraries, runtime tools, and a scalable development environment to help customers tune and accelerate compute-intensive applications running on any HPE Linux-based cluster.
Do you require rapid access to shared data between multiple servers within a Linux® high-performance computing (HPC) cluster on a Storage Area Network (SAN)? The HPE Clustered Extents File System is designed to provide simultaneous, high-speed shared access to data between clustered Linux servers connected to a SAN, where each server in the cluster has direct high-speed data channels to a shared set of disks. The servers share a single namespace within the cluster, so each server can see all files and can access them at local to near-local speeds. HPE Clustered Extents File System can scale for bandwidth or I/O by adding storage or network connections, and it provides high availability (HA) of data within a design that detects and automatically recovers from server or network failures.
Are your HPC and AI compute workloads bottlenecked by slow storage performance? WekaIO Matrix is a high-performance, scalable, parallel file system that is ideal for AI, technical computing, and mixed workloads. The flash-native, highly resilient, POSIX file system delivers the high IOPS and low-latency throughput needed for demanding compute requirements. WekaIO Matrix provides integrated policy-based tiering, so data can span from NVMe flash to object storage in a single namespace for easy management and cost-effective economics. Native support for the industry-standard S3 interface allows integration with both on-premises and cloud environments. Your organization made a significant investment in compute infrastructure to support your analytics workloads; don't let data accessibility become the bottleneck to the overall productivity of your solution. WekaIO Matrix delivers the performance you need, so your data analysis pipelines will never be stalled waiting for data.
The HPE Performance Cluster Manager software is a fully integrated system management solution for all HPE high performance computing (HPC) clusters and supercomputers.
The software offers fast system setup from bare metal, comprehensive hardware monitoring and management, image management and software updates, power management, and more.
HPE Performance Cluster Manager reduces the time and resources you need to spend administering your systems and keeps them resilient and running as close to maximum efficiency as possible, so you can achieve a better return on your hardware investments.
Does your organization have NVIDIA GPU Cloud (NGC) ready platforms and need enterprise-level support?
Hewlett Packard Enterprise partners with NVIDIA® to provide NVIDIA NGC Support Services on HPE GPU-enabled systems that are validated as NGC-Ready.
NVIDIA NGC Support Services provide enterprise-grade support that enables NGC-Ready systems to run optimally, along with direct access to NVIDIA customer support to quickly address software issues and help reduce downtime.
Does your HPC organization need to develop code in-house?
HPE Cray Programming Environment is a fully integrated software suite with compilers, tools, and libraries designed to improve programmer productivity, application scalability, and performance.
Besides support for multiple programming languages, programming models, compilers, I/O libraries, and scientific libraries, the suite offers a variety of supported tools for areas including debugging, performance analysis, workload management, and environment setup.
It simplifies porting of existing applications with minimal recoding and changes to the existing programming models, making the transition to new hardware architectures and configurations easier.
The solution aims to enhance the developer experience by offering a whole-system view rather than just processor-specific tools. It gives developers intuitive behavior and enhanced application performance with minimal effort.
Does your HPC/AI environment need virtual GPU (vGPU) capability?
The NVIDIA® Virtual GPU (vGPU) and Virtual Compute Server (vCS) software enable the NVIDIA GPU to be virtualized to accelerate compute-intensive server workloads such as AI, deep learning (DL), machine learning, and HPC. NVIDIA GRID provides graphics at scale across the enterprise for exceptional productivity, security, and IT manageability, delivering a powerful virtual experience from the data center or cloud to any device. vCS provides accelerated GPU virtualization and GPU sharing and segmentation, allowing multiple virtual machines (VMs) to share a single GPU and maximizing utilization for AI/DL-intensive workloads. vCS delivers bare-metal performance with operational cost savings and improved manageability of VMs. By making GPU performance possible for every VM, vGPU technology enables users to work more efficiently and productively from the data center to the cloud.
Are you a scientist, researcher, or an HPC programmer at a national laboratory, research university, or a commercial organization who develops modeling and simulation applications in weather forecasting, climate modeling, high-energy physics, materials science, computational chemistry, computational biology, computational fluid dynamics, structural analysis, astrophysics, geophysical sciences, and similar fields?
Hewlett Packard Enterprise partners with NVIDIA to provide compiler support for those who utilize the NVIDIA HPC Software Development Kit. HPC SDK Compiler Support Services (HCSS) provide enterprise-grade support for the HPC compilers within the NVIDIA HPC SDK, specifically the NVFORTRAN, NVC++, and NVC compilers.