Do your HPC and AI applications need ever-increasing networking performance to address problems with massive datasets and complex, highly parallelized algorithms in an extreme-scale system? The HPE InfiniBand NDR Switches provide increased processing power and top-of-rack (TOR) density by delivering an impressive 64 ports of 400 Gb/s NDR InfiniBand in a standard 1U design. Obtain ultra-fast processing by utilizing NVIDIA® In-Network Computing technologies with SHARPv3 (the 3rd generation of NVIDIA SHARP technology). With NVIDIA port-split technology and support for up to 128 ports of 200 Gb/s, HPE InfiniBand NDR Switches enable small- to medium-sized deployments to scale with a two-level fat-tree topology while reducing power consumption, latency, and space requirements.
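To see why port splitting helps smaller deployments scale, consider the standard capacity formula for a two-level fat tree: with switches of radix k, the fabric can connect up to k²/2 endpoints at full bisection bandwidth. The sketch below is an illustrative calculation based on that general formula, not an HPE sizing tool; the port counts come from the switch specifications above.

```python
def fat_tree_endpoints(radix: int) -> int:
    """Maximum endpoints in a two-level (leaf/spine) fat tree
    built from switches of the given port count (radix),
    assuming full bisection bandwidth: radix**2 / 2."""
    return radix * radix // 2

# 64 ports of 400 Gb/s per switch (no port split)
print(fat_tree_endpoints(64))   # 2048 endpoints at 400 Gb/s

# Port split: 128 logical ports of 200 Gb/s per switch
print(fat_tree_endpoints(128))  # 8192 endpoints at 200 Gb/s
```

Splitting each 400 Gb/s port into two 200 Gb/s ports quadruples the endpoint count reachable in two switch levels, which is what lets a small or mid-size cluster avoid a third switching tier and the extra power, latency, and rack space it would require.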
Is your HPC networking solution able to meet your converged workload needs for today and tomorrow?
HPE Slingshot provides a modern, high-performance interconnect for HPC and AI clusters, delivering high bandwidth and low latency for HPC, ML, and analytics applications by bringing together the specialized requirements of HPC-optimized fabrics with the ubiquity of Ethernet. The result is a converged infrastructure with high performance on both HPC simulation codes and native IP applications, along with efficient, scalable access to data sources.
Building on Cray's specialized silicon, HPE Slingshot delivers consistent performance and low latency under load and at scale. This prepares you to efficiently serve the increasingly diverse users taking advantage of your HPC resources, and to do so without overprovisioning bandwidth or deploying multiple systems to avoid congestion on your most demanding workloads.
Does your high-performance computing (HPC) data center require high-speed fabric infrastructure? The HPE Apollo 100GbE 48-port Intel Omni-Path Architecture Unmanaged Switch is an integrated option for the HPE Apollo 6000 Gen10 System, with 24 downlink ports and 24 uplink ports. It cost-effectively supports large HPC clusters and delivers an exceptional set of high-speed connectivity features and functions, making it ideal for customers deploying HPC clusters based on HPE Apollo 6000 Gen10 Systems with Intel® Omni-Path technology.