Finance your purchase through HPEFS
- Click on 'Get Quote' to receive a quotation that includes financing provided by HPEFS.
- Or call HPEFS at +47-22-577706
Is your HPC networking solution able to meet your converged workload needs for today and tomorrow?
HPE Slingshot provides a modern, high-performance interconnect for HPC and AI clusters, delivering high bandwidth and low latency for HPC, ML, and analytics applications by combining the specialized requirements of HPC-optimized fabrics with the ubiquity of Ethernet. The result is a converged infrastructure with high performance on both HPC simulation codes and native IP applications, along with efficient, scalable access to data sources.
Building on Cray's specialized silicon, HPE Slingshot delivers consistent performance and low latency under load and at scale. This prepares you to efficiently serve an increasingly diverse set of users on your HPC resources, without overprovisioning bandwidth or deploying multiple systems to avoid congestion on your most demanding workloads.
Do your HPC and AI applications need ever-increasing networking performance to address problems with massive datasets and complex, highly parallelized algorithms in an extreme-scale system?
HPE InfiniBand NDR Switches provide increased processing power and top-of-rack (ToR) density by delivering 64 ports of 400 Gb/s InfiniBand in a standard 1U design. Obtain ultra-fast processing by utilizing NVIDIA® In-Network Computing technologies with SHARPv3 (the third generation of NVIDIA SHARP technology). With NVIDIA port-split technology and support for up to 128 ports of 200 Gb/s, HPE InfiniBand NDR Switches enable small- to medium-sized deployments to scale with a two-level fat-tree topology (see the sketch below) while reducing power, latency, and space requirements.
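As a rough, back-of-the-envelope illustration of that two-level fat-tree scaling claim (a sketch under simplified assumptions, not HPE or NVIDIA sizing guidance), the calculation below assumes a non-blocking leaf-spine design built from 64-port switches, with leaf ports split evenly between node downlinks and spine uplinks:

```python
# Back-of-the-envelope sizing for a non-blocking two-level (leaf-spine) fat tree.
# Assumptions (illustrative only, not vendor configuration rules):
#   - every switch has `radix` ports (64 x 400 Gb/s NDR in this sketch)
#   - leaf switches split their ports evenly between node downlinks and spine uplinks

def two_level_fat_tree_endpoints(radix: int) -> int:
    """Maximum endpoints in a non-blocking two-level fat tree built from
    switches of the given radix: radix/2 downlinks per leaf, and up to
    `radix` leaves (one link from each leaf to each spine port)."""
    return (radix // 2) * radix

if __name__ == "__main__":
    ndr_radix = 64  # 64 ports of 400 Gb/s per 1U switch
    print(two_level_fat_tree_endpoints(ndr_radix))      # 2048 nodes at 400 Gb/s
    # With 2 x 200 Gb/s port splitting on the leaf downlinks, the same fabric
    # can attach twice as many nodes at 200 Gb/s each, still non-blocking.
    print(2 * two_level_fat_tree_endpoints(ndr_radix))  # 4096 nodes at 200 Gb/s
```

Actual supported cluster sizes depend on cabling, blocking ratios, and the configuration rules for the specific switch models.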
Does your high-performance computing (HPC) data center require high-speed fabric infrastructure?
The HPE Apollo 100Gb 48-port Intel Omni-Path Architecture Unmanaged Switch is an integrated option in the HPE Apollo 6000 Gen10 System, with 24 downlink ports and 24 uplink ports. It cost-effectively supports large HPC clusters and delivers an exceptional set of high-speed connectivity features and functions. It is ideal for customers who deploy HPC clusters based on HPE Apollo 6000 Gen10 Systems using Intel® Omni-Path Architecture technology.
Do your HPC and AI applications need ever-increasing networking performance to address problems with massive datasets and complex, highly parallelized algorithms in an extreme-scale system?
NVIDIA Networking for HPE includes the NVIDIA® Spectrum-X SN5600 Switch, which is compatible with standard Ethernet fabrics and brings accelerated Ethernet to your data center without compromising between performance and feature set. The NVIDIA Spectrum-X SN5600 Switch offers intelligent algorithms and efficient resource sharing to enable high performance, consistently low latency, and support for advanced data center networking features, making it ideal for cloud networks and end-to-end data center fabrics. NVIDIA Networking for HPE features configurable 800 GbE ports in a dense 2U form factor and can support up to 128 ports of 400 GbE with bidirectional switching throughput of 51.2 Tb/s to easily address your data center networking requirements.
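To see how the figures quoted above fit together (an illustrative sketch only; the native 800 GbE port count is inferred from the quoted numbers rather than taken from a datasheet):

```python
# Illustrative arithmetic tying together the switch figures quoted above.
# Assumption: the native 800 GbE port count is derived from
# 51.2 Tb/s / 800 Gb/s = 64, consistent with 128 ports after 2x splitting.

PORT_SPEED_GBPS = 800   # native port speed, Gb/s
NATIVE_PORTS = 64       # inferred native 800 GbE port count in the 2U chassis
SPLIT_FACTOR = 2        # each 800 GbE port can run as 2 x 400 GbE

aggregate_tbps = NATIVE_PORTS * PORT_SPEED_GBPS / 1000
split_ports = NATIVE_PORTS * SPLIT_FACTOR
split_tbps = split_ports * (PORT_SPEED_GBPS // SPLIT_FACTOR) / 1000

print(aggregate_tbps)           # 51.2 Tb/s across 64 x 800 GbE
print(split_ports, split_tbps)  # 128 ports of 400 GbE, same 51.2 Tb/s aggregate
```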
*All pricing displayed is indicative; the reseller sets the final transactional price and may include other fees such as sales tax/VAT and shipping. The transactional price set by the reseller may vary from other resellers and the indicative price displayed. Indicative pricing may include limited-time promotional offers. HPE reserves the right to make pricing adjustments at any time for reasons including, but not limited to, changing market conditions, product discontinuation, restricted product availability, promotion end of life, and errors in advertisements.