Does your enterprise need to simplify management, reduce costs, and improve reliability and performance for high-performance computing (HPC) and AI workloads?
Built for the exascale era, the HPE Apollo 6500 Gen10 Plus System accelerates performance with NVIDIA® HGX A100 Tensor Core GPUs and AMD Instinct™ MI100 accelerators with Infinity Fabric™ to take on some of the most complex HPC and AI workloads. This purpose-built platform delivers enhanced performance with premier graphics processing units (GPUs), fast GPU interconnect, high-bandwidth fabric, and configurable GPU topology, along with rock-solid reliability, availability, and serviceability (RAS). Configure it with single- or dual-processor options for a better balance of processor cores, memory, and I/O. Improve system flexibility with support for 4, 8, 10, or 16 GPUs and a broad selection of operating systems and options, all within a customized design that reduces costs, improves reliability, and provides leading serviceability.
Do you need increased computing performance for high-performance computing (HPC) and deep learning?
The HPE Apollo 6500 Gen10 System is an ideal HPC and deep learning platform, providing unprecedented performance with industry-leading GPUs, fast GPU interconnect, high-bandwidth fabric, and a configurable GPU topology to match your workloads. The ability of computers to autonomously learn, predict, and adapt using massive data sets is driving innovation and competitive advantage across many industries and applications. The system delivers rock-solid reliability, availability, and serviceability (RAS) and includes up to eight GPUs per server, NVLink for fast GPU-to-GPU communication, support for Intel® Xeon® Scalable processors, and a choice of high-speed, low-latency fabrics, all workload-optimized through flexible configuration capabilities. While aimed at deep learning workloads, the system is also well suited to complex simulation and modeling workloads.
* Prices may vary based on local reseller.