More Information

  • Support for HPE Apollo 35 systems
  • Faster MPI performance for systems with NVIDIA® GPUs using Mellanox® InfiniBand remote direct memory access (RDMA)
  • A socket-splitting option that automatically divides processes across CPU sockets instead of filling one socket first (see the placement sketch below)

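The socket-splitting and placement features above are easiest to reason about when you can see where ranks actually land. The sketch below is a generic MPI/Linux check, not an HPE MPI-specific API: it prints each rank's host and current logical CPU via sched_getcpu(). The mpicc/mpirun invocations in the comment are assumed launcher names that vary by installation.

```c
/* placement_check.c - print which host, logical CPU, and rank each MPI
 * process lands on, so you can verify how a launcher placement policy
 * (such as socket splitting) actually distributed the job.
 * Generic MPI/Linux sketch, not an HPE MPI-specific API.
 *
 * Build/run (launcher name and flags depend on your MPI installation):
 *   mpicc placement_check.c -o placement_check
 *   mpirun -np 8 ./placement_check
 */
#define _GNU_SOURCE
#include <mpi.h>
#include <sched.h>   /* sched_getcpu() */
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char host[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(host, &len);

    /* sched_getcpu() reports the logical CPU the calling thread is
     * currently running on; compare it against your system topology
     * (e.g. from lscpu) to see which socket each rank occupies. */
    int cpu = sched_getcpu();

    printf("rank %d of %d on %s, logical CPU %d\n", rank, size, host, cpu);

    MPI_Finalize();
    return 0;
}
```
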
Customized MPI Library

The HPE Message Passing Interface (MPI) includes a customized MPI library that lets you take full advantage of the underlying server infrastructure.

It supports most major interconnects and fabrics, even across multiple generations, to efficiently manage MPI traffic for improved performance.[1]
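
One common way to see how well an MPI library drives a given interconnect is a simple ping-pong timing loop. The following minimal, generic MPI sketch (message size and iteration count are arbitrary choices, not HPE recommendations) reports the average half round-trip latency between two ranks.

```c
/* pingpong.c - minimal two-rank ping-pong timing loop, a common way to
 * gauge the latency an MPI library achieves over a given interconnect.
 *
 *   mpicc pingpong.c -o pingpong
 *   mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    const int iters = 1000;
    const int bytes = 8;              /* small message to expose latency */
    char *buf = malloc(bytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("half round-trip latency: %.2f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}
```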

Tune MPI Application Runtime Performance

The HPE Message Passing Interface (MPI) boosts the performance of applications built with any supported MPI library at runtime, with no need to recompile your code. Supported third-party libraries include Cray MPI, Intel MPI, IBM Spectrum MPI, OpenMPI, Mellanox X-MPI, MPICH, and MVAPICH.

Improved job management with optimized job placement, as well as prevention of MPI process migration.

Includes profiling tools to identify performance bottlenecks and load imbalances in MPI applications, as well as guided thread placement to improve application performance.
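
The kind of measurement such profiling tools automate can be approximated by hand: time a compute phase on every rank, then reduce the minimum, maximum, and average to expose load imbalance. The sketch below is generic MPI code; fake_work() is a hypothetical placeholder for an application's per-rank work, weighted so the imbalance is visible in the output.

```c
/* imbalance.c - time a compute phase on every rank and reduce min/max/avg
 * to expose load imbalance across the job. Generic MPI sketch; the "work"
 * here is a placeholder busy loop, heavier on higher ranks on purpose. */
#include <mpi.h>
#include <stdio.h>

/* Hypothetical stand-in for the application's per-rank work. */
static double fake_work(int rank)
{
    double s = 0.0;
    long n = 10000000L * (rank + 1);
    for (long i = 0; i < n; i++)
        s += 1.0 / (double)(i + 1);
    return s;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Barrier(MPI_COMM_WORLD);          /* start everyone together */
    double t0 = MPI_Wtime();
    volatile double sink = fake_work(rank);
    (void)sink;
    double elapsed = MPI_Wtime() - t0;

    double tmin, tmax, tsum;
    MPI_Reduce(&elapsed, &tmin, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
    MPI_Reduce(&elapsed, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    MPI_Reduce(&elapsed, &tsum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("compute phase: min %.3fs  max %.3fs  avg %.3fs  imbalance %.1f%%\n",
               tmin, tmax, tsum / size,
               100.0 * (tmax - tsum / size) / (tsum / size));

    MPI_Finalize();
    return 0;
}
```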

[1] Includes support for multi-rail Intel® Omni-Path and Mellanox® InfiniBand, HPE Superdome Flex Grid and TCP/IP.

ARM is a registered trademark of ARM Limited. Intel is a trademark of Intel Corporation in the U.S. and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. All other third-party trademark(s) is/are property of their respective owner(s).

