Qualcomm Cloud AI 100 Accelerator for HPE
Does your organization require optimized infrastructure for deploying trained AI/ML models at the edge? The Qualcomm Cloud AI 100 Accelerator for HPE dramatically improves server performance on AI/ML inference workloads, delivering faster insights and higher-accuracy data transformations across computer vision, natural language, and other leading AI/ML use cases. Based on the Qualcomm AI Core, this server accelerator is tailored for inference, supporting model customization and quantization to lower-precision datatypes for optimized performance and industry-leading power efficiency. With its low power consumption, the Qualcomm Cloud AI 100 Accelerator for HPE can increase performance while lowering operating costs.
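As a generic illustration of the quantization step mentioned above, the minimal sketch below converts an exported FP32 model to Int8 weights using ONNX Runtime's post-training dynamic quantization. It is not the Qualcomm Cloud AI 100 toolchain, and the file names are placeholders; the actual SDK and its APIs may differ.

```python
# Sketch: post-training quantization of model weights to 8-bit integers
# using ONNX Runtime. File names are hypothetical placeholders.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input="model_fp32.onnx",   # exported full-precision model (placeholder)
    model_output="model_int8.onnx",  # quantized model written here (placeholder)
    weight_type=QuantType.QInt8,     # quantize weights to Int8
)
```

Quantizing to lower-precision datatypes such as Int8 reduces memory footprint and lets inference hardware use its higher-throughput integer paths, which is the general mechanism behind the performance and power-efficiency gains described here.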
SKU # R9Q09A
What's New
- Compact, single-slot, low-profile, low-power PCIe accelerator optimized for AI inference infrastructure at the edge.
- Inference-optimized architecture based on the Qualcomm AI Core, leveraging more than a decade of power-efficient deep learning technology for inference computing.
- Achieves peak performance of 350 TOPS (trillion operations per second) at Int8 and 175 at FP16.
- High per-accelerator performance means fewer servers are needed for a given inference workload, which can dramatically lower infrastructure acquisition costs.
Purpose-designed for AI/ML workloads, the Qualcomm Cloud AI 100 Accelerator for HPE is a market leader in performance and power efficiency based on industry-standard benchmarks including MLPerf.