*RRP - Reseller Recommended Price. Prices provided in quotes by local resellers may vary.
Finance your purchase through HPEFS
Continue through checkout to submit a purchase request and select 'leasing' as your preferred method of payment. Alternatively, you may click on 'Get Quote' to receive a quotation that includes financing provided by HPEFS.
OR, call the HPEFS PC Express team at 1-888-277-5942
Estimated monthly payment based on 36 month fair market lease.
Financing and service offerings are available through Hewlett-Packard Financial Services Company and its subsidiaries and affiliates (collectively HPFSC) in certain countries and are subject to credit approval and execution of standard HPFSC documentation. Rates and terms are based on the customer's credit rating, offering types, services, and/or equipment type and options. Not all customers may qualify. Not all services or offers are available in all countries. Other restrictions may apply. HPFSC reserves the right to change or cancel this program at any time without notice.
Do you need to streamline the AI/ML deployment process? Do you need to support diverse AI frameworks and scalable infrastructure in a cloud or hybrid environment that often requires customized data protection? HPE Machine Learning Inference Software features user-friendly tools to deploy, update, and monitor models, helping you get value from AI/ML initiatives faster. Role-Based Access Controls (RBAC) and endpoint security provide additional protection for ML resources. Dramatically improve team efficiency by using consistent tooling and pre-trained models to focus more on model development and less on the complexities of getting models into production. By handling the intricacies of deployment, routing, and real-time monitoring, HPE Machine Learning Inference Software provides the agility needed to ship ML models quickly, iterate on them based on real-world feedback, and maintain high performance standards.
Get Started
Financing available through HPEFS
Offered by HPE Reseller
Maximize your HPE Machine Learning Inference Software
What's New
Create a simplified path to scalable production model deployments for MLOps or ITOps, using an intuitive graphical interface that removes the need for extensive Kubernetes experience.
Streamlined integration with Hugging Face and NVIDIA Foundation Models offers a zero-coding deployment experience for large language models (LLMs) directly from Hugging Face and NVIDIA NGC.
Seamless integration with NVIDIA AI Enterprise® includes NIM® microservices for enhanced inference on more than two dozen popular AI models from NVIDIA and partners.
Support pre-trained and bespoke models built on popular frameworks such as TensorFlow, PyTorch, scikit-learn, and XGBoost.
Benefit from integrated monitoring and logging for tracking model performance, usage metrics, and system health, facilitating proactive optimization.
Offer adaptable deployment across varied infrastructures with compatibility for many Kubernetes environments, including HPE Ezmeral, HPE GreenLake, AWS, Azure, Google Cloud, and on-premises setups.
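The zero-coding deployment flow described above amounts to submitting a small declarative spec that names a model source and scaling bounds. The sketch below is purely illustrative: the field names, the `hf://` source scheme, and the spec schema are assumptions for this example, not the product's actual API.

```python
import json

def build_deployment_spec(name, model_source, min_replicas=1, max_replicas=4):
    """Build a hypothetical deployment spec for an inference service.

    The schema here is invented for illustration; the real HPE Machine
    Learning Inference Software interface may differ.
    """
    return {
        "name": name,
        "model": {"source": model_source},          # e.g. a Hugging Face repo id
        "scaling": {"minReplicas": min_replicas,    # scale down when idle
                    "maxReplicas": max_replicas},   # scale up under load
    }

# Example: an LLM referenced directly from Hugging Face, no custom code needed.
spec = build_deployment_spec("chat-llm", "hf://meta-llama/Llama-3.1-8B-Instruct")
print(json.dumps(spec, indent=2))
```

In a zero-coding flow, the graphical interface would gather the same information as this spec and handle the Kubernetes details behind the scenes.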
Predictable, Dependable, Protected, and Monitored Deployment for Diverse Environments
HPE Machine Learning Inference Software can deploy models using an intuitive graphical interface and scale deployments based on load.
Customize performance with real-time monitoring of models and track predictions and statistics around deployment.
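Tracking predictions and deployment statistics like this can be pictured as a small per-deployment metrics recorder. The class below is a generic sketch of that idea, not the product's monitoring API.

```python
import statistics
from collections import Counter

class InferenceMetrics:
    """Minimal request/latency tracker, illustrating the kind of
    per-deployment statistics a monitoring layer might expose."""

    def __init__(self):
        self.latencies_ms = []
        self.status_counts = Counter()

    def record(self, latency_ms, status="ok"):
        """Record one inference request's latency and outcome."""
        self.latencies_ms.append(latency_ms)
        self.status_counts[status] += 1

    def summary(self):
        """Roll up request count, error count, and median latency."""
        return {
            "requests": len(self.latencies_ms),
            "errors": self.status_counts["error"],
            "p50_ms": statistics.median(self.latencies_ms),
        }

m = InferenceMetrics()
for ms in (12.0, 15.5, 11.2):
    m.record(ms)
m.record(250.0, status="error")
print(m.summary())  # → {'requests': 4, 'errors': 1, 'p50_ms': 13.75}
```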
Whether in an existing Kubernetes cluster, a private cloud, or even a hybrid cloud, HPE Machine Learning Inference Software provides consistent tooling across continually modernizing systems to meet your needs.
Industry-standard Helm charts are used to deploy into any Kubernetes-compatible platform, such as OpenShift, Rancher, EKS, AKS, or GKE, so any cloud can be used consistently.
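A Helm-based install like the one described is typically a single `helm upgrade --install` command. Since the actual chart name, repository, and values for HPE Machine Learning Inference Software are not given here, the sketch below only assembles a hypothetical command line (with placeholder chart and value names) rather than running it.

```python
import shlex

def helm_install_cmd(release, chart, namespace, values=None):
    """Assemble a `helm upgrade --install` command line as a string.

    `chart` and any --set overrides are placeholders; consult the
    product documentation for the real chart name and values.
    """
    cmd = ["helm", "upgrade", "--install", release, chart,
           "--namespace", namespace, "--create-namespace"]
    for key, val in (values or {}).items():
        cmd += ["--set", f"{key}={val}"]
    return " ".join(shlex.quote(part) for part in cmd)

# The same command shape works against OpenShift, Rancher, EKS, AKS, or GKE.
cmd = helm_install_cmd("mlis", "example-repo/mlis-chart", "mlis",
                       {"replicaCount": 2})
print(cmd)
```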
Out-of-box Support for NVIDIA Models and Tools
HPE Machine Learning Inference Software offers flexible, first-class support for NVIDIA GPUs, with an architecture that makes it easy to add support for continually modernizing systems.
Integration with NVIDIA's AI Enterprise (NVAIE) software suite, NVIDIA Inference Microservice (NIM) (utilizing Triton and TensorRT-LLM), and other AI inferencing techniques offers enhanced performance.
Built-In Enterprise-Class Security
HPE Machine Learning Inference Software executes workloads in your preferred environment, including cloud, hybrid, on-premises, or even air-gapped deployments, enabling models, code, and data to remain protected.
Use Role-Based Access Controls (RBAC) to authorize development and MLOps teams to collaborate and share ML resources and artifacts securely.
Protect deployment endpoints with enterprise-class security features that require advanced authentication, including OIDC and OAuth 2.0, to interact with models.
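Calling an endpoint protected this way comes down to attaching a bearer token obtained from the OIDC/OAuth 2.0 identity provider. In the sketch below the endpoint URL, token, and payload shape are all placeholders, and the request is only constructed, not sent.

```python
import json
import urllib.request

def build_inference_request(endpoint, token, payload):
    """Construct an authenticated inference request.

    `endpoint` and the payload shape are hypothetical; the bearer token
    would be issued by your OIDC / OAuth 2.0 identity provider.
    """
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(endpoint, data=data, method="POST")
    req.add_header("Authorization", f"Bearer {token}")  # token-gated endpoint
    req.add_header("Content-Type", "application/json")
    return req  # send with urllib.request.urlopen(req) when ready

req = build_inference_request(
    "https://models.example.com/v1/predict",  # placeholder endpoint
    "example-token",                          # placeholder token
    {"inputs": [[1.0, 2.0, 3.0]]},
)
print(req.get_header("Authorization"))
```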
Broad Model Compatibility
HPE Machine Learning Inference Software offers streamlined integration for specific large language models (LLMs) directly from Hugging Face and NVIDIA Inference Microservice (NIM) while enabling deployment of models from most frameworks.
Achieve increased flexibility using models from diverse frameworks such as TensorFlow, PyTorch, scikit-learn, and XGBoost to accommodate a broad range of pre-trained and custom models.
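Serving models from several frameworks behind one interface usually comes down to a thin adapter that normalizes each framework's predict call. The sketch below shows that general pattern with a toy stand-in model, since wiring up a real TensorFlow or XGBoost model is framework-specific; the class and method names here are illustrative, not part of the product.

```python
from typing import Protocol, Sequence

class Predictor(Protocol):
    """Common surface an inference server can rely on, regardless of
    whether the underlying model is TensorFlow, PyTorch, scikit-learn,
    or XGBoost."""
    def predict(self, rows: Sequence[Sequence[float]]) -> list: ...

class SklearnStyleAdapter:
    """Adapter for scikit-learn-style models (anything exposing .predict),
    normalizing outputs to plain Python floats."""
    def __init__(self, model):
        self.model = model

    def predict(self, rows):
        return [float(y) for y in self.model.predict(rows)]

class ThresholdModel:
    """Toy stand-in for a fitted estimator: labels a row 1 if its
    feature sum exceeds 5, else 0."""
    def predict(self, rows):
        return [1 if sum(r) > 5 else 0 for r in rows]

serving: Predictor = SklearnStyleAdapter(ThresholdModel())
print(serving.predict([[1.0, 2.0], [4.0, 4.0]]))  # → [0.0, 1.0]
```

The adapter layer is what lets one deployment pipeline treat pre-trained and custom models uniformly.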
Kubernetes is a registered trademark of Google LLC. NVIDIA® and CUDA are registered trademarks of NVIDIA Corporation in the U.S. and other countries. All other third party trademarks are property of their respective owners.