
Project HAMi: Heterogeneous AI Computing Virtualization Middleware for Kubernetes

Project HAMi is a CNCF sandbox project providing a virtualization middleware for heterogeneous AI computing resources, enabling efficient sharing and management of GPUs, NPUs, and other AI accelerators in cloud-native environments like Kubernetes.

Go · Added on June 11, 2025

1,739 Stars · 309 Forks

Project Introduction

Summary

Project HAMi is an open-source, vendor-agnostic middleware designed to virtualize and manage heterogeneous AI computing resources within Kubernetes clusters. It acts as a layer between container orchestration and AI hardware, facilitating efficient resource sharing, scheduling, and utilization for AI/ML workloads.

Problem Solved

Managing diverse and specialized AI computing hardware (like GPUs, NPUs) in dynamic cloud-native environments, especially with multiple users or applications needing concurrent access, is complex. Traditional methods lack flexible resource sharing, fine-grained allocation, and unified management across heterogeneous devices within orchestrators like Kubernetes.

Core Features

Heterogeneous Resource Virtualization

Virtualizes heterogeneous AI hardware (GPUs, NPUs, etc.) for fine-grained resource allocation and sharing among multiple containers.
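As a sketch of what fine-grained allocation looks like from the user's side: HAMi's published examples expose fractional GPU resources through Kubernetes extended resource names. The exact resource keys below (`nvidia.com/gpumem`, `nvidia.com/gpucores`) follow the project's documented NVIDIA integration, but should be checked against the README for your HAMi version.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-shared-pod
spec:
  containers:
    - name: worker
      image: nvidia/cuda:12.4.0-base-ubuntu22.04
      resources:
        limits:
          nvidia.com/gpu: 1        # number of virtual GPUs requested
          nvidia.com/gpumem: 3000  # device memory slice, in MiB
          nvidia.com/gpucores: 30  # share of compute, as a percentage
```

Two such pods can land on the same physical card as long as the sum of their memory and core requests fits within it.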

Kubernetes Native Integration

Provides a Kubernetes device plugin that integrates seamlessly with K8s scheduling to manage AI accelerators.

Resource Isolation and Multi-tenancy

Enables multi-tenancy and isolation of AI resources, allowing different users or applications to securely share the same hardware.
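The admission side of that sharing can be illustrated with a minimal Go sketch. This is not HAMi's actual implementation, just the core bookkeeping any such middleware needs: each physical device has a memory budget, and tenant requests are granted only while the sum of outstanding slices stays within it.

```go
// Minimal sketch (not HAMi's real allocator) of per-device, per-tenant
// fractional memory accounting for a shared accelerator.
package main

import (
	"errors"
	"fmt"
)

// Device tracks how much of one accelerator's memory is already granted.
type Device struct {
	TotalMiB int
	usedMiB  int
	owners   map[string]int // tenant -> MiB granted (illustrative)
}

func NewDevice(totalMiB int) *Device {
	return &Device{TotalMiB: totalMiB, owners: map[string]int{}}
}

// Allocate grants memMiB of the device to tenant, or fails if the
// device would be over-committed.
func (d *Device) Allocate(tenant string, memMiB int) error {
	if d.usedMiB+memMiB > d.TotalMiB {
		return errors.New("device over-committed")
	}
	d.usedMiB += memMiB
	d.owners[tenant] += memMiB
	return nil
}

// Release returns a tenant's entire slice to the pool.
func (d *Device) Release(tenant string) {
	d.usedMiB -= d.owners[tenant]
	delete(d.owners, tenant)
}

func main() {
	gpu := NewDevice(16384) // a 16 GiB card
	fmt.Println(gpu.Allocate("team-a", 8192)) // <nil>
	fmt.Println(gpu.Allocate("team-b", 8192)) // <nil>
	fmt.Println(gpu.Allocate("team-c", 1))    // device over-committed
	gpu.Release("team-a")
	fmt.Println(gpu.Allocate("team-c", 4096)) // <nil>
}
```

In the real system this accounting is enforced at the container runtime and driver level, not just at scheduling time, so tenants cannot exceed their granted slice even after placement.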

Tech Stack

Go
Kubernetes Device Plugin API
Container Runtime Interface (CRI)
gRPC
Vendor hardware APIs (e.g., CUDA, ROCm, depending on the integration)

Use Cases

HAMi provides significant value in scenarios requiring efficient utilization and flexible sharing of diverse AI hardware in containerized environments:

Multi-tenant AI Cluster Resource Sharing

Details

In a multi-tenant Kubernetes cluster, multiple teams or users need access to a shared pool of heterogeneous AI accelerators (e.g., NVIDIA GPUs, Ascend NPUs). HAMi can virtualize these resources, allowing each user to request and receive a fraction of a device, or a specific device type, without interfering with others.

User Value

Improved resource utilization, reduced hardware costs, secure multi-tenancy, and simplified user access to AI accelerators.

Orchestrating Diverse AI/ML Workloads

Details

A platform runs various AI/ML applications (training, inference, data processing) with different hardware requirements. HAMi enables centralized scheduling and allocation of the most suitable heterogeneous resource to each application's container.
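Hardware matching of this kind is typically expressed declaratively on the pod. The sketch below uses the device-type annotation style described in HAMi's NVIDIA documentation; the exact annotation key and the container image are assumptions to verify against the project's README.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: train-job
  annotations:
    # Assumed annotation key; confirm the exact name in the HAMi README.
    nvidia.com/use-gputype: "A100,V100"  # only schedule onto these card types
spec:
  containers:
    - name: trainer
      image: my-registry/trainer:latest  # hypothetical image
      resources:
        limits:
          nvidia.com/gpu: 1
```

The scheduler then filters candidate nodes and devices by type before applying its normal fractional-resource fitting.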

User Value

Efficient scheduling, optimal hardware matching for workloads, and streamlined deployment of heterogeneous AI tasks.
