Announced at Nvidia GTC this week, the partnership aims to deliver high-performance AI infrastructure with full tenant isolation and automated lifecycle control. It will do so by combining OpenNebula’s cloud management and virtualisation platform with Nvidia accelerated computing, networking and bare-metal lifecycle management solutions for enterprises, HPC centres and neocloud providers.
The news comes shortly after OpenNebula Systems announced its validation by Nvidia to deliver a fully integrated, AI-ready cloud infrastructure built on Nvidia Spectrum-X Ethernet networking.
“AI factories require tight integration across compute, networking and infrastructure automation,” said Warren Barkely, VP of DGX Cloud at Nvidia. “OpenNebula’s work with Nvidia technologies, including NCX Infra Controller, helps organisations build scalable, multi-tenant AI environments with greater automation and operational control.”
The integrations cover GPU virtualisation, network offload and automated bare-metal provisioning.
OpenNebula supports Nvidia GB200 NVL4 GPUs via PCI passthrough to provide virtual machines with direct hardware access and preserve native performance. Likewise, multi-instance GPU (MIG) support enables hardware-level partitioning, allowing multiple tenants to share accelerators securely while maintaining predictable resource allocation.
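For context, PCI passthrough in OpenNebula is requested through the `PCI` attribute of a VM template, matched against the host's monitored PCI devices. A minimal sketch is shown below; the vendor and class IDs are illustrative placeholders (Nvidia's standard PCI vendor ID and the 3D-controller class), not confirmed GB200 NVL4 identifiers.

```
# Fragment of an OpenNebula VM template requesting GPU passthrough.
# The VENDOR/CLASS filters below are illustrative; actual values should
# be taken from the host's monitored PCI device list (onehost show).
NAME   = "ai-training-vm"
CPU    = 16
MEMORY = 65536

PCI = [
  VENDOR = "10de",   # Nvidia PCI vendor ID
  CLASS  = "0302"    # 3D controller device class
]
```

The scheduler then places the VM only on hosts exposing a matching free device, which is what gives the guest direct, near-native access to the accelerator.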
Nvidia BlueField DPUs are also fully managed within OpenNebula’s control plane to offload network and security tasks from host CPUs. The companies said per-tenant traffic enforcement, containerised network functions and programmable switching policies reduce CPU load, strengthen isolation and improve throughput across AI, telco and edge environments.
Additionally, a unified workflow, integrating Nvidia NCX Infra Controller, combines lifecycle automation with strong governance, supporting rapid scaling and consistent operations for enterprise, HPC and sovereign AI initiatives.
Demonstrations at Nvidia GTC will highlight how AI infrastructure can be deployed from bare metal to fully operational, multi-tenant AI cloud services while maintaining strict performance, isolation and governance standards.
“Our focus is to provide a unified control plane for AI infrastructure that preserves hardware performance while introducing strong governance and lifecycle automation,” said Ignacio M. Llorente, managing director at OpenNebula Systems.
“Through deep integration with Nvidia technologies, we enable organisations to deploy AI Factories that meet enterprise, HPC and sovereign requirements with predictable performance and operational control.”