The move is part of the launch of NVLink Fusion, a new interconnect chip designed to enable semi-custom AI infrastructure.
Unveiled at Computex 2025, the technology allows CPU and GPU makers — including MediaTek, Marvell, and Astera Labs — to tightly integrate their silicon with Nvidia’s, scaling performance for workloads such as large model training and agentic AI inference.
“A tectonic shift is underway: for the first time in decades, data centres must be fundamentally rearchitected; AI is being fused into every computing platform,” said Jensen Huang, founder and CEO of Nvidia.
“NVLink Fusion opens Nvidia’s AI platform and rich ecosystem for partners to build specialised AI infrastructures.”
Nvidia touts the new interconnect offering as a way for so-called AI factories, or specialist AI data centres, to scale out to millions of GPUs using any ASIC, combined with Nvidia’s end-to-end networking stack.
“With the ability to connect our custom processors to Nvidia’s rack-scale architecture, we’re advancing our vision of high-performance, energy-efficient computing to the data centre,” said Cristiano Amon, president and CEO of Qualcomm Technologies.
“Directly connecting our technologies to Nvidia’s architecture marks a monumental step forward in our vision to drive the evolution of AI through world-leading computing technology — paving the way for a new class of scalable, sovereign and sustainable AI systems,” said Vivek Mahajan, CTO at Fujitsu.
Nvidia has traditionally kept its interconnect technology in-house, a stance that prompted rivals to team up on an open alternative to its proprietary NVLink.
In addition to opening up support for partner CPUs, Nvidia revealed that its new Grace CPU C1 has been expanded to support edge, telco, storage and cloud deployments.

The C1, the chipmaker’s upcoming server-grade CPU offering, was front and centre at Computex, with Nvidia touting it as a viable hardware solution for distributed and power-constrained environments.
Nvidia claims the C1 delivers a 2x improvement in energy efficiency compared with traditional CPUs, and that it can be combined with the company’s RAN offerings to power distributed AI-RAN.
