Shipments are expected to start in 2026. The contract covers Nvidia’s GPUs as well as networking and specialised inference chips, which AWS plans to use to accelerate large-scale AI workloads across its cloud services.
Nvidia’s technology has long been a cornerstone for hyperscalers, and the scale of this deal underlines its dominance in high‑performance AI compute.
The partnership builds on Nvidia’s existing footprint in government and enterprise clouds. Analysts note that such long-term supply agreements are becoming crucial as providers race to meet growing demand for AI-powered applications without hitting hardware bottlenecks.
AWS has balanced the deal with ongoing investment in its own silicon, but the agreement with Nvidia ensures immediate capacity for AI training and inference workloads. Industry sources say this mix of Nvidia and in-house chips gives AWS flexibility to scale AI services efficiently while managing costs and operational risk.
“It’s a substantial commitment from both sides, reflecting how critical reliable hardware supply is to AI operations,” said an industry analyst familiar with the deal.
The arrangement reflects a wider industry trend: hyperscale cloud providers are securing long-term commitments for specialised chips to underpin AI infrastructure.
For operators, it highlights the growing interdependence between software, AI models, and the underlying compute, where supply chain and performance considerations are as strategic as software capability.
For AWS, the deal ensures a steady pipeline of AI-capable hardware to support its expanding portfolio of generative and inference services, reinforcing its position in the increasingly competitive cloud AI market.