Expanding a partnership already worth billions of dollars in commitments, Meta will leverage CoreWeave’s AI cloud platform to scale inference workloads.
The goal is to bolster Meta’s development and deployment of AI, a particularly timely move given the social media giant’s recent announcement of Muse Spark, its new AI model.
“This is another example that leading companies are choosing CoreWeave’s AI cloud to run their most demanding workloads,” commented Michael Intrator, co-founder, CEO and chairman of CoreWeave.
The dedicated capacity will be deployed across multiple locations and will include some of the initial deployments of the Nvidia Vera Rubin platform. CoreWeave relies heavily on Nvidia chips in its data centres, not least because these chips have become highly sought-after across the technology industry.
Both companies said a more distributed approach is designed to optimise performance, resilience and scalability for Meta’s AI operations.
The new agreement is part of a broader industry shift, as demand for high-performance infrastructure continues to accelerate, particularly for infrastructure that supports increasingly complex, large-scale AI workloads.
Currently, Meta is one of the biggest AI infrastructure spenders, budgeting billions of dollars in capital expenditure, alongside data centre operating expenses, to fuel the technology.
The company has also committed to scaling its own footprint to reduce its reliance on third-party cloud providers. This includes plans for a new series of internally developed AI chips, as Meta moves to strengthen control over the infrastructure powering its rapidly expanding AI workloads.