Data Centres

Meta outlines roadmap for new generation of in-house AI chips

12 March 2026
Meta has unveiled plans for a new series of internally developed AI chips as the company moves to strengthen control over the infrastructure powering its rapidly expanding AI workloads.

The social media and technology giant said it is developing multiple new generations of its Meta Training and Inference Accelerator (MTIA), custom silicon designed to run large-scale AI applications more efficiently inside its data centres.

The roadmap includes four upcoming chips — the MTIA 300, 400, 450 and 500 — which the company intends to deploy on an accelerated release cycle over the next several years. The new generations will initially focus on inference workloads: the stage at which trained AI models generate outputs such as recommendations, rankings or generative content.

Meta said its first MTIA deployments are already supporting recommendation systems across its platforms, including Facebook and Instagram, where AI models rank and personalise content feeds and advertising.

The new chips are expected to play a growing role as the company scales generative AI services and integrates large language models more deeply into its products. AI inference at Meta’s scale, serving billions of users, requires vast compute capacity across global data centre infrastructure.

By designing its own silicon, Meta aims to improve performance efficiency while reducing dependence on third-party suppliers. Most AI infrastructure today relies heavily on GPUs from vendors such as Nvidia, which have become the dominant processors for training and running large AI models.

Custom accelerators allow hyperscalers to optimise hardware for specific internal workloads, potentially delivering improvements in power efficiency, cost and throughput compared with general-purpose GPUs.

The MTIA architecture combines specialised compute units with high-bandwidth memory and networking capabilities designed for distributed AI processing across data centre clusters.

Meta has said the chips are intended to complement rather than replace external processors. The company continues to deploy large volumes of GPUs as it expands its AI compute footprint, but in-house silicon is expected to take on a larger share of inference tasks over time.

