AI

Nvidia CEO predicts $1tn compute demand as AI reaches ‘platform shift’

16 March 2026
7 minutes
Jensen Huang redefines AI, data and compute at Nvidia GTC 2026, sharing that the industry is transitioning towards a new compute platform shift as demand soars.
Image: Nvidia

Nvidia continues to thrive under founder and CEO Jensen Huang’s leadership, with the continued AI boom to thank for its success.

Taking the stage in San Jose for Nvidia GTC 2026, Huang shared the company’s latest breakthroughs in AI and accelerated computing, demonstrating how agentic AI and AI Factories could power the next generation of intelligent systems.

One such revelation was that Nvidia expects to make US$1 trillion from AI chips through 2027.

“AI is able to do productive work, and therefore, the inflection point of inference has arrived,” Huang explained. “AI now has to think. In order to think, it has to inference … it’s way past training now.

“AI is going to be much, much faster than us.”

The CUDA comeback: Huang calls for ‘new approach’ as Moore’s Law ‘runs out of steam’

Huang explained during his keynote that the most challenging thing to achieve is installed base, but – after 20 years – Nvidia is now accelerating the installed base of CUDA (Compute Unified Device Architecture), its proprietary parallel computing platform and programming model.

In something of a ‘CUDA renaissance’, Huang explained that CUDA is now integrated into every single ecosystem.

“Driving down computing cost ultimately encourages new growth,” he explained. “When you accelerate data processing, when you accelerate computing, you get the benefit of speed, you get the benefit of scale. Most importantly, you also get the benefit of cost.

“All of these come together as one.”

To enhance this, Nvidia is working with IBM to accelerate watsonx with Nvidia cuDF to support Nestlé’s supply chain – something that Huang said is accelerated computing for the era of AI.

“Data is the ground truth that gives AI context on it,” he explained. “AI needs rapid access to massive datasets. Today’s CPU data processing systems can’t keep up.”

He added: “With accelerated watsonx.data running on NVIDIA GPUs, Nestlé can run the same workload five times faster at 83% lower cost. The next computing platform has arrived, accelerated computing for the era of AI.”
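Huang’s “five times faster at 83% lower cost” claim can be translated into a rough performance-per-dollar figure. A back-of-envelope sketch (this is our arithmetic on the quoted numbers, not Nvidia’s or IBM’s own methodology):

```python
# Figures quoted in the keynote: 5x faster, 83% lower cost.
speedup = 5.0
cost_factor = 1.0 - 0.83  # workload now costs 17% of the original

# Throughput per dollar improves by speedup divided by relative cost.
perf_per_dollar = speedup / cost_factor
print(round(perf_per_dollar, 1))  # roughly a 29x gain per dollar spent
```

In other words, taken at face value, the two quoted figures compound to nearly a thirty-fold improvement in work done per unit of spend.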

Working with the Dell AI Data Platform, Nvidia is able to integrate Nvidia cuVS and Nvidia cuDF. Huang explained that accelerating data processing delivers benefits in both scale and cost.
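cuDF mirrors the pandas API, which is why it can slot into existing data pipelines of the kind described above. A minimal sketch of such a workload – the table and column names are illustrative, not Nestlé’s actual data – written in pandas, which cuDF’s `cudf.pandas` accelerator mode can run on GPUs without code changes:

```python
import pandas as pd  # with cuDF installed, `python -m cudf.pandas` runs this on the GPU

# Hypothetical supply-chain table (illustrative data only).
df = pd.DataFrame({
    "warehouse": ["A", "A", "B", "B"],
    "units": [100, 50, 200, 25],
})

# Aggregate stock per warehouse - the kind of groupby that
# accelerated data processing speeds up at scale.
totals = df.groupby("warehouse")["units"].sum()
print(totals["A"], totals["B"])
```

The point of the drop-in design is that the same dataframe code scales from a laptop prototype to GPU-accelerated processing of the massive datasets Huang describes.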

“Moore’s Law has run out of steam. We need a new approach,” he said. “Accelerated computing allows us to take these giant leaps forward.

“Nvidia is an algorithm company … because our reach is so large and our installed base is so large, we can reduce the computing cost, increasing the scale, increasing the speed for everybody continuously.”

Turbocharging the building blocks of AI

Huang took pride in Nvidia’s ability to support every industry, including the telco industry – which he described as “one of the world’s infrastructures” that will be “completely reinvented” as a “future AI infrastructure platform,” acknowledging the company’s AI-RAN partnership.

“We are an algorithm company. That’s what makes us special,” he added. “[Our libraries] are the crown jewels of our company.”

As the scale of investment continues to soar, Huang said Nvidia continues to partner with AI-native companies, which need vast amounts of compute to grow. The industry, he explained, is undergoing a strong AI-led transition.

“We are now at the beginning of a new platform shift,” he said. “Everything that we do is changing … the meaning of computing altogether.”

Looking ahead to 2027 on the roadmap, Huang said he could see at least US$1 trillion in computing demand in the year ahead.

He added: “I believe that computing demand has increased by 1,000,000 times in the last two years. Breakthroughs lead to entirely new markets, which build new ecosystems around them with other companies that join, which creates a larger installed base.”

Some of Nvidia’s latest chip announcements included the Vera Rubin platform, which Huang described as “super-charging the era of agentic AI.”

He said: “Seven chips, five rack scale computers, one revolutionary for agentic AI. 40,000,000 times more compute in just ten years.”

Nvidia has designed the new CPU for extremely high single-threaded performance, with high data output. Huang said it is not only good at data processing but has also been optimised for “extreme” energy efficiency.

“AI wants the tools to be as fast as possible,” he said. “It’s cooled by hot water, which takes the pressure off of the data centre, takes all of that cost and all of that energy that’s used to cool the data centre and makes it available for the system.”

He added: “We’re the only company in the world that has built a sixth generation scale up switching system … This is sure to be a multi-billion-dollar business for us.”

Likewise, the Rubin Ultra chip will enable Nvidia to connect 144 GPUs in one domain.

“This is the most important … for the future of AI Factories,” Huang said. “This is the power of extreme co-design.”

Partnerships continue to drive progress

Huang’s comments come as Nvidia is in the middle of a dealmaking spree, including its agreement with AI inference chip designer Groq. Nvidia and Nebius also strategically partnered last week to develop and deploy the next generation of hyperscale cloud for the AI market.

More partnership announcements are expected as the GTC event continues.

Nvidia and its partners are working to build an AI ecosystem to facilitate AI infrastructure worldwide. This is to ensure maximum resiliency, efficiency and throughput.

“We want to make sure that these AI Factories come together, designed in the best possible way,” Huang said. “Most of us technology vendors, in the past, never met each other until the data centre. That can’t happen [anymore].”

He added: “We have a great ecosystem of partners … Nvidia is a platform company. We support every phase of the AI lifecycle.”

During his keynote address, he also described Nvidia as the world’s “first vertically integrated, but horizontally open computing company”.

He explained: “There is no other way. The only way for us to accelerate applications going forward and continue to bring tremendous speed up, tremendous cost reduction is through application or domain specific acceleration.”

As the AI industry continues to boom, Huang is eager to keep positioning Nvidia as the ‘middleman’ to turbocharge the technology. A significant part of that, he said, is accelerated computing.

“This is going to become a multi trillion dollar industry offering not just tools for people to use, but agents that are specialised,” he explained.

“Today, we’ve reinvented computing. This is the beginning of something very, very big.”

