Speakers
Petter Tommeraas, managing director, data centre services – Aker Horizons ASA (moderator)
Phil Lawson-Shanks, chief innovation officer – Aligned
Vladimir Prodanovic, principal program manager – Nvidia
Abed Jishi, chief technology officer – Hscale
Dean Nelson, founder and chairman – Infrastructure Masons
Tommeraas opened with a stark reflection on how far the industry has come: from servers stashed in hallways in the 1990s to today’s megawatt-scale deployments.
“The last two years have seen more change in the data centre industry than the previous 25 combined, especially when you think about cooling and power,” he said, setting the tone for the discussion to come.
From 6MW to 24MW: Scaling up and speeding ahead
Vladimir Prodanovic, principal program manager at Nvidia, detailed the rapid evolution of GPU-based workloads and the resulting demand for ever-denser infrastructure.
“In 2023, we saw colo clusters reach 9 to 12 megawatts, and now we’re heading beyond that. With rack densities approaching 100kW, and some deployments using 576 GPUs, the scale is unprecedented,” said Prodanovic.
This transformation, he noted, is reshaping how data centres are built. “We used to aim for 300 to 600kW rows. Now we’re designing for megawatt rows, which might host eight racks today but could be reconfigured for just two super-dense racks in two years.”
Balancing power, purpose and practicality
Dean Nelson, founder and chairman of Infrastructure Masons, offered a dose of realism about the long-term viability of such infrastructure. Through his company, Cato Digital, he’s advocating for extended lifecycle use of older chips, especially for inference workloads.
“Training requires massive GPU clusters, but 90% of future workloads will be inference,” said Nelson. “You don’t need a Ferrari for that. Enterprises will want cost-effective, secure, fine-tuned models running on second-life GPUs.”
Nelson also stressed the practical implications of increased density: “A single 80kW GB200 rack still produces over 20kW of air-cooled heat. That’s higher than the average across today’s data centres. We’re going to need a blend of air and liquid cooling for the foreseeable future.”
Adaptive, modular, and dual-purpose by design
Phil Lawson-Shanks, chief innovation officer at Aligned, explained how his team is adapting to this fast-moving landscape with modular infrastructure.
“We’ve designed our systems to support both liquid and air cooling from the outset. That flexibility allows us to quickly repurpose a CPU hall for high-density AI,” said Lawson-Shanks.
“One client started with a 6MW hall and came back asking for 9MW. We know they’ll eventually want 12 or even 24MW. Our adaptive modularity allows us to keep up.”
He noted that liquid cooling is no longer an exception but a growing expectation, particularly as hyperscalers seek latency-sensitive inference capabilities closer to users. “This resurgence of the edge is going to be denser than anything we’ve seen,” he added.
Intelligent operations: From humans to AI agents
As power and cooling systems become more complex, operational agility is under the spotlight. Abed Jishi, CTO at Hscale, warned that traditional responses are too slow for today’s high-density environments.
“With liquid cooling and compact racks, the margin of error is razor-thin. We’re embedding AI agents directly into operations to monitor workloads and environments in real time,” he said.
However, Jishi emphasised that automation won’t reduce staffing needs; rather, it changes the skillsets required. “Tomorrow’s data centre engineers need to understand electrical and mechanical systems, but also software and data analytics.”
Prodanovic echoed the call for upskilling. “We’re seeing sovereign AI cluster initiatives globally, and their first question is always about training people. Our reference designs for power, cooling, and monitoring are all available to partners under NDA — but the know-how needs to be shared and localised.”
The challenge of standardisation and risk
Another recurring theme was the lack of unified standards in emerging high-density architectures. “There isn’t yet a universally agreed design for the secondary liquid loop,” said Lawson-Shanks. “Everyone’s tweaking things — from pressure to return temperatures. It’ll take a few design cycles and maybe some talent swaps between hyperscalers before a consensus forms.”
Meanwhile, increasing complexity raises liability concerns. “As providers begin delivering liquid all the way to the rack, operational boundaries blur. Who’s responsible when equipment fails?” asked Nelson. “We need to redesign SLAs and ensure technicians are ready to handle this new level of risk.”
Sustainability: Still on the agenda?
Despite the excitement around AI, panellists were unanimous that sustainability must remain central. “There’s a real risk the AI revolution could derail net-zero targets — at least in the short term,” warned Tommeraas.
Lawson-Shanks acknowledged the tension, but said sustainability remains non-negotiable. “We take it seriously, from embodied carbon in concrete to lifecycle tracking of our generators. Some clients now demand environmental product declarations (EPDs) from their vendors, and that pressure is pushing the supply chain to evolve.”
Looking ahead: More density, more diversity
If there’s one clear takeaway from the panel, it’s that tomorrow’s data centres must be more adaptable, intelligent, and inclusive than ever before.
Whether through modular design, GPU repurposing, advanced cooling integration, or AI-assisted operations, data centre leaders are being challenged to think beyond traditional models. As Nelson put it, “Inference will be the everyday workload, used by every enterprise. We’re only just scratching the surface.”
And as Prodanovic concluded, there’s no silver bullet. “We don’t believe a single design will win. It’s about knowledge sharing, flexibility, and being ready to evolve — because the pace of change is only accelerating.”